I want to add here that different databases require different strategies. Let's compare MySQL with InnoDB and PostgreSQL, for example.
InnoDB
InnoDB tables are basically a B-tree index on the primary key, extended so that each index entry also contains the row data. Physical-order scans are not supported; all scans happen in logical order. This means two things:
A sequential scan in InnoDB generates a lot of random disk I/O, and
The primary key index must be traversed even when a secondary index is used, because secondary index entries refer to primary key values rather than to physical row locations.
Primary key lookups are faster in this model than in any other approach.
Because of this, it is very important to index enough columns on any table larger than a few pages. The typical rule: index everything you want to filter by.
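As a minimal sketch of that rule, the script below creates an InnoDB table with secondary indexes on the columns we expect to filter by, then uses EXPLAIN to check whether a filter can use one of them or falls back to a full (logical-order) scan of the clustered index. It assumes a local MySQL server and the mysql-connector-python package; the orders table, its columns, and the connection credentials are all hypothetical.

```python
import mysql.connector

# Connection details are placeholders for whatever your environment uses.
conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="test"
)
cur = conn.cursor()

# InnoDB clusters rows on the primary key; the secondary KEYs below cover
# the columns we expect to filter by (customer, status/date).
cur.execute("""
    CREATE TABLE IF NOT EXISTS orders (
        id          BIGINT PRIMARY KEY,
        customer_id BIGINT NOT NULL,
        status      VARCHAR(16) NOT NULL,
        created_at  DATETIME NOT NULL,
        KEY idx_customer (customer_id),
        KEY idx_status_created (status, created_at)
    ) ENGINE=InnoDB
""")

# EXPLAIN reports which index (if any) the optimizer would use for this filter.
cur.execute("EXPLAIN SELECT * FROM orders WHERE customer_id = 42")
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()
```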
PostgreSQL
PostgreSQL uses heap files, one file per table (large tables may span several files), with tuples allocated from the heap's free space. Physical-order scans are supported. For a logical-order scan to work, an index must be added.
Primary keys in PostgreSQL are basically a special case of unique indexes where no value may be NULL. UNIQUE constraints are enforced using implicit indexes, and several other index types are supported, each allowing different operations in the index.
This means:
Primary key lookups, assuming a reasonably large table, require hitting both an index file and a table file. This is significantly slower than MySQL's approach, where only the index must be traversed because the row is contained in the index.
Physical-order scans perform much better, reducing random disk I/O when a significant number of rows must be processed.
Secondary index scans perform better than MySQL's because only one index must be traversed to reach the row in the heap.
In this model, indexes are often still necessary, but the planner has more freedom in deciding when to use one, and the implications of not using an index are usually less severe. Tables are optimized for general access patterns (rather than specializing in primary key lookups), so fewer indexes are required.
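A rough way to see that planner freedom is to compare EXPLAIN output for a highly selective predicate and a non-selective one: the first typically gets an index scan (index file plus heap), the second a sequential scan of the heap in physical order. The sketch below assumes a local PostgreSQL server and the psycopg2 package; the table, index, and credentials are hypothetical.

```python
import psycopg2

# Connection string is a placeholder for your own environment.
conn = psycopg2.connect("dbname=test user=app password=secret host=localhost")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS orders (
        id          bigint PRIMARY KEY,   -- backed by an implicit unique index
        customer_id bigint NOT NULL,
        status      text   NOT NULL
    )
""")
cur.execute("CREATE INDEX IF NOT EXISTS idx_orders_status ON orders (status)")
conn.commit()

# Compare the plan for a selective filter vs. one matching most of the table.
for predicate in ("status = 'cancelled'", "status <> 'cancelled'"):
    cur.execute("EXPLAIN SELECT * FROM orders WHERE " + predicate)
    print(predicate)
    for (line,) in cur.fetchall():
        print("   ", line)

cur.close()
conn.close()
```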
TL;DR
Know thy RDBMS.