Basic answers
Since you select a couple of big columns, an index-only scan is probably not a viable option.
This query works, but only reliably if there are no NULL values in the data. If the column is not defined NOT NULL, add NULLS LAST to the sort order to make it work in any case, even with NULL values. Ideally, use the same clause in the corresponding index as well:
SELECT <some big columns>
FROM my_table_
ORDER BY when_ DESC NULLS LAST
LIMIT 1;
Without any index on when_ column, does this statement require a full scan of all rows?
Yes. Without an index, there is no other option left. (Well, there is also table partitioning, where an index on the key column(s) is not strictly required and partition pruning can help. But you would typically have an index on the key columns there, too.)
With an index on when_ column, should I change this SQL to use some other approach/strategy of query?
Basically, this is already the perfect query. There are further options in combination with advanced indexing:
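For the plain case, a B-tree index matching the sort order is enough: Postgres can then fetch the top row with a single descent into the index instead of scanning the table. A minimal sketch (the index name is my choice):

```sql
-- Plain B-tree index matching the ORDER BY of the query.
-- DESC NULLS LAST mirrors the query's sort order, so the planner
-- can serve ORDER BY when_ DESC NULLS LAST LIMIT 1 from the index.
-- (Index name is illustrative.)
CREATE INDEX my_table_when_plain_idx ON my_table_ (when_ DESC NULLS LAST);
```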
Advanced technique
Assuming a NOT NULL column. Else, add NULLS LAST to index and queries as suggested above.
You have a constant influx of rows with later when_. Assuming the latest when_ constantly increases and never (or rarely) decreases (i.e., the latest rows are rarely deleted or updated), you can use a very small partial index.
Basic implementation:
Run your query once to retrieve the latest when_, subtract a safe margin (so you cannot lose the latest rows) and create an IMMUTABLE function based on the result. Basically a "fake global constant":
CREATE OR REPLACE FUNCTION f_when_cutoff()
RETURNS timestamptz LANGUAGE sql COST 1 IMMUTABLE PARALLEL SAFE AS
$$SELECT timestamptz '2015-07-25 01:00+02'$$;
PARALLEL SAFE is only valid in Postgres 9.6 or later; drop it for older versions.
Create a partial index excluding older rows:
CREATE INDEX my_table_when_idx ON my_table_ (when_ DESC)
WHERE when_ > f_when_cutoff();
With millions of rows, the difference in size can be dramatic. And this only makes sense with a much smaller index. Just half the size or something would not cut it. Index access itself is not slowed much by a bigger index. It's mostly the sheer size of the index, which needs to be read and cached. (And possibly avoiding additional index writes, but hardly in your case.)
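To verify that the size difference is actually dramatic, you can inspect the on-disk size of the partial index (using the index name from above) with Postgres' standard admin functions:

```sql
-- pg_relation_size() reports the on-disk size of the index;
-- pg_size_pretty() formats it for humans.
SELECT pg_size_pretty(pg_relation_size('my_table_when_idx')) AS partial_idx_size;
```

Compare this to the size of a full index on when_ to judge whether the partial index is "much smaller".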
Use the function in all related queries. Include the same WHERE condition (even if logically redundant) to convince the query planner the index is applicable. For the simple query:
SELECT <some big columns>
FROM my_table_
WHERE when_ > f_when_cutoff()
ORDER BY when_ DESC
LIMIT 1;
The size of the index grows with new (later) entries. Recreate the function with a later timestamp and REINDEX from time to time, with little or no concurrent access. Only reindex after a relevant number of rows has been added; a couple of thousand entries won't matter much. We are doing this to cut off millions.
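The maintenance step could look like this sketch, reusing the names from above (the new cutoff timestamp is illustrative; note that plain REINDEX takes a lock that blocks writes on the table, so run it during a quiet period):

```sql
-- 1. Move the "fake global constant" forward to a later, still-safe cutoff.
CREATE OR REPLACE FUNCTION f_when_cutoff()
  RETURNS timestamptz LANGUAGE sql COST 1 IMMUTABLE PARALLEL SAFE AS
$$SELECT timestamptz '2016-01-25 01:00+02'$$;

-- 2. Rebuild the partial index so it excludes rows below the new cutoff.
REINDEX INDEX my_table_when_idx;
```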
The beauty of it: queries don't change.
Implementation with function to update the partial index automatically:
More general advice: