I'm new to the world of DB admin and have a large table (2 billion rows) of cryptocurrency tick data for multiple currencies in a Postgres database. Most of my queries select a moving time window for each currency individually; for this, I currently have a composite index on my "timestamp" and symbol columns.
Querying is pretty slow: around 100 seconds to fetch the last 2 days of data for 1 symbol (~130,000 rows). I checked with EXPLAIN ANALYZE and the composite index is being hit.
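For reference, this is roughly the shape of the query (the WHERE values are taken from the plan further down; the actual SELECT list is wider):

    -- Moving-window query for one symbol; the cutoff is "now minus 2 days"
    -- expressed as a microsecond Unix timestamp
    SELECT *
    FROM huobi
    WHERE "timestamp" > 1611751286078083
      AND symbol = 'YFII';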
Here's the kicker: I have the timestamps stored as Unix timestamps in microseconds, as int8, e.g. 1611751286078083.
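The conversion itself is simple enough, assuming the values really are microseconds:

    -- to_timestamp() takes seconds (as double precision) and returns timestamptz,
    -- so divide the microsecond value by 1e6
    SELECT to_timestamp(1611751286078083 / 1e6);
    -- => 2021-01-27 12:41:26.078083+00   (displayed in UTC)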
Would converting these to timestamptz, storing them in a new column, and then indexing that column be worthwhile, i.e. speed up the query? I've read that it's bad practice to create a new column holding data that's already available in another column.
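To make that concrete, here is the sort of thing I have in mind (a sketch only: the column and index names are made up, and it assumes microsecond precision). On Postgres 12 a stored generated column would keep the two columns in sync automatically, at the cost of a full table rewrite when it's added:

    -- New derived column, computed by Postgres from the existing int8 column
    ALTER TABLE huobi
        ADD COLUMN ts timestamptz
        GENERATED ALWAYS AS (to_timestamp("timestamp" / 1e6)) STORED;

    -- Replacement composite index on the new column
    CREATE INDEX CONCURRENTLY huobi_ts_symbol_idx
        ON huobi (ts DESC, symbol);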
If so, is it likely that adding this new column will break my existing (constantly running) INSERT script, which only inserts into the columns currently defined?
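My understanding (please correct me if I'm wrong) is that an INSERT with an explicit column list would keep working after the ADD COLUMN, since a generated column is filled in by Postgres itself, e.g. (column names beyond "timestamp" and symbol are placeholders):

    -- Unchanged script-style insert; ts is computed automatically
    INSERT INTO huobi ("timestamp", symbol, price, amount)
    VALUES (1611751286078083, 'YFII', 0.0, 0.0);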
tl;dr:
Is an index on timestamptz noticeably more performant than one on Unix timestamps stored as int8?
Here's the EXPLAIN (ANALYZE, BUFFERS) output:

    Index Scan using huobi_timestamp_symbol_idx on huobi (cost=0.58..438627.67 rows=177735 width=48) (actual time=0.041..101131.528 rows=488785 loops=1)
      Index Cond: (("timestamp" > '1611751286078083'::bigint) AND (symbol = 'YFII'::text))
      Buffers: shared hit=68973 read=219478
      I/O Timings: read=99426.796
    Planning Time: 13.925 ms
    Execution Time: 101179.934 ms
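If I'm reading this right, nearly all of the time is disk I/O rather than index traversal: read=99426.796 ms out of the 101,179 ms total, and shared read=219478 buffers at 8 kB each works out to roughly 1.7 GiB fetched from disk for a single query.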
I have one index on (timestamp DESC, symbol), the primary key is on (timestamp, market), and I'm running Postgres 12.3 hosted on AWS.
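In DDL terms that's roughly the following (the index name comes from the plan above; the constraint name is a guess):

    CREATE INDEX huobi_timestamp_symbol_idx
        ON huobi ("timestamp" DESC, symbol);

    ALTER TABLE huobi
        ADD CONSTRAINT huobi_pkey PRIMARY KEY ("timestamp", market);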