
I have a table with more than 60 million records, and every month I add about 2-4 million more. It is not an insert-only table: I need to read from it and update recently added rows (status changes, etc.). I have noticed that some of my SELECT queries have stopped using indexes because of the number of rows that match the criteria. Therefore I want to archive records older than 6 months. Is there any good way to do this? The only thing I have in mind is manually copying the data to an archive database and removing it from the master, as sketched below. Perhaps there are some good practices or mechanisms for this?
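Roughly what I have in mind, with placeholder names (a table `events` with a `created_at` timestamp column and an identically structured `events_archive` table in the same database; a separate archive database would instead need dblink or a foreign data wrapper):

```sql
-- Move rows older than 6 months into the archive table in one atomic
-- statement: the writable CTE deletes from the live table and hands the
-- deleted rows to the INSERT (works on PostgreSQL 9.1 and later).
WITH moved AS (
    DELETE FROM events
    WHERE created_at < now() - interval '6 months'
    RETURNING *
)
INSERT INTO events_archive
SELECT * FROM moved;
```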

Michal


1 Answer


I recently answered a similar question here: database-archive-solutions

You should read about PostgreSQL table partitioning, if you have a good way to split the data into parts; a sketch follows.
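A minimal sketch of the idea, assuming a `created_at` timestamp column and the declarative syntax available in PostgreSQL 10+ (on older versions the same scheme is built with table inheritance and triggers); all names are placeholders:

```sql
-- Parent table partitioned by month on the timestamp column.
CREATE TABLE events (
    id         bigint      NOT NULL,
    status     text,
    created_at timestamptz NOT NULL
) PARTITION BY RANGE (created_at);

-- One partition per month; queries that filter on created_at only
-- scan the relevant partitions (partition pruning).
CREATE TABLE events_2015_01 PARTITION OF events
    FOR VALUES FROM ('2015-01-01') TO ('2015-02-01');
CREATE TABLE events_2015_02 PARTITION OF events
    FOR VALUES FROM ('2015-02-01') TO ('2015-03-01');

-- Archiving a month is then a metadata-only operation instead of a
-- bulk DELETE over millions of rows.
ALTER TABLE events DETACH PARTITION events_2015_01;
```

After detaching, the old partition is an ordinary standalone table that you can dump, move to an archive database, or drop.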

sufleR