-1

I am using a PostgreSQL 9.4 database. I have a table snapshots with 700+ million rows, and I want to delete about 9 million of them. I have tried different queries to delete the records, but with no success.

Query to delete all records for a specific camera:

DELETE FROM snapshots WHERE camera_id = 1248;

Another query to delete records between two snapshot_id values:

DELETE FROM snapshots WHERE snapshot_id >= '1248_201511010000000' AND snapshot_id <= '1248_20151130235959000';

What would be the fastest way to delete these rows from a table with 700+ million records?

azharmalik3

3 Answers

0

Other than breaking the operation into chunks, I don't think there is a silver-bullet method. This answer may come in handy.
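
For example, a chunked delete could look roughly like this (a sketch only; the batch size of 50000 and the camera_id filter are arbitrary choices, and you would rerun the statement until it reports 0 rows deleted):

-- Delete in small batches so each transaction stays short and VACUUM can
-- keep up; rerun the statement until it affects 0 rows.
DELETE FROM snapshots
WHERE ctid IN (
    SELECT ctid
    FROM snapshots
    WHERE camera_id = 1248
    LIMIT 50000
);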

0

You can use something like pg_repack to keep the table clustered by camera_id or snapshot_id, whichever one is more problematic. This will make marking the rows as dead a much faster operation. It's probably better to use snapshot_id, especially if it is chronological and you're not inserting old snapshots.
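
A rough sketch of the invocation, assuming the database is called mydb (check the options of your pg_repack version):

pg_repack --dbname=mydb --table=snapshots --order-by=snapshot_id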

For the other column, you can use an index.
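
For example, assuming the table is kept clustered on snapshot_id, a plain index on camera_id (the index name here is arbitrary, and CONCURRENTLY keeps the table writable while it builds) might look like this:

CREATE INDEX CONCURRENTLY snapshots_camera_id_idx ON snapshots (camera_id);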

Evan Carroll
-1

Sometimes it is faster to create a new table and insert only the records you would like to keep.
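A minimal sketch of that approach, assuming you can take a maintenance window and that camera_id = 1248 is what you want to drop (indexes, constraints, and permissions have to be recreated on the new table):

-- Copy only the rows you want to keep.
CREATE TABLE snapshots_new AS
SELECT * FROM snapshots WHERE camera_id <> 1248;

-- Swap the tables; do this while nothing else is writing to snapshots.
DROP TABLE snapshots;
ALTER TABLE snapshots_new RENAME TO snapshots;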

alonk