
I have a database table with billions of rows, and I need to delete data based on a condition.

Initially I tried to delete the rows in one transaction, but the log grew until it filled the disk and the delete failed, so then I tried to do it in chunks.

SET NOCOUNT ON;

DECLARE @r INT;

SET @r = 1;

WHILE @r > 0
BEGIN
  BEGIN TRANSACTION;

  DELETE TOP (2000000)            -- batch size; this will change
  FROM Databasename.dbo.Tablename
  WHERE loading_time < '08/27/2014 4:00:00';

  SET @r = @@ROWCOUNT;

  COMMIT TRANSACTION;

  -- print when the batch is committed
  PRINT 'done';

  CHECKPOINT;
  -- shrinking the file after the transaction is committed
  --DBCC SHRINKFILE (2, TRUNCATEONLY);  -- CHECKPOINT;  -- if simple
  -- BACKUP LOG ...                     -- if full
END

Is this the right way to delete the data and manage the log between transactions? When I ran it I got this error:

Cannot perform a shrinkfile operation inside a user transaction. Terminate the transaction and reissue the statement.

Update: I have changed the code to include CHECKPOINT, but even after adding it, the log fills up after 3 hours and a full rollback takes place. What should I do to delete the data?

Ke7in

2 Answers


Don't shrink the log file. Just ask yourself whether you'll need to recover to a point partway through the process, and use that to decide whether to use the Full or Simple recovery model.
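
If you do decide that Simple is acceptable for the duration of the purge, checking and switching the recovery model is a one-line change each way. This sketch assumes the database name from the question; remember that switching back to Full needs a full (or differential) backup afterwards to restart the log backup chain:

    -- check the current recovery model
    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE name = N'Databasename';

    -- switch for the duration of the purge (switch back and take a full backup afterwards)
    ALTER DATABASE Databasename SET RECOVERY SIMPLE;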

If you're going with Simple, then checkpointing regularly will be enough. Leave the log file large and then consider what to do with it after the whole process. If you shrink it each time it'll only grow for the next iteration.
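
As a minimal sketch of that approach, assuming the table and filter from the question and a smaller, purely illustrative batch size: the loop only needs a CHECKPOINT after each batch, and each DELETE runs as its own implicit transaction, so there is no explicit BEGIN/COMMIT holding the log open:

    SET NOCOUNT ON;
    DECLARE @r INT = 1;

    WHILE @r > 0
    BEGIN
        -- delete one batch; the batch size here is only an example
        DELETE TOP (500000)
        FROM Databasename.dbo.Tablename
        WHERE loading_time < '08/27/2014 4:00:00';

        SET @r = @@ROWCOUNT;

        -- under Simple recovery this lets the log space be reused; no shrink needed
        CHECKPOINT;
    END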

If you're going with Full, then take frequent log backups and figure out where to move them. Again, leave the log file big and decide whether you need to shrink it at the very end.
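
Under Full recovery the same loop shape works, with a log backup after each batch (or every few batches) instead of relying on the checkpoint. A sketch, where the backup path is only a placeholder and in practice you'd generate a new file name per backup:

    -- back up the log so the space inside it can be reused (placeholder path)
    BACKUP LOG Databasename
    TO DISK = N'X:\SQLBackups\Databasename_log.trn';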

Edit based on comments below: making sure the database being checkpointed is the same one that has the growing log file is important too. :)

Rob Farley

I will focus on the error message:

Cannot perform a shrinkfile operation inside a user transaction. Terminate the transaction and reissue the statement.

The message is pretty much self-explanatory: you cannot perform a shrink operation inside a transaction. As already pointed out, is there really a need to shrink the log file? What do you want to achieve by shrinking it; are you worried about the space the transaction log is using? A delete is fully logged, meaning every row that is deleted is written to the transaction log, so the log file will grow. Shrinking is not the solution here. Shrinking the log also has performance implications: there is no instant file initialization for log files, so when the log has to grow again, other transactions wait for that growth to complete. You have to find a better way to manage transaction log space. There are two things you can do:

1. Decrease the number of rows deleted per batch in the TOP clause. You can also use simple loop code like the one below:

WHILE 1 = 1
BEGIN
    DELETE TOP (500000)
    FROM Table_name;          -- add your WHERE filter here

    IF @@ROWCOUNT < 500000 BREAK;
END

2. Take frequent transaction log backups, especially after each of the delete batches above completes.

Ultimately you have to balance the number of records deleted per batch against how much the transaction log grows.
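
One way to watch that balance while the loop runs is to check log usage between batches; DBCC SQLPERF(LOGSPACE) reports the log size and the percentage in use for every database:

    -- log size (MB) and percent used, per database
    DBCC SQLPERF(LOGSPACE);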

Even if the recovery model is Simple, the logging behaviour does not change: the delete is still fully logged. A CHECKPOINT statement helps make sure the log is truncated automatically after each transaction completes.
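
Note that CHECKPOINT acts only on the current database, so run it in the context of the database you are deleting from (using the database name from the question):

    USE Databasename;
    CHECKPOINT;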

Shanky