
At our organisation we have several non-Production environments where developers run free and wild with T-SQL code and databases. The DBA team does not normally monitor or maintain them. Recently, several developers using one server have had to write procedures that churn through mountains of data, such that the log files grow to 10-20 GB (on databases of roughly 15-40 GB) until they run out of space on the drive volume we have provided for the log files. The databases are all in SIMPLE recovery mode and backups are (almost) never taken.

As a band-aid I've created a SQL Agent job that anyone can run to shrink all user log files on the server.
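A simplified sketch of what such a job might run (the 1 GB target and the approach of iterating sys.master_files are illustrative, not the exact job):

    -- Shrink every user database's log file back to ~1 GB.
    DECLARE @sql nvarchar(max) = N'';

    SELECT @sql += N'USE ' + QUOTENAME(DB_NAME(database_id)) + N'; '
                 + N'DBCC SHRINKFILE (' + QUOTENAME(name) + N', 1024); '
    FROM sys.master_files
    WHERE type_desc = N'LOG'
      AND database_id > 4;  -- skip the system databases

    EXEC sys.sp_executesql @sql;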

What are some valid Log File Management strategies that might be used?

For example, it is my understanding that because the users generally do a blitz of intensive work, automatic checkpoints are probably being throttled back, and that issuing manual checkpoints would therefore offer no advantage. Is that the case, or should we in fact investigate adding manual checkpoints to their code?

And just to be clear: it is log file space on disk we are interested in, not free space within the log file.

Paul

3 Answers


Since your databases are in the SIMPLE recovery model, checkpoints will be taken care of by the database engine, either automatically or when the log grows to 70% of its size; you can also issue a manual CHECKPOINT. But a long-running transaction can still hold the log hostage and prevent it from truncating, even if you issue a manual checkpoint.
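A quick way to see what, if anything, is holding each log up (a minimal sketch; the database_id > 4 filter just skips the system databases):

    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases
    WHERE database_id > 4;
    -- log_reuse_wait_desc = ACTIVE_TRANSACTION means a long-running
    -- transaction is blocking truncation; NOTHING means the log can be reused.

    -- A manual checkpoint in the current database, if you want to try one:
    CHECKPOINT;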

Log backups are not possible in your case since the recovery model is SIMPLE.

You can use the BULK_LOGGED recovery model to take advantage of the minimally logged operations it supports. But operations that cannot be minimally logged will still be fully logged under BULK_LOGGED, so be careful. Since you don't care about point-in-time recovery, you can switch the recovery model between FULL and BULK_LOGGED as required.

http://technet.microsoft.com/en-us/library/ms191244(v=sql.105).aspx
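Switching is a one-line ALTER DATABASE each way (a sketch; the database name is illustrative):

    ALTER DATABASE DevSandbox SET RECOVERY BULK_LOGGED;
    -- ... run the bulk operation (SELECT INTO, BULK INSERT, index rebuild) ...
    ALTER DATABASE DevSandbox SET RECOVERY SIMPLE;  -- or FULL, as appropriate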

Lastly, if your team is doing huge DML operations that generate a lot of log, ask them to break the DML into chunks to avoid filling the log (see the sketch below). Rebuilding indexes on huge tables also generates massive logs.
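A minimal sketch of chunked DML, assuming a hypothetical table dbo.BigTable; each batch commits separately, so the log can be reused as you go instead of one giant transaction pinning it:

    WHILE 1 = 1
    BEGIN
        DELETE TOP (10000) FROM dbo.BigTable
        WHERE CreatedDate < '20140101';

        IF @@ROWCOUNT = 0 BREAK;

        CHECKPOINT;  -- under SIMPLE recovery, lets the log space be reused
    END;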

Use TRUNCATE TABLE to remove records from a table (if you want to remove all of them), as it is minimally logged compared to DELETE. But don't use it if the table is referenced by a foreign key constraint, and be aware that it resets any IDENTITY seed. http://msdn.microsoft.com/en-us/library/ms177570.aspx
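For example (dbo.StagingLoad is hypothetical):

    TRUNCATE TABLE dbo.StagingLoad;  -- deallocates pages, minimal logging
    -- versus
    DELETE FROM dbo.StagingLoad;     -- logs every row; can bloat the log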

The points above should give you some relief from the heavy logging.

Hope this helps

Shanky

Your users have long-running transactions. They need more drive space for the transaction logs. End of story.

mrdenny

"Throw more hardware at the problem" will not always solve theses kind of issues. What if Paul adds 100Go to his log space, and it still gets filled because of poor programming techniques? Also, adding more space may not be feasible, for example if his Servers are virtualized and the hardware running it is at capacity.

Given that it is a developers' environment that isn't tuned/monitored, @Paul your solution of having alerts/triggers fire automatic maintenance seems a good and logical choice (a sketch of such an alert follows). But it is proof that you need to make sure this kind of coding never makes it to your Production servers, and thus either give/pay for a "SQL optimisation techniques" class for your developers, and/or put a very strong code-review process in place. I would personally go for both options.
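A minimal sketch of such an alert, assuming a default instance (on a named instance the counter object is MSSQL$<instance>:Databases, and the 90% threshold is illustrative):

    EXEC msdb.dbo.sp_add_alert
        @name = N'Log nearly full',
        @performance_condition = N'SQLServer:Databases|Percent Log Used|_Total|>|90';
    -- Pair this with sp_add_notification, or attach a response job via
    -- sp_update_alert (@job_name), to fire the maintenance automatically.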

Philippe