
The TLOG in my SQL Server 2008 database is growing faster than usual. The DB uses the Full recovery model and we take a TLOG backup every hour.

Can you please explain, or refer me to a link that describes, a way to identify which session/process/query is causing the TLOG growth?

I thought that if I knew WHEN the growth was occurring, I'd be able to detect the job/process causing it. I've created a job that saves the output of sp_helpdb 'dbname' to a table, but even after knowing the right time, I wasn't able to find the offending process.

I would appreciate your help in this matter.

Thanks in advance,

Roni.

Roni Vered

2 Answers


You can review events from the default trace to identify the growth events you care about:

DECLARE @path NVARCHAR(260);

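-- Build the path to the default trace: take the directory of the current trace
-- file and append log.trc so the rollover files get read as well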
SELECT 
   @path = REVERSE(SUBSTRING(REVERSE([path]), 
   CHARINDEX(CHAR(92), REVERSE([path])), 260)) + N'log.trc'
FROM    sys.traces
WHERE   is_default = 1;

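-- List the log auto-grow events recorded by the default trace, most recent first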
SELECT 
   DatabaseName,
   [FileName],
   SPID,
   Duration,
   StartTime,
   EndTime
FROM sys.fn_trace_gettable(@path, DEFAULT)
WHERE EventClass = 93 -- log autogrow event
ORDER BY StartTime DESC;

Combined with a separate trace or extended event session you set up yourself (e.g. to capture all updates with > x duration or > y reads or writes) you can hopefully find some likely culprits around the same time from the same SPID. You won't be able to do that going back in time, of course, but you could set something up to help identify them in the future.
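For example, here is a minimal Extended Events sketch along those lines (the session name and the writes threshold are made up, and I'm using the ring_buffer target just to keep it self-contained - use a file target instead if you want the captured events to survive a restart):

CREATE EVENT SESSION [heavy_writers] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION (sqlserver.session_id, sqlserver.database_id, sqlserver.sql_text)
    WHERE writes > 1000 -- arbitrary threshold; tune for your workload
)
ADD TARGET package0.ring_buffer
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);

ALTER EVENT SESSION [heavy_writers] ON SERVER STATE = START;

-- the captured events can then be read back from sys.dm_xe_session_targets (target_data is XML)

Match the session_id and time stamps from that output against the SPID and StartTime of the autogrow events above.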

With that all said, your next instinct might be to fight with this process and constantly run up behind it and shrink the log file. This is not a good idea because, really, what's the point? You're going to shrink it to free up space temporarily, only so the same file can grow and fill up the space again? Please read this Q & A in full:

Aaron Bertrand

The answer will require more investigative work on your part.

The transaction log is exactly what it sounds like - a record of all transactions and modifications made to the database. Unless your database is strictly being read and never written to, your log will be continually growing. How fast it grows depends on what is being done, and more specifically how often.

It will grow faster during high-volume periods, and more slowly during off-peak hours. Unless the transaction log has really ballooned, its growth may not be an issue, and the growth will probably not always be uniform.

However, if there are known busy periods or jobs being run, you can get a feel for how large the log should be at its peak under typical usage. Using a server-side trace can show you what is going on and how frequently transactions execute. There may be further questions to ask from there - e.g. are big changes being done in batched transactions - but that depends on what you find out.
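If you go the server-side trace route, a bare-bones version looks something like this (the output path and the Writes threshold are just placeholders to adjust; the event and column IDs can be checked against sys.trace_events and sys.trace_columns):

DECLARE @TraceID INT, @rc INT;
DECLARE @maxsize BIGINT = 256;      -- max trace file size in MB
DECLARE @on BIT = 1;
DECLARE @minwrites BIGINT = 500;    -- arbitrary Writes threshold

-- the folder must exist and be writable by the SQL Server service account
EXEC @rc = sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\heavy_writes', @maxsize, NULL;

-- SQL:BatchCompleted (event 12) with the columns we care about
EXEC sp_trace_setevent @TraceID, 12, 1,  @on;  -- TextData
EXEC sp_trace_setevent @TraceID, 12, 12, @on;  -- SPID
EXEC sp_trace_setevent @TraceID, 12, 13, @on;  -- Duration
EXEC sp_trace_setevent @TraceID, 12, 14, @on;  -- StartTime
EXEC sp_trace_setevent @TraceID, 12, 17, @on;  -- Writes
EXEC sp_trace_setevent @TraceID, 12, 35, @on;  -- DatabaseName

-- only keep batches with at least @minwrites writes
EXEC sp_trace_setfilter @TraceID, 17, 0, 4, @minwrites;

EXEC sp_trace_setstatus @TraceID, 1;  -- start the trace
SELECT @TraceID AS TraceID;           -- keep this so you can stop/delete the trace later

Read the results back with sys.fn_trace_gettable, the same way as the default trace query above.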

I generally set my t-log backups to run anywhere between every 15 and 30 minutes, depending on how much volume the database sees. An hour is a long time and could be letting your log grow unnecessarily large - try setting it to every half hour and see if that alleviates the problem.
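For reference, the backup job's step only needs to run something like this each time it fires (the path and file-naming convention are assumptions - use whatever your environment dictates):

-- hypothetical backup folder and naming convention
DECLARE @file NVARCHAR(260) =
    N'X:\Backups\dbname_log_'
    + REPLACE(REPLACE(REPLACE(CONVERT(NVARCHAR(19), GETDATE(), 120), N'-', N''), N':', N''), N' ', N'_')
    + N'.trn';

BACKUP LOG [dbname] TO DISK = @file WITH INIT;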

After the backup, the log should be truncated, so the t-log's contents are emptied out even though the file stays the same size (that way the file isn't constantly shrinking and growing, which would cause excessive I/O). If the log is too big right now, you may need to shrink it once using DBCC SHRINKFILE to get it back to a reasonable size (though this should not be a regular activity). You'll have more backup files to contend with, so the total space used won't really change, but it will prevent your t-log from growing too huge at any one point in time.
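A couple of quick checks go along with that (the database and logical log file names below are placeholders - look up the real log file name in sys.database_files first):

-- how full is each database's log right now?
DBCC SQLPERF(LOGSPACE);

-- if the log isn't being reused after backups, this shows what it is waiting on
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'dbname';

-- one-off shrink after a log backup (not a routine task)
USE dbname;
DBCC SHRINKFILE (N'dbname_log', 2048);  -- target size in MB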

Extra reading: Managing the size of the transaction log (Should apply to 2008 all the same)

LowlyDBA - John M