
I have a closed-source application whose transaction log fills up to 60+ GB on a daily basis.

I have run a trace on the database and no inserts or updates are occurring. The following statement, though, is being executed THOUSANDS of times per minute, which doesn't seem right:

set transaction isolation level read uncommitted

If this is not the cause of the issue, is there anything else I can do to find the cause without turning off full backups, as they are required?

Thanks

2 Answers


You can try some third-party tools for transaction log reading, such as ApexSQL, Idera, or Red Gate. You can also try to determine the cause by using the fn_dblog function; with that many executions you should be able to easily spot the problematic statement:

SELECT [Current LSN], Operation, Context, [Transaction ID], [Begin Time]
FROM sys.fn_dblog(NULL, NULL);
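Since that query can return millions of rows on a 60 GB log, a hedged follow-up sketch (assuming you can run it in the affected database) is to aggregate the log records instead, so the dominant operation type surfaces immediately:

```sql
-- Summarize log records by operation and context to see which
-- activity is generating the most log volume. Run this in the
-- affected database; it reads only the active portion of the log.
SELECT Operation,
       Context,
       COUNT(*)                 AS LogRecordCount,
       SUM([Log Record Length]) AS TotalBytes
FROM sys.fn_dblog(NULL, NULL)
GROUP BY Operation, Context
ORDER BY TotalBytes DESC;
```

A large `TotalBytes` against an unexpected operation (for example, index maintenance or ghost cleanup) narrows down where the volume is coming from.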
Marko Krstic

First of all, you need to make sure your transaction log backups are running. If they are, you may want to look at increasing the frequency of backing them up.
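One way to confirm the log backups are actually running is to check the backup history in msdb. This is a sketch, and `YourDatabase` is a placeholder for your actual database name:

```sql
-- List the 20 most recent transaction log backups (type = 'L')
-- for the database, newest first, to confirm they are running
-- on schedule and to see how large each one is.
SELECT TOP (20)
       backup_start_date,
       backup_finish_date,
       backup_size / 1048576.0 AS backup_size_mb
FROM msdb.dbo.backupset
WHERE database_name = 'YourDatabase'  -- placeholder: your DB name
  AND type = 'L'
ORDER BY backup_start_date DESC;
```

Large gaps between `backup_start_date` values, or log backup sizes that track the daily 60 GB growth, tell you whether frequency is the problem.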

If everything looks good on the log backup front, then run DBCC LOGINFO. It's very possible that a high percentage of your VLF entries are still active. This might be due to problems in a database mirroring or replication setup. If you have either of these running, make sure the destination targets are online. If they are not, the transaction log will not let go of pending transactions that are marked for mirroring or transactional replication.
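You can also ask SQL Server directly why it cannot truncate the log. A minimal sketch, with `YourDatabase` again a placeholder:

```sql
-- log_reuse_wait_desc names the current blocker for log truncation:
-- LOG_BACKUP, REPLICATION, DATABASE_MIRRORING, ACTIVE_TRANSACTION,
-- and so on. NOTHING means the log is reusable.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'YourDatabase';  -- placeholder: your DB name
```

If this reports `REPLICATION` or `DATABASE_MIRRORING`, that confirms the stalled-destination scenario described above before you go digging further.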

Queue Mann