We use an ASPState database to persist .NET session state on a SQL Server 2005 cluster, and we are seeing some strange behavior during peak periods.

The DeleteExpiredSessions proc is run every minute via a SQL Server Agent job. During peaks this job sometimes takes many minutes to run and delete the expired sessions.

While it runs, requests from the application to the ASPState database become very slow. I believe this is because the DeleteExpiredSessions procedure is holding exclusive locks on the ASPStateTempSessions table.
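In case it helps, this is the sort of check we've been running during a slow spell to confirm the blocking (it uses the SQL 2005 DMVs and needs VIEW SERVER STATE):

    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.wait_time,
           t.text AS running_sql
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id <> 0;

When the agent job is running we'd expect to see application requests queued behind the spid executing DeleteExpiredSessions, but we'd welcome better ways to confirm this.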
Code:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[DeleteExpiredSessions]
AS
    SET NOCOUNT ON
    SET DEADLOCK_PRIORITY LOW

    DECLARE @now datetime
    SET @now = GETUTCDATE()

    -- Snapshot the expired session IDs with a dirty read so this
    -- scan does not block application traffic
    DECLARE @tblExpiredSessions TABLE
    (
        SessionID nvarchar(88) NOT NULL PRIMARY KEY
    )

    INSERT INTO @tblExpiredSessions (SessionID)
    SELECT SessionID
    FROM [ASPState].dbo.ASPStateTempSessions WITH (READUNCOMMITTED)
    WHERE Expires < @now

    --EXPIRED SESSION LOGGING
    /*
    DECLARE @ExpiredSessionCount int;
    SELECT @ExpiredSessionCount = COUNT(SessionID)
    FROM @tblExpiredSessions;
    */

    IF @@ROWCOUNT <> 0
    BEGIN
        -- Delete one session at a time, in random order, re-checking
        -- Expires so a session touched since the snapshot is kept
        DECLARE ExpiredSessionCursor CURSOR LOCAL FORWARD_ONLY READ_ONLY
        FOR SELECT SessionID FROM @tblExpiredSessions ORDER BY CHECKSUM(NEWID())

        DECLARE @SessionID nvarchar(88)

        OPEN ExpiredSessionCursor
        FETCH NEXT FROM ExpiredSessionCursor INTO @SessionID
        WHILE @@FETCH_STATUS = 0
        BEGIN
            DELETE FROM [ASPState].dbo.ASPStateTempSessions
            WHERE SessionID = @SessionID AND Expires < @now

            FETCH NEXT FROM ExpiredSessionCursor INTO @SessionID
        END
        CLOSE ExpiredSessionCursor
        DEALLOCATE ExpiredSessionCursor
    END

    --EXPIRED SESSION LOGGING
    /*
    BEGIN TRY
        INSERT INTO DeleteExpiredSessionLog (RunStart, RunEnd, ExpiredSessionsDeleted)
        VALUES (@now, GETUTCDATE(), @ExpiredSessionCount);
    END TRY
    BEGIN CATCH
        --SWALLOW ANY EXCEPTION
    END CATCH;
    */

    RETURN 0
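One alternative we have been considering (an untested sketch, and the batch size of 500 is just a guess) is to drop the cursor and delete in small set-based batches, so no single statement holds locks for long:

    DECLARE @now datetime
    SET @now = GETUTCDATE()

    WHILE 1 = 1
    BEGIN
        -- TOP with DML is available from SQL Server 2005 onward
        DELETE TOP (500)
        FROM [ASPState].dbo.ASPStateTempSessions
        WHERE Expires < @now

        IF @@ROWCOUNT = 0
            BREAK
    END

Each batch commits separately, which should also keep individual transactions (and the log records they generate) small, but we don't know whether this is the right direction.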
- The transaction log file is filling up, in some cases forcing autogrowth, even though the database is in SIMPLE recovery.
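A quick check we've used to see what is preventing log truncation (in SIMPLE recovery the log should truncate at each checkpoint, so any wait description other than NOTHING would be telling):

    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'ASPState';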
In addition to all this, a trace reveals multiple requests for the same session arriving in quick succession. For example, exec dbo.TempResetTimeout @id=N'32gghltsuoesnvlzbehchp2m2014c0f1' was executed 14 times in under a second, so we're wondering whether this is a cause of the congestion, but we're not sure what's behind the behavior. (The query we used to spot the repeats is below.)
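For context, this is roughly how we aggregated the saved trace; dbo.SessionStateTrace is just our name for the work table, and the file path is illustrative:

    -- Load the trace file into a table for querying
    SELECT * INTO dbo.SessionStateTrace
    FROM sys.fn_trace_gettable(N'D:\traces\aspstate.trc', DEFAULT);

    -- Count how often each statement repeats in the captured window
    SELECT CONVERT(nvarchar(4000), TextData) AS StatementText,
           COUNT(*) AS Calls,
           MIN(StartTime) AS FirstSeen,
           MAX(StartTime) AS LastSeen
    FROM dbo.SessionStateTrace
    WHERE TextData LIKE N'%TempResetTimeout%'
    GROUP BY CONVERT(nvarchar(4000), TextData)
    ORDER BY Calls DESC;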
Any suggestions or explanation of this behavior would be appreciated.