
Optimized Locking (2025-05-19) makes it sound like batching would be unnecessary if I were to, for example, try to delete 20 thousand rows from a million-row table in one statement (DELETE FROM foo WHERE Status = 1). Is that true?


4 Answers


The point of batching as described in my article is to give up atomicity, i.e. to break the work into smaller chunks so that each chunk is atomic but the whole operation is not. The goal is better concurrency: updates and other changes can happen to affected rows in between each chunk.
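For illustration, a minimal version of that pattern might look something like this (the table and column names are taken from the question; the chunk size is arbitrary):

    -- Each DELETE runs as its own autocommit transaction, so each chunk is
    -- atomic on its own while the overall cleanup is not.
    DECLARE @rowsDeleted int = 1;

    WHILE @rowsDeleted > 0
    BEGIN
        DELETE TOP (1000)
        FROM dbo.foo
        WHERE Status = 1;

        SET @rowsDeleted = @@ROWCOUNT;
    END;

Between iterations, the locks taken by the previous chunk have already been released, which is exactly the concurrency benefit batching is after.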

Based on the docs, SQL Server isn't giving up atomicity with Optimized Locking. Exclusive locks are still held on the transaction ID (TID), and other transactions that want to modify the same rows will wait on that TID lock.

If that kind of waiting is awkward, then Optimized Locking does not solve the problem the way batching would.

Michael J Swart

Does the new locking optimization make batching unnecessary?

Batching isn't only useful for relieving lock contention.

In some circumstances, a resource-intensive operation can be broken up into batches to give the server breathing room in a controlled way. This helps reduce otherwise sustained resource contention (e.g. CPU, memory, or disk pressure) and allows other operations to process more efficiently on the server.
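As a rough sketch of what that controlled breathing room can look like, one option is a short pause between chunks (the chunk size and delay below are placeholders, not recommendations):

    DECLARE @rowsDeleted int = 1;

    WHILE @rowsDeleted > 0
    BEGIN
        DELETE TOP (5000)
        FROM dbo.foo
        WHERE Status = 1;

        SET @rowsDeleted = @@ROWCOUNT;

        -- Pause between chunks so CPU, memory, and I/O aren't saturated
        -- continuously and other workloads get a chance to run.
        IF @rowsDeleted > 0
            WAITFOR DELAY '00:00:02';
    END;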

J.D.

For me personally, by far the most common case I have for needing to bulk delete rows is when they have become obsolete and are no longer in use by the application (either by the passage of time or by some other criterion).

In that eventuality, by definition, I would not expect other concurrent attempts to access the rows that I am currently deleting, and historically my major concern would have been lock escalation blocking access to other rows, with transaction log growth being another large concern.

Optimized Locking should entirely remove the lock escalation concern.

Regarding the transaction log angle, a prerequisite for using the optimized locking feature is that Accelerated Database Recovery (ADR) is enabled, which can also mitigate that issue.
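If it's useful, something along these lines should show whether both features are on and enable them (this is a sketch based on the current docs; in Azure SQL Database optimized locking is managed by the service, so the ALTER statements apply to SQL Server):

    -- Check whether ADR (the prerequisite) and optimized locking are enabled.
    SELECT name, is_accelerated_database_recovery_on
    FROM sys.databases
    WHERE name = DB_NAME();

    SELECT DATABASEPROPERTYEX(DB_NAME(), 'IsOptimizedLockingOn') AS IsOptimizedLockingOn;

    -- Enable ADR first, then optimized locking.
    ALTER DATABASE CURRENT SET ACCELERATED_DATABASE_RECOVERY = ON;
    ALTER DATABASE CURRENT SET OPTIMIZED_LOCKING = ON;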

That isn't to say that batching isn't still needed, but the batch sizes can likely be bigger than the typical numbers that were previously in use based on lock escalation concerns. See the best practices for ADR for some considerations here (e.g. the impact on PVS size and features that prevent aggressive log truncation).

20K rows isn't a huge leap up from the typical batch sizes that would be in use even without these features, so unless the row size is unusually large (e.g. lots of LOB data) I'd likely not bother batching this.

Martin Smith

In my experience, batching is very important for limiting transaction log growth. A large number of deleted rows usually causes the transaction log to grow because deletes are fully logged even under the simple recovery model. Batching is useful because the log space used by a batch can be freed up for reuse at the end of each batch.

The batch size is usually 4,000-4,500 rows per batch to stay below the lock escalation threshold (which kicks in at roughly 5,000 locks on a single table).
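A sketch of that approach under the simple recovery model (4,500 rows per batch, with the table and column names from the question as placeholders):

    DECLARE @rowsDeleted int = 1;

    WHILE @rowsDeleted > 0
    BEGIN
        -- Each batch commits on its own, so its log records become eligible
        -- for reuse instead of accumulating in one giant transaction.
        DELETE TOP (4500)
        FROM dbo.foo
        WHERE Status = 1;

        SET @rowsDeleted = @@ROWCOUNT;

        -- Under the simple recovery model, a checkpoint lets the committed
        -- log space be reused rather than growing the log file.
        CHECKPOINT;
    END;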

MBuschi