I am doing a batch update where I process records sequentially (User 1, User 2, etc.). Each record has multiple update queries associated with it. If an individual record (a user, in this case) has a data issue, I need to skip that record. Ideally, I would like to process the records in parallel, but I haven't reached that level yet (lots of challenges).
Is it possible to do it such that:
- There is a single transaction.
- If there is a failure, the statements associated with that record get rolled back without affecting the others.
- Everything that succeeded is committed at the end.
For example, I have 4 users in a CSV file. If 3 are good and 1 is bad, the 3 good ones should get committed (or rolled back) together atomically, and the 1 bad one should get skipped, with its errors reported.
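To make "multiple update queries" concrete, each user's unit of work looks roughly like this (the table and column names below are placeholders, not my real schema):

    -- user 1, for example
    UPDATE dbo.UserAccounts SET Status = 'Migrated'             WHERE UserId = 1;
    UPDATE dbo.UserProfiles SET LastBatchRun = SYSUTCDATETIME() WHERE UserId = 1;
    UPDATE dbo.UserBalances SET Balance = Balance + 100.00      WHERE UserId = 1;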
Observed:
do everything for user 1;
do everything for user 2;
--> if there is a failure, it rolls back the *entire* transaction
do everything for user 3;
In fact, any error with severity >= 16 rolls back the entire transaction.
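A simplified version of what I am running today (placeholder names again) is one transaction wrapped in a single TRY/CATCH, so the CATCH discards everything:

    BEGIN TRY
        BEGIN TRANSACTION;

        -- user 1
        UPDATE dbo.UserAccounts SET Status = 'Migrated'             WHERE UserId = 1;
        UPDATE dbo.UserProfiles SET LastBatchRun = SYSUTCDATETIME() WHERE UserId = 1;

        -- user 2: a bad row here (e.g. a constraint violation) lands in the CATCH
        UPDATE dbo.UserAccounts SET Status = 'Migrated'             WHERE UserId = 2;

        -- user 3, user 4 ...

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        -- this throws away the work for every user, not just the one that failed
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        THROW;
    END CATCH;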
Expected:
do everything for user 1;
do everything for user 2;
--> if there is a failure, roll back this *block* only
do everything for user 3;
do everything for user 4;
This is a normal try/catch requirement in any programming language; however, I couldn't find a SQL Server equivalent that involves transactions. I have read about savepoints, but I'm not sure whether they are an option to consider here.
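What I am imagining, but haven't been able to confirm, is something like the sketch below: one outer transaction, a savepoint (SAVE TRANSACTION) per user, and a CATCH that rolls back only to that savepoint. All object names are placeholders, and I assume this only works while XACT_ABORT is OFF and the error does not doom the whole transaction. Is this the right direction, or is there a better pattern?

    BEGIN TRANSACTION;

    DECLARE @UserId int;
    DECLARE user_cursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT UserId FROM #UsersFromCsv;   -- users staged from the CSV
    OPEN user_cursor;
    FETCH NEXT FROM user_cursor INTO @UserId;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        BEGIN TRY
            SAVE TRANSACTION PerUser;       -- savepoint covering only this user's statements

            UPDATE dbo.UserAccounts SET Status = 'Migrated'             WHERE UserId = @UserId;
            UPDATE dbo.UserProfiles SET LastBatchRun = SYSUTCDATETIME() WHERE UserId = @UserId;
        END TRY
        BEGIN CATCH
            -- undo only this user's statements and carry on, provided the transaction is still usable
            IF XACT_STATE() = 1
                ROLLBACK TRANSACTION PerUser;
            PRINT 'Skipped user ' + CAST(@UserId AS varchar(20)) + ': ' + ERROR_MESSAGE();
        END CATCH;

        FETCH NEXT FROM user_cursor INTO @UserId;
    END;

    CLOSE user_cursor;
    DEALLOCATE user_cursor;

    COMMIT TRANSACTION;                     -- the good users commit together; bad ones were rolled back to their savepoint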