
For the concurrency programs I have been writing in C#, my locks/synchronization tend to follow this pattern:

    try
    {
        Monitor.Enter(locker);
        // critical region
    }
    finally
    {
        Monitor.Exit(locker);
    }

This is a classic pattern I discovered in one of Paul Deitel's C# books years ago and have been following ever since. It has worked very well for the many applications I've written that required concurrency. However, I recently had a discussion with another developer about concurrency, and they asked me: why are you doing it that way when you could be using "scoped locks"?

To this point, I'll admit I had no idea what those were. I started digging to figure out whether this was something that might help in writing my applications. In my search I discovered the RAII (resource acquisition is initialization) pattern, which to my understanding allows for scoped locks. I gather this pattern is predominantly used in C++, but there are some pseudo-implementations out there in C#, such as the following (taken from Resource Acquisition is Initialization in C#):

    using System;

    namespace RAII
    {
        public class DisposableDelegate : IDisposable
        {
            private Action dispose;

            public DisposableDelegate(Action dispose)
            {
                if (dispose == null)
                {
                    throw new ArgumentNullException("dispose");
                }

                this.dispose = dispose;
            }

            public void Dispose()
            {
                if (this.dispose != null)
                {
                    Action d = this.dispose;
                    this.dispose = null;
                    d();
                }
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                Console.Out.WriteLine("Some resource allocated here.");

                using (new DisposableDelegate(() => Console.Out.WriteLine("Resource deallocated here.")))
                {
                    Console.Out.WriteLine("Resource used here.");

                    throw new InvalidOperationException("Test for resource leaks.");
                }
            }
        }
    }

I am just trying to understand how this pattern produces synchronization in C#... Is the synchronization inherent to something having to do with resource allocation?

Also, isn't it kind of abusing the language to force a user to use a constructor with using in order for this to work? What happens if something is cancelled and an exception is thrown in a constructor? I don't see where this is that much better than the good old Enter/Exit pattern.


2 Answers


Your question is a bit confusing, since the two code snippets are not equivalent, but I guess what you really meant is the following: why should someone use something like

    Monitor.Enter(locker);
    using (new DisposableDelegate(() => Monitor.Exit(locker)))
    {
        // critical region
    }

instead of the try/finally block from your question?

That is indeed a sensible question, because both forms are semantically equivalent, and IMHO there is no benefit in reinventing the wheel with such a DisposableDelegate class.

Of course, if you follow the link in the comments below the question you linked to, you find a better implementation of the "disposable delegate": the ResourceProtector class, which lets you write code like

    using (new ResourceProtector(() => Monitor.Enter(locker), () => Monitor.Exit(locker)))
    {
        // critical region
    }

This does have some advantage over try/finally, since it groups the corresponding allocation and deallocation statements together. The code becomes less error prone, especially when there are multiple resource allocations, when other statements come before or after the allocation, or when the code evolves later.
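
For illustration, here is a minimal sketch of what such a ResourceProtector might look like; this is my assumption of its shape, and the linked article's actual implementation may differ:

    using System;

    // Hypothetical sketch: runs the acquire action immediately and
    // guarantees the release action runs when the using block is left,
    // no matter how the block is exited.
    public sealed class ResourceProtector : IDisposable
    {
        private Action release;

        public ResourceProtector(Action acquire, Action release)
        {
            if (acquire == null) throw new ArgumentNullException("acquire");
            if (release == null) throw new ArgumentNullException("release");

            acquire();              // acquire the resource up front
            this.release = release; // remember how to give it back
        }

        public void Dispose()
        {
            if (this.release != null)
            {
                Action r = this.release;
                this.release = null; // guard against double-dispose
                r();
            }
        }
    }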

Doc Brown

I am just trying to understand how this pattern produces synchronization in C#... Is the synchronization inherent to something having to do with resource allocation?

The end brace (}) of the using block is an implicit call to Dispose.

The resource is acquired at the = (initialisation) and released at the end of that scope. The name RAII comes from C++, where scopes demarcate the lifetime of the objects declared within them, and code in destructors runs however the scope is left (early return, exception thrown, or control flowing off the bottom).
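
Concretely, the compiler expands a using statement into the same try/finally shape you were writing by hand. For the DisposableDelegate example, the expansion is roughly:

    IDisposable d = new DisposableDelegate(() => Monitor.Exit(locker));
    try
    {
        // critical region
    }
    finally
    {
        if (d != null)
        {
            d.Dispose(); // invokes the stored delegate, i.e. Monitor.Exit(locker)
        }
    }

So the synchronization is not inherent to resource allocation; Monitor.Exit simply runs inside Dispose, which the using statement guarantees to call.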

Also, isn't it kind of abusing the language to force a user to use a constructor w/using in order for this to work?

It depends on what you mean by "abusing". This is C#'s idiomatic way of deterministically cleaning up.

What happens if something is cancelled and an exception is thrown in a constructor?

The same thing that happens if an exception is thrown in the code before a try: the cleanup never runs, because the expression inside the using runs before the compiler-generated try begins, so Dispose is never called. Hopefully whatever state that leaves behind is also safe. I find (modern) C++ better in this regard, as it is easy to write a class that cleans up properly when construction throws partway through.

I don't see where this is that much better than good old Enter/Exit pattern.

It's rather easy to define a warning for

CA0000 You created an IDisposable (MyClass myObj) and there is a code path that fails to Dispose it

rather than (hundreds of variations of)

CA0000 You called myObj.SomeSetup() and there is a code path that doesn't reach myObj.SomeTeardown()

Especially as there isn't a mechanism for the author of MyClass to indicate to the compiler that MyClass.SomeSetup must be paired with MyClass.SomeTeardown.

In the specific case of synchronising on an object, the lock statement is more sensible, but you could write something like

    using System;
    using System.Threading;

    class ScopedLock : IDisposable
    {
        private object obj;

        public ScopedLock(object obj)
        {
            this.obj = obj;
            // Acquire the monitor on construction (tolerating a null object).
            if (this.obj != null) { Monitor.Enter(obj); }
        }

        public void Dispose()
        {
            // Release the monitor and null the field so that a second
            // Dispose call is a harmless no-op.
            if (this.obj != null) { Monitor.Exit(obj); }
            this.obj = null;
        }
    }
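
Usage then mirrors the built-in lock statement; a brief sketch (locker and DoWork are illustrative names, not from the original answer):

    private static readonly object locker = new object();

    static void DoWork()
    {
        using (new ScopedLock(locker))
        {
            // critical region: the monitor is held here and released when
            // the using block is left, even if an exception is thrown
        }

        // The built-in equivalent; the compiler turns this into
        // Monitor.Enter/Exit wrapped in try/finally as well.
        lock (locker)
        {
            // critical region
        }
    }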
Caleth