I'm new to threading and I came across a custom thread pool implementation example in a blog. I'm pasting just the relevant parts of the code:

Imports System.Threading

Public Class ThreadPool

    Private CountLock As New Object
    Private _Count As Integer

    Public ReadOnly Property ThreadCount() As Integer
        Get
            SyncLock CountLock
                Return _Count
            End SyncLock
        End Get
    End Property

    Public Sub Open()
        Interlocked.Increment(_Count)
    End Sub

    Public Sub Close()
        Interlocked.Decrement(_Count)
        ' ...
    End Sub

End Class

My question is, why do we need a lock to implement the readonly ThreadCount property?

+1  A: 

It makes no sense, as there is no lock where the value is modified. Maybe the code was not using Interlocked operations before, and used SyncLock in Open / Close as well? In that case the SyncLock really would be needed in the read accessor too.

Suma
Are you sure it makes no sense? Adding a lock adds a memory barrier, which prevents instruction reordering; that might make a difference.
Lasse V. Karlsen
Prevent instruction reordering on a property like ThreadCount? I don't think so; it would be purely an informational/debugging property. Any real logic would be done with _Count.
wj32
@Lasse a lone read of an `int` is atomic in .NET. Re-ordering would only be a problem if the caller is also calling other members and there is enough inlining going on: it could be but would imply tight coupling that probably means the locking should be in the caller.
Richard
Could be, all I wanted to point out that the issue might not be as clear-cut as indicated. I sure don't know the answer.
Lasse V. Karlsen
@Richard, re-ordering wouldn't be a problem, but CPU cache could.
Jon Hanna
+2  A: 

This code should be using Interlocked.CompareExchange to read the value in the property getter. Set the third parameter (the comparand) to something you know can never appear in the variable, such as Int32.MinValue; since the comparand never matches, nothing is written and the call simply returns the current value of _Count.

If Interlocked operations are used for all accesses to the variable, the lock is redundant since all access via Interlocked class methods is atomic.
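A sketch of the getter described above, assuming the same _Count field as in the question (Int32.MinValue stands in for a value the count can never actually hold):

    Public ReadOnly Property ThreadCount() As Integer
        Get
            ' The comparand (Int32.MinValue) never equals a real count,
            ' so the exchange never happens: CompareExchange simply
            ' returns the current value of _Count, with the same
            ' full-fence semantics as the other Interlocked calls.
            Return Interlocked.CompareExchange(_Count, 0, Int32.MinValue)
        End Get
    End Property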

Steve Townsend
I think I missed something while learning about locks. Can you explain why lock would be necessary if he wasn't using Interlocked, but just a lock in Open/Close?
davsan
@davsan - the Interlocked class uses platform-specific processor instructions to guarantee atomic behaviour when accessing variables of primitive type like `Int32` and `Int64`, or native pointers. If you can make sure all access to `_Count` is via `Interlocked`, this will be the most efficient way to handle thread safety on `_Count`. Any higher-level lock will be more costly. Otherwise, get rid of the `Interlocked` usage and use `SyncLock CountLock` everywhere (you need a dedicated lock object; you cannot lock on the Integer itself). It does not make sense to mix and match.
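For illustration, the all-SyncLock variant of the question's Open / Close could look like this (a sketch, reusing the question's CountLock object; together with the existing SyncLock in the getter, every access to _Count then goes through the same lock):

    Public Sub Open()
        SyncLock CountLock
            _Count += 1
        End SyncLock
    End Sub

    Public Sub Close()
        SyncLock CountLock
            _Count -= 1
        End SyncLock
    End Sub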
Steve Townsend
@Steve - OK, I understood what you've explained. I think what I'm missing is this: afaik, `SyncLock CountLock ... Return _Count ... End SyncLock` means only one thread can execute the `Return _Count` section at a time, so what would be the disadvantage if multiple threads executed that same section simultaneously? Is there some read-safety issue I'm unaware of?
davsan
@davsan - this might work so long as _count is not used anywhere else. However I am not sure how reliable it is to mix Interlocked operations (which lock at the machine level) with non-interlocked (eg. protected using managed code lock()). I am not sure if an atomic read is guaranteed if you happen to get inside the managed code lock() while another thread is updating the underlying memory via `Interlocked.Decrement` or `Interlocked.Increment`. I would not mix these data protection types unless you are sure.
Steve Townsend
@Steve - oh, now I get it. thank you :)
davsan
@davsan - see also my question on this topic: http://stackoverflow.com/questions/3855904/interlocked-class-mixed-with-lock
Steve Townsend
+2  A: 

I have no idea why the author chose to use a lock in one part of the class while using lock-free techniques in the other parts. However, my assumption is that the author did it to create an explicit memory barrier on the read of the Integer. VB does not have an equivalent of C#'s volatile keyword, which leaves four other common methods for making the read safe. I have listed these in the order I would choose for this specific scenario.

  • Interlocked.CompareExchange
  • Thread.VolatileRead
  • Thread.MemoryBarrier
  • SyncLock

The memory barrier is required to prevent the VB or JIT compilers from moving instructions around. The most likely optimization in the absence of a memory barrier is to lift the read outside of a loop. Consider this realistic use of the ThreadCount property.

Sub LoggingThread()
  Do While True
    Trace.WriteLine(ThreadPool.ThreadCount)
  Loop
End Sub

In this example the CLR would likely inline ThreadCount and then potentially "lift" the read of _Count and cache it in a CPU register before the loop begins. The effect would be that the same value would always be displayed.[1]

[1] In reality the Trace.WriteLine call itself generates a memory barrier that would cause the code to be safe by accident. The example was intended as a simple illustration of what could happen.
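For comparison, the Thread.VolatileRead option from the list above could be sketched as follows (it takes the field ByRef and returns its value with a memory barrier, without acquiring any lock):

    Public ReadOnly Property ThreadCount() As Integer
        Get
            Return Thread.VolatileRead(_Count)
        End Get
    End Property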

Brian Gideon
+1  A: 

The lock will force a memory barrier, so that a stale value from the CPU cache isn't read if the last value written was written by a different CPU. The same could be done with Thread.VolatileRead() without locking.

Jon Hanna