class Unit {
    private readonly string name;
    private readonly double scale;

    public Unit(string name, double scale) {
        this.name = name;
        this.scale = scale;
    }

    public string Name { get { return name; } }
    public double Scale { get { return scale; } }

    private static readonly Unit gram = new Unit("Gram", 1.0);

    public static Unit Gram { get { return gram; } }
}

Multiple threads have access to Unit.Gram. Why is it OK for multiple threads to simultaneously read Unit.Gram.Name?

My concern is that they are referring to the same memory location. One thread starts reading that memory, so aren't the other threads "locked out" then? Does .NET handle synchronization for this critical section underneath? Or am I wrong in thinking that simultaneous reading needs synchronization?

+4  A: 

If the object is immutable, its state will never change, so concerns about stale data go out the window. A thread reading it never takes a lock, so deadlock is a non-issue.

Woot4Moo
+41  A: 

What makes an object not thread safe? An object is not thread safe if the value/state of that object can change while a thread is reading it. This generally happens if a second thread changes this object's value while the first thread is reading it.

An immutable object, by definition, cannot change value/state. Since every time you read an immutable object it has the same value/state, you can have any number of threads read that object with no concerns.
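
For illustration, here is a minimal sketch (assuming the Unit class from the question, with Gram exposed as a static property) of many threads reading the same immutable instance at once without taking a single lock:

using System;
using System.Threading.Tasks;

class ImmutableReadDemo {
    static void Main() {
        // Ten tasks read Unit.Gram.Name and Unit.Gram.Scale at the same time.
        // Because the fields are readonly and never change after construction,
        // every reader sees the same value and no synchronization is needed.
        Parallel.For(0, 10, i =>
            Console.WriteLine(i + ": " + Unit.Gram.Name + " scale " + Unit.Gram.Scale));
    }
}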

Starkey
You may add that when an immutable object is initialized, its state *does* change, but the CLR guarantees that this happens in a thread-safe way (i.e., it is not possible for another thread to read the object while the initialization in the ctor or cctor has not finished).
Abel
+7  A: 

Simultaneous reads do not need synchronization. Synchronization is only required when there is a writer (or readers plus at least one writer), so immutable objects don't require synchronization and are therefore thread-safe.
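
To make the contrast concrete, here is a rough sketch (class and member names invented for the example) of a value that does need coordination because a writer exists; a ReaderWriterLockSlim still lets any number of readers run in parallel, but excludes them while the single writer runs:

using System.Threading;

class MutableSetting {
    private readonly ReaderWriterLockSlim rw = new ReaderWriterLockSlim();
    private double value;

    public double Read() {
        rw.EnterReadLock();            // any number of readers may hold this at once
        try { return value; }
        finally { rw.ExitReadLock(); }
    }

    public void Write(double newValue) {
        rw.EnterWriteLock();           // the writer is exclusive: no readers, no other writers
        try { value = newValue; }
        finally { rw.ExitWriteLock(); }
    }
}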

Gabe
+3  A: 

Concurrent reads do not require synchronization in the vast majority of cases (exceptions are things like memory-mapped I/O, where reading from a certain address can cause side effects).

If concurrent reads did require synchronization, it would be close to impossible to write usefully multi-threaded code. To execute a piece of code, for instance, the processor has to read the instruction stream; how would you write a lock function if the function itself had to protect against being executed concurrently :)?

Logan Capaldo
To be fair, at some level, the hardware does have to deal with these issues. But it's done in hardware and isn't even visible to the OS, to say nothing of the .NET application writer.
siride
@siride, I actually think I should accept your comment as the answer because it actually answers the question. **Synchronization is handled underneath.** After that it is easy to understand immutability IMHO.
randomguy
@randomguy, Well, _some_ hardware does have to deal with these issues at some levels, but it also introduces the needs in the first place at some levels as well. If you want to say it is "handled underneath" what you have to say is "on some computer designs where concurrent reads could cause problems due to design choices, the possibility of software being affected by this is eliminated through various mechanisms, other designs are not affected by concurrent reads, and in still others concurrent reads are impossible."
Logan Capaldo
@Logan, agreed. Nicely expressed.
randomguy
+1  A: 

The memory management unit (MMU) is the part of the processor that handles reads from memory. If you have more than one, once in a blue moon two of them might try to read the same memory location within the same dozen nanoseconds, but no problems result, since they both get the same answer.

CrazyJugglerDrummer
+1  A: 

Apart from exceptions such as memory mapped for drivers, there is no problem with two threads simultaneously reading the same memory address. A problem may arise when one thread writes data, though. In that case, other threads may have to be prevented from reading that object/data.

But the problem is not the simultaneity of the accesses (at the lowest electronic level they occur one after the other anyway); the problem is rather that the object or set of data may lose its consistency. Usually one uses a critical section to isolate code that must not be read/written simultaneously by other threads.

There are many examples on the Net, but consider the following, where price is a private member of a class, say Product, which also has two methods:

public void setPrice(int value) {
  price = value;
  // -- CRITICAL point --
  price += TAX;
}

public int getPrice() {
  return price;
}

setPrice(v) sets the price of a product to v and adjusts it with VAT (the program should have value += TAX; price = value but this is not the point here :-)

If thread A writes price 100 and TAX is (a fixed) 1, the product price will finally be set to 101. But what happens if thread B reads the price via getPrice() while thread A is at the CRITICAL point? The price returned to B will miss the TAX and be wrong.

setPrice() should be wrapped in a critical section (lock), to prevent any access to the object while the price is being set:

    lock(this)
    {
      price = value;
      price += TAX;
    }
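
For completeness, a self-contained version of this hypothetical Product class could look like the following sketch; it locks a dedicated private object rather than this, and takes the same lock in the getter so a reader can never observe the price before the TAX has been added:

class Product {
    private const int TAX = 1;                 // assumed fixed tax, as in the example above
    private readonly object priceLock = new object();
    private int price;

    public void SetPrice(int value) {
        lock (priceLock) {                     // writer: nobody can read until the block ends
            price = value;
            price += TAX;
        }
    }

    public int GetPrice() {
        lock (priceLock) { return price; }     // reader: never sees the intermediate value
    }
}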
ring0
+5  A: 

I think your question turns out not to be about thread-safety or immutability but about the (very) low-level details of memory access.

And that is a hefty subject, but the short answer is: yes, two threads (and more importantly, 2+ CPUs) can read (and/or write) the same piece of memory simultaneously.

And as long as the content of that memory area is immutable, all problems are solved. When it can change, there is a whole range of issues; the volatile keyword and the Interlocked class are some of the tools we use to solve those.
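
As a rough illustration of those tools (a sketch with invented names, not a complete treatment), Interlocked turns a read-modify-write on shared mutable state into one atomic operation, and volatile keeps a reader from working with a stale cached value:

using System.Threading;

class HitCounter {
    private int hits;                        // shared, mutable state
    private volatile bool stopped;           // volatile: readers always see the latest write

    public void Hit() {
        Interlocked.Increment(ref hits);     // atomic increment, no lock required
    }

    public int Hits {
        get { return Thread.VolatileRead(ref hits); }  // read the freshest value
    }

    public void Stop()  { stopped = true; }
    public bool Stopped { get { return stopped; } }
}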

Henk Holterman
On a low level, as you say, I believe simultaneous access of memory does not happen. Processors cannot access the same memory at the same time, unless that memory happens to be cached. Otherwise the memory bus must be used, which can be used by only one processor at a time. This stalls processors and limits the performance gain multiple processors could give (memory access is many times slower than the CPU). One remedy is the [NUMA concept or architecture](http://en.wikipedia.org/wiki/Non-Uniform_Memory_Access), where separate buses are used to access (still separate) memory locations.
Abel
@Abel, you're right but I called it 'the short story': As far as we (programmers) can tell, it is simultaneous. Even for assembly language programmers.
Henk Holterman
@Henk, I believe you are correct. But the ramification of memory read synchronization being handled transparently is, exactly, immutability ergo thread-safety. The actual answer seems to be that **it is handled underneath**, so questioning whether it is handled wasn't so stupid after all. :)
randomguy
@random: I didn't say it was stupid, just slightly mismatched with the title (-:
Henk Holterman
+2  A: 

A specific thread reads a given memory location in a single CPU cycle. The order of the reads does not matter here, since the underlying value does not change. Therefore a read can never be inconsistent, so there is no need for synchronization at any level in this case.

venky
I think this also answers the question really well. Would you care to expand this answer?
randomguy
Actually, 2 CPUs could read the same address in the same clock-tick (and see different values). That's because they would each read from their own cache.
Henk Holterman
That is correct, but for the purposes of the question and simplification I had to assume a single-CPU situation. However, the only part that changes in a multi-CPU situation is the possibility of memory caching, and the value is still the same. Even though the variable gram is declared static, there is no "setter" for it and it is initialized only once (to a Unit with scale 1.0). Therefore it is immutable and does not need any program synchronization.
venky