Sorry if I am asking the same question again, but I want to verify!

I have two processes P1 and P2.

P1 is a writer (Producer).
P2 is a reader (Consumer).

There is some shared memory or a file that P1 writes to, and as soon as P1 has written, P2 should be notified so it can read.

Now, as per my understanding, the pseudocode for P1 should be:

Open shared file
Create a named event ("Writedone") to signal P2 that a write is done
Do some processing on file
Mutex.Lock()
Write to File
Mutex.Unlock()
Signal named Event.
CloseHandle on file

Now in P2

Open handle to Shared file
Open handle to named event
WaitForSingleObject on the named event ("Writedone")
Read from file
CloseHandle on file

Questions:

  1. Is it required to have locks in the reader? The reader will just read the file and not change it, so I guess no locks are required there. Thoughts? Can it go wrong in some case without locks?
  2. I am opening and closing handles to the file every time during reading and writing. I think that is not required: I can open the file handle in the constructor and close it in the destructor of the reader and writer. But can I read from the file while it is being written to?

EDIT: Each time, the writer appends 10 bytes at the end of the file, and the reader is supposed to read the latest 10 bytes written by the writer.

A: 

You need the reader to take a lock; the use of events is no substitute. Without it, the writer could begin writing at any point in the reader code.

anon
A: 

You absolutely need the Consumer to lock in order to prevent the Producer from appending to the file before the reader can read. Imagine this scenario:

Producer writes and signals
Consumer receives signal
Consumer opens the file
Producer fires again and writes another 10 bytes
Producer signals
Consumer reads the last 10 bytes
Consumer closes the file

What happens next depends on whether your named event is manual-reset or auto-reset. If it's auto-reset, the Consumer will see the second signal and go back and read the same thing again. If it's manual-reset, the Consumer will reset the event and miss the last thing the Producer wrote.

Note that even with the lock you have a race condition if the Producer can respond quickly enough. That is, the Producer might be able to put a second record into the file before the Consumer is able to read the first.

It appears that what you have here is a FIFO queue implemented in a file, and you're depending on the Consumer's ability to process data faster than the Producer can create it. If you can guarantee that behavior, then you're okay. Otherwise the Consumer will have to keep track of where it last read so that it knows where it should read next.
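That offset-tracking consumer can be sketched as follows. Python is used here as neutral, runnable pseudocode (the original targets Win32); the fixed 10-byte record size comes from the question's edit, and the file/record names are made up for the demo:

```python
import os
import tempfile

RECORD_SIZE = 10  # fixed record size from the question's edit

def read_new_records(path, offset):
    """Read every complete record appended since `offset`.
    Returns (records, new_offset); a partial trailing record is left for next time."""
    records = []
    with open(path, "rb") as f:
        f.seek(offset)
        while True:
            chunk = f.read(RECORD_SIZE)
            if len(chunk) < RECORD_SIZE:
                break                     # incomplete record: wait for the writer
            records.append(chunk)
            offset += RECORD_SIZE
    return records, offset

# Demo: the producer fires twice before the consumer wakes up; because the
# consumer tracks its offset, nothing is lost and nothing is read twice.
path = os.path.join(tempfile.mkdtemp(), "queue.dat")
with open(path, "ab") as f:
    f.write(b"record-01\n")
    f.write(b"record-02\n")
recs, pos = read_new_records(path, 0)
# recs == [b"record-01\n", b"record-02\n"]; pos == 20
```

Persisting `pos` somewhere (or keeping it in the long-lived reader) is what turns the file into a proper FIFO queue rather than a "latest 10 bytes" slot.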

Jim Mischel
A: 
  1. You do need to lock the mutex in the reader, if the writer can start writing at any time. Make sure the mutex is a named one so P2 can open it.

  2. If you open the file with FileShare.ReadWrite in both processes, you can leave it open.

In the reader, you may have to Seek to the place you hit EOF before you can read again.

If you are sure the writer is always appending, you can tell where a record ends (because records are always 10 bytes, for example), the writer always writes complete records, and you can accept a small delay, then you can do this without mutexes and events at all. Open the file with FileShare.ReadWrite, and in the reader keep seeking to the same place and trying to read your record, sleeping for a second whenever you couldn't. If you manage to read a whole record, you have one; work out your new position, seek there, and try to read again. This is roughly how tail -f works on Unix.
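A runnable sketch of that polling loop, in Python rather than .NET for illustration (the sharing-mode details are left to the OS here; the 10-byte record size and the poll interval are assumptions for the demo):

```python
import os
import tempfile
import time

RECORD_SIZE = 10  # complete records are always 10 bytes (per the question)

def tail_records(path, offset, max_polls=50, poll_interval=0.05):
    """Poll for one complete record at `offset`, tail -f style.
    No mutex or event: we rely on the writer only ever appending whole records."""
    for _ in range(max_polls):
        with open(path, "rb") as f:
            f.seek(offset)
            chunk = f.read(RECORD_SIZE)
        if len(chunk) == RECORD_SIZE:
            return chunk, offset + RECORD_SIZE
        time.sleep(poll_interval)          # nothing complete yet; try again
    return None, offset                    # gave up; caller keeps the old offset

# Demo: one complete record is present; a second has not been written yet.
path = os.path.join(tempfile.mkdtemp(), "log.dat")
with open(path, "ab") as f:
    f.write(b"abcdefghij")                 # one complete 10-byte record
record, next_offset = tail_records(path, 0)
missing, same = tail_records(path, next_offset, max_polls=2, poll_interval=0.01)
# record == b"abcdefghij"; missing is None; same == 10
```

The key property making this safe is the one Carlos states: the writer appends complete records only, so a short read always means "not ready yet", never "torn data".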

Carlos A. Ibarra
A: 

The answer is: locking is necessary if (and only if) both processes can use the same shared resource at the same time. There isn't enough information about your specific implementation, but I have a few remarks:

  1. Locking only during writing makes no sense. It only adds overhead; it does not prevent concurrent access unless the reader also locks correctly.
  2. Locking would be necessary if the file operations that modify structures associated with the file descriptor are not themselves synchronized. It may happen that P1 starts writing to the file while P2 is still reading; if the read and write operations modify the same system structures without any underlying synchronization, you will end up with corrupted data. It's hard to say whether this is the case here, because you didn't mention which particular functions (libraries) you use. File operations are synchronized on most systems, so it shouldn't be a problem.
  3. Given the fixed "10-byte portions of information", explicit locking seems unnecessary (unless #2 imposes it). P1 produces a quantum of data. When the data is ready to be read, P1 notifies P2 (via the event; event delivery should be internally synchronized anyway). P2 knows it can read one quantum of data and then must wait for the next notification. It may happen that a subsequent notification is sent before the previous one has been handled, so the events need to be queued somehow. You can also use a semaphore instead of event notification.
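The semaphore idea in #3 — one permit per unread quantum, so notifications queue up instead of being lost — might look like this sketch (Python threads stand in for the two processes; a real cross-process version on Windows would use a named semaphore and mutex, and the record contents here are invented):

```python
import io
import threading

RECORD_SIZE = 10                   # fixed record size from the question
buf = io.BytesIO()                 # stands in for the shared file
buf_lock = threading.Lock()        # stands in for the named mutex
pending = threading.Semaphore(0)   # one permit per unread record
consumed = []

def producer():
    for i in range(3):
        with buf_lock:
            buf.write(b"record-%03d" % i)  # exactly 10 bytes per record
        pending.release()                  # queue one notification per record

def consumer():
    offset = 0
    for _ in range(3):
        pending.acquire()                  # blocks until a record is unread
        with buf_lock:
            data = buf.getvalue()[offset:offset + RECORD_SIZE]
        consumed.append(data)
        offset += RECORD_SIZE

c = threading.Thread(target=consumer)
p = threading.Thread(target=producer)
c.start(); p.start(); p.join(); c.join()
# consumed == [b"record-000", b"record-001", b"record-002"]
```

Because the semaphore counts, three quick `release()` calls leave three permits; nothing is coalesced the way a second signal on an already-set event would be.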
oo_olo_oo
A: 

Apart from normal synchronization functions, you can use the file change notification API on Windows to wait for file changes.
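A portable stand-in for that idea (the real Windows APIs are FindFirstChangeNotification / ReadDirectoryChangesW; this sketch merely polls the file size, which approximates the notification but is not the notification API itself):

```python
import os
import tempfile
import time

def wait_for_growth(path, last_size, timeout=2.0, interval=0.05):
    """Block until the file grows past `last_size`, or until `timeout` elapses.
    A crude, portable approximation of a file-change notification."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        size = os.path.getsize(path)
        if size > last_size:
            return size
        time.sleep(interval)
    return last_size  # timed out; nothing new

# Demo: the writer appends one 10-byte record, and the waiter notices.
path = os.path.join(tempfile.mkdtemp(), "shared.dat")
open(path, "wb").close()
with open(path, "ab") as f:
    f.write(b"0123456789")
new_size = wait_for_growth(path, 0)
# new_size == 10
```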

jpalecek