I have a class, ProcessorClass, with two methods, Start() and Stop(). When you call Start(), it internally creates a number of threads that do some processing and, if they find something relevant, fire an event.

My form, frmMain (this is a Windows Forms app on .NET 3.5), holds an instance of this class and handles these events. Each time an event is handled, a record gets inserted into the database. Am I right in concluding that this strategy ensures synchronous inserts? I can't have multiple threads performing data operations at once, as that would cause me to lose data.

+1  A: 

This strategy will not ensure that it all happens on one thread. The events are handled on the thread that fires them. You should use a static lock object and lock on it (lock (lockObject) { ... }).

P.S. Why would multiple threads inserting into a DB cause data loss? What database?
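A minimal sketch of that pattern (the handler name and signature are illustrative, not from the question): the handler takes a static lock, so only one thread at a time performs the insert, no matter which worker thread raised the event.

```csharp
public class ProcessorEventHandlers
{
    // One static lock object shared by every handler invocation.
    private static readonly object _dbLock = new object();

    // Hypothetical handler; wire it to the event ProcessorClass raises.
    public void OnItemFound(object sender, EventArgs e)
    {
        lock (_dbLock)
        {
            // Only one thread at a time reaches this point, so the
            // database insert performed here cannot run concurrently.
        }
    }
}
```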

Kirk Woll
Records are pulled, processed, and inserted or updated. I will try putting a lock inside the event handler, but there's some processing going on, so I need to figure out an intelligent way to do it without losing too much performance.
Ready Cent
Have you tried doing the work in a transaction?
Steven Sudit
If you lock in code, you are effectively serializing all your database operations.
Cylon Cat
+1  A: 

Place a lock around the section of code that you want to be mutually exclusive.

private static readonly object lockObject = new object();

public void InsertIntoDataBase()
{
    lock (lockObject)
    {
        // Insert something into database.
    }
}

Any calls to the InsertIntoDataBase function will block until the lock is released and then execute one at a time. (Note that .NET's Monitor does not guarantee that waiting threads acquire the lock in arrival order.)
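To see the mutual exclusion in action, here is a self-contained sketch (not from the answer) where the "insert" is replaced by a counter increment; ten threads hammer the method and no updates are lost because only one thread holds the lock at a time.

```csharp
using System;
using System.Threading;

class LockDemo
{
    private static readonly object _sync = new object();
    public static int Count = 0;

    public static void InsertIntoDataBase()
    {
        lock (_sync)
        {
            // Stand-in for the real database insert: the increment is
            // safe because only one thread holds the lock at a time.
            Count++;
        }
    }

    public static void Run()
    {
        var threads = new Thread[10];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int j = 0; j < 1000; j++) InsertIntoDataBase();
            });
            threads[i].Start();
        }
        foreach (var t in threads) t.Join();
    }

    static void Main()
    {
        Run();
        Console.WriteLine(Count); // 10 threads x 1000 increments, none lost
    }
}
```

Without the lock statement, the unsynchronized increments would race and the final count would usually come up short.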

George
+1  A: 

No, these won't be synchronous inserts, especially on multicore machines. Rather than going back to single-threading, or locking on some object in memory, I'd suggest putting concurrency handling in your database updates. You can do that without locks in the database by using a WHERE clause that checks the old values in the same statement that does the update. If the result is zero rows affected, the concurrency check saved you from data corruption.
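A sketch of that optimistic check (the table and column names here are made up for illustration): the UPDATE only succeeds if the row still holds the value the caller last read, and the affected-row count tells you whether the check passed.

```csharp
using System.Data.SqlClient;

static class OptimisticUpdate
{
    // Assumes a hypothetical Records(Id, Status, Payload) table.
    public static bool TryUpdateRecord(SqlConnection conn, int id,
                                       string expectedStatus, string newPayload)
    {
        var cmd = new SqlCommand(
            "UPDATE Records SET Payload = @payload, Status = 'Processed' " +
            "WHERE Id = @id AND Status = @expectedStatus", conn);
        cmd.Parameters.AddWithValue("@payload", newPayload);
        cmd.Parameters.AddWithValue("@id", id);
        cmd.Parameters.AddWithValue("@expectedStatus", expectedStatus);

        // Zero rows affected means another thread changed the row first;
        // the caller should re-read and retry, per its business rules.
        return cmd.ExecuteNonQuery() == 1;
    }
}
```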

However, if you don't have a retry mechanism that fits your business rules, you'll need to add concurrency specifications to your database transactions (i.e., open a transaction, attach it to your command, and set the transaction's isolation level to the level of safety that you need).
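In SQL Server terms that looks roughly like the following (a sketch, not a definitive implementation; choose the isolation level your rules actually require):

```csharp
using System.Data;
using System.Data.SqlClient;

static class TransactionalInsert
{
    public static void ExecuteInTransaction(SqlConnection conn, string sql)
    {
        // Serializable is the strictest level; ReadCommitted is the
        // SQL Server default and allows more real concurrency.
        SqlTransaction tx = conn.BeginTransaction(IsolationLevel.Serializable);
        try
        {
            var cmd = new SqlCommand(sql, conn, tx); // attach tx to the command
            cmd.ExecuteNonQuery();
            tx.Commit();
        }
        catch
        {
            tx.Rollback(); // undo on failure so no partial writes remain
            throw;
        }
    }
}
```

The granularity point from the comment below applies here: the stricter the isolation level, the more the database serializes your writers for you.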

Cylon Cat
Are you suggesting some form of optimistic locks on the database?
Steven Sudit
Optimistic concurrency works, if you have a recovery or retry path for a failed update. Otherwise, go with pessimistic. Not knowing the database, I don't know the specifics, but in SQL Server, you'd set the IsolationLevel property on the transaction. The granularity of locking determines how much real concurrency you can get. On the other hand, locking in memory means no concurrency at all, so going with database concurrency checking may buy some performance improvement.
Cylon Cat
+1  A: 

Assuming that these events are fired synchronously, the handlers will fire on the producer threads. Whether or not this causes race conditions depends on the underlying database.

If you must ensure that parallel inserts do not occur, you can synchronize with the lock statement.

I am not sure why the handler for the database inserts resides in the form subclass, but given that it does, BeginInvoke (i.e. Control.BeginInvoke) could also be used for synchronization. This is bad practice, however.
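For completeness, the BeginInvoke variant looks roughly like this (the handler name is illustrative): Control.BeginInvoke marshals the call onto the UI thread's message loop, so the inserts end up queued and executed one at a time on that thread — at the cost of tying database work to the form, which is why it's discouraged.

```csharp
using System;
using System.Windows.Forms;

public partial class frmMain : Form
{
    // Hypothetical handler for the processor's event; it may run
    // on any of the worker threads that fired the event.
    private void processor_ItemFound(object sender, EventArgs e)
    {
        // Marshal the work onto the UI thread. Calls queue up on the
        // message loop, so they run serially, never in parallel.
        BeginInvoke(new MethodInvoker(delegate
        {
            // Perform the database insert here, on the UI thread.
        }));
    }
}
```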

Ani
It's because the Processor class is used in several places, and when it is created in this form, additional processing is initiated. That additional processing is handled in a different object. I can't lock the entire process because that would be too inefficient.
Ready Cent