I have a GUI C# application that has a single button Start/Stop.

Originally this GUI was creating a single instance of a class that queries a database and performs some actions if there are results and gets a single "task" at a time from the database.

I was then asked to try to utilize all the computing power on some of the 8 core systems. Using the number of processors, I figure I can create that many instances of my class, run them all in parallel, and come pretty close to using a fair amount of the computing power.

Environment.ProcessorCount;

Using this value, in the GUI form, I have been trying to loop ProcessorCount times, each time starting a new thread that calls a "doWork" type method on the class, then sleeping for 1 second (to ensure the initial query gets through) before proceeding to the next iteration of the loop.

I kept running into issues with this, however, because it seemed to wait until the loop was completed before starting the queries, leading to a collision of some sort (multiple threads getting the same value from the MySQL database).

In the main form, once it starts the "workers", it changes the button text to STOP; if the button is hit again, it should call a "stopWork" method on each "worker".

Does what I am trying to accomplish make sense? Is there a better way to do this (that doesn't involve restructuring the worker class)?

+1  A: 

I would assume that the easiest way to increase your use of the processors would be to simply spawn the worker methods on threads from the ThreadPool (by calling ThreadPool.QueueUserWorkItem). If you do this in a loop, the runtime will pick up threads from the thread pool and run the worker threads in parallel.

ThreadPool.QueueUserWorkItem(state => DoWork());
Fredrik Mörk
+1  A: 

Never use Sleep for thread synchronization.

Your question doesn't supply enough detail, but you might want to use a ManualResetEvent to make the workers wait for the initial query.
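A minimal sketch of that idea, assuming the worker method and the initial query are placeholders for your own code: every worker blocks on a shared ManualResetEvent until the main thread signals that the first query has completed.

```csharp
using System;
using System.Threading;

class StartupGate
{
    // Set once the initial query has completed; workers block until then.
    static ManualResetEvent initialQueryDone = new ManualResetEvent(false);

    static void Worker()
    {
        initialQueryDone.WaitOne();   // blocks here until Set() is called
        // ... query the database and process tasks ...
    }

    static void Main()
    {
        for (int i = 0; i < Environment.ProcessorCount; i++)
            new Thread(Worker).Start();

        // RunInitialQuery();  -- hypothetical first query
        initialQueryDone.Set();       // release all waiting workers at once
    }
}
```

Unlike Sleep, this releases the workers the moment the query is done, and never releases them early.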

SLaks
+1  A: 

Yes, it makes sense what you are trying to do.

It would make sense to make 8 workers, each consuming tasks from a queue. You should take care to synchronize threads properly, if they need to access shared state. From your description of your problem, it sounds like you are having a thread synchronization problem.

You should remember that you can only update the GUI from the GUI thread. That might also be the source of your problems.
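The standard WinForms pattern for this is InvokeRequired/Invoke; a sketch, where the label and method names are illustrative:

```csharp
using System;
using System.Windows.Forms;

public partial class MainForm : Form
{
    Label statusLabel = new Label();   // hypothetical status control

    // Safe to call from any worker thread; marshals onto the GUI thread.
    void ReportStatus(string text)
    {
        if (statusLabel.InvokeRequired)
            statusLabel.Invoke(new Action<string>(ReportStatus), text);
        else
            statusLabel.Text = text;   // now running on the GUI thread
    }
}
```

Calling statusLabel.Text = text directly from a worker thread would throw a cross-thread operation exception (or silently corrupt state on older runtimes).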

There is really no way to tell, what exactly the problem is, without more information or a code example.

driis
Yes, sounds like thread safety/synch is the key here...
GalacticCowboy
+2  A: 

Restructure your design so you have one thread running in the background checking your database for work to do.

When it finds work to do, spawn a new thread for each work item.

Don't forget to use synchronization tools, such as semaphores and mutexes, for the key limited resources. Fine tuning the synchronization is worth your time.

You could also experiment with the maximum number of worker threads - my guess is that it would be a few over your current number of processors.
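As a sketch of the synchronization point, a Semaphore can cap how many workers hit the limited resource (here, hypothetically, the database) at once; the slot count of 2 is an arbitrary tuning value:

```csharp
using System;
using System.Threading;

class WorkerPool
{
    // At most 2 threads may query the database at once (tune to taste).
    static Semaphore dbSlots = new Semaphore(2, 2);

    public static void DoWork()
    {
        dbSlots.WaitOne();        // acquire a slot, blocking if none free
        try
        {
            // FetchTaskFromDatabase();  -- hypothetical DB call
        }
        finally
        {
            dbSlots.Release();    // always return the slot
        }
    }
}
```

The try/finally guarantees the slot is returned even if the work throws.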

Robert Venables
As it looks now, this seems to be the best solution for me. I think restructuring a little will also solve some of my other issues with shared data handling at the same time. The final design I will be attempting is to have it query for a single task at a time, claim one (if any), and pass it into the worker class once all the shared data is taken care of. From there, there should be no issues with colliding writes/reads, etc.
TaRDy
A: 

I suspect you have a problem like this: you need to make a copy of the loop variable (task) into currentTask; otherwise the threads all actually share the same variable.

// main thread
var tasks = db.GetTasks();
foreach (var task in tasks) {
   var currentTask = task;   // copy the loop variable for the closure
   ThreadPool.QueueUserWorkItem(state => DoTask(currentTask));
   // or: new Thread(() => DoTask(currentTask)).Start();
   // ThreadPool.QueueUserWorkItem(state => DoTask(task)); // this doesn't work!
}

Note that you shouldn't Thread.Sleep() on the main thread to wait for the worker threads to finish. If you're using the thread pool, you can continue to queue work items; if you want to wait for the executing tasks to finish, you should use something like an AutoResetEvent to wait for the threads to complete.
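Since thread-pool threads can't be Join()ed, one sketch of "wait for everything queued to finish" combines an interlocked counter with an event (the per-item work is a placeholder):

```csharp
using System;
using System.Threading;

public class WaitForPool
{
    static int pending;
    static int completed;
    static ManualResetEvent allDone = new ManualResetEvent(false);

    // Queues 'count' work items and blocks until every one has run.
    public static int QueueTasks(int count)
    {
        if (count == 0) return 0;
        pending = count;
        completed = 0;
        allDone.Reset();

        for (int i = 0; i < count; i++)
        {
            ThreadPool.QueueUserWorkItem(state =>
            {
                // DoTask();  -- hypothetical per-item work goes here
                Interlocked.Increment(ref completed);
                if (Interlocked.Decrement(ref pending) == 0)
                    allDone.Set();          // last worker signals completion
            });
        }

        allDone.WaitOne();                  // block until all items finish
        return completed;
    }
}
```

The last work item to decrement the counter is the one that sets the event, so the main thread wakes exactly once, after all items have run.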

Jimmy
+2  A: 

While an exhaustive answer on the best practices of multithreaded development is a little beyond what I can write here, a couple of things:

  1. Don't use Sleep() to wait for something to continue unless ABSOLUTELY necessary. If you have another code process that you need to wait for completion, you can either Join() that thread or use a ManualResetEvent or AutoResetEvent. There is a lot of information on MSDN about their usage. Take some time to read over it.
  2. You can't really guarantee that your threads will each run on their own core. While it's entirely likely that the OS thread scheduler will do this, just be aware that it isn't guaranteed.
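The Join() option from point 1 can be sketched like this (the sleep stands in for real work):

```csharp
using System;
using System.Threading;

class JoinExample
{
    static void Main()
    {
        Thread worker = new Thread(() =>
        {
            Thread.Sleep(100);    // stand-in for real work
        });

        worker.Start();
        worker.Join();            // blocks until the worker has finished
        Console.WriteLine("worker finished");
    }
}
```

Join() blocks the calling thread precisely as long as needed, with no guessing at durations.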
Adam Robinson
really, use Join with the time parameter, i.e. thread.Join(1000);
Bogdan_Ch
@Bogdan: That's certainly an option, but hardly the only or the "best". For one, you're assuming that a timeout is appropriate. For another, you're assuming that you don't want to continue until the other thread finishes. A signal object like a ManualResetEvent or AutoResetEvent is another equally appropriate approach depending on what you're doing.
Adam Robinson
A: 

You seem to be encountering a common issue with multithreaded programming. It's called a Race Condition, and you'd do well to do some research on this and other multithreading issues before proceeding too far. It's very easy to quickly mess up all your data.

The short of it is that you must ensure all your commands to your database (eg: Get an available task) are performed within the scope of a single transaction.

I don't know MySQL well enough to give a complete answer; however, a very basic example for T-SQL might look like this:

BEGIN TRAN
DECLARE @taskid int
SELECT TOP 1 @taskid = taskid FROM tasks WHERE assigned = 0
UPDATE tasks SET assigned = 1 WHERE taskid = @taskid
SELECT * FROM tasks WHERE taskid = @taskid
COMMIT TRAN

MySQL 5 and above has support for transactions too.
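From the C# side, the same claim-a-task pattern might be driven through ADO.NET; this is a sketch only (table, column, and connection details are illustrative, it assumes MySQL Connector/NET and an InnoDB table, and it needs a live database to actually run):

```csharp
using System;
using System.Data;
using MySql.Data.MySqlClient;   // MySQL Connector/NET (assumed available)

class TaskClaimer
{
    // Claims one unassigned task inside a single transaction so two
    // workers can never both see the same row as available.
    public static int? ClaimTask(string connectionString)
    {
        using (var conn = new MySqlConnection(connectionString))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction(IsolationLevel.Serializable))
            {
                // FOR UPDATE locks the selected row until the transaction ends.
                var select = new MySqlCommand(
                    "SELECT taskid FROM tasks WHERE assigned = 0 LIMIT 1 FOR UPDATE",
                    conn, tx);
                object id = select.ExecuteScalar();
                if (id == null) { tx.Rollback(); return null; }

                var update = new MySqlCommand(
                    "UPDATE tasks SET assigned = 1 WHERE taskid = @id", conn, tx);
                update.Parameters.AddWithValue("@id", id);
                update.ExecuteNonQuery();

                tx.Commit();
                return Convert.ToInt32(id);
            }
        }
    }
}
```

The SELECT ... FOR UPDATE inside the transaction is what prevents two workers from claiming the same row.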

Will Hughes
A: 

You could also put a lock around the "fetch task from DB" code; that way only one thread will query the database at a time - but obviously this decreases the performance gain somewhat.

Some code of what you're doing (and maybe some SQL, this really depends) would be a huge help.

However assuming you're fetching a task from DB, and these tasks require some time in C#, you likely want something like this:

object myLock;

void StartWorking()
{
    myLock = new object(); // only new it once; could be done in your constructor too.
    for (int i = 0; i < Environment.ProcessorCount; i++)
    {
        ThreadPool.QueueUserWorkItem(DoWork);
    }
}

void DoWork(object state)
{
    object task;
    lock (myLock)
    {
        task = GetTaskFromDB();
    }

    PerformTask(task);
}
Steffen
A: 

There are some good ideas posted above. One of the things that we ran into is that we wanted not only a multi-processor capable application but a multi-server capable application as well. Depending upon your application, we use a queue that gets wrapped in a lock through a common web server (causing others to block) while we get the next item to be processed.

In our case, we are processing lots of data. To keep things simple, we lock an object, get the id of the next unprocessed item, flag it as being processed, unlock the object, hand the record id back to the main thread on the calling server, and then it gets processed. This seems to work well for us since the time it takes to lock, get, update, and release is very small, and while blocking does occur, we never run into a deadlock situation while waiting for resources (because we use lock(object) { } with a nice tight try/catch inside to ensure we handle errors gracefully).

As mentioned elsewhere, all of this is handled in the primary thread. Given the information to be processed, we push it to a new thread (which, for us, retrieves 100 MB of data and processes it per call). This approach has allowed us to scale beyond a single server. In the past we had to throw high-end hardware at the problem; now we can throw several cheaper, but still very capable, servers at it. We can also spread this across our virtualization farm in low-utilization periods.

One other thing I failed to mention: we also use locking mutexes inside our stored proc, so if two apps on two servers call it at the same time, it's handled gracefully. So the concept above applies both to our app and to the database. Our client's backend is the MySQL 5.1 series, and it is done with just a few lines.

One of the things that I think people forget when they are developing is that you want to get in and out of the lock relatively quickly. If you want to return large chunks of data, I personally wouldn't do it inside the lock itself unless you really had to. Otherwise, you can't really do much multithreading if everyone is waiting to get data.

Okay, I found my MySQL code for doing just what you will need.

 DELIMITER //

 CREATE PROCEDURE getnextid(
  I_service_entity_id INT(11)
   , OUT O_tag VARCHAR(36)
 )

 BEGIN
   DECLARE L_tag VARCHAR(36) DEFAULT '00000000-0000-0000-0000-000000000000';
   DECLARE L_locked INT DEFAULT 0;

   DECLARE C_next CURSOR FOR
  SELECT tag FROM workitems
  WHERE status IN (0)
    AND processable_date <= DATE_ADD(NOW(), INTERVAL 5 MINUTE)
  ;

   DECLARE EXIT HANDLER FOR NOT FOUND
   BEGIN
  SET O_tag := '00000000-0000-0000-0000-000000000000';
  DO RELEASE_LOCK('myuniquelockis');
   END;

   SELECT COALESCE(GET_LOCK('myuniquelockis', 20), 0) INTO L_locked;
   IF L_locked > 0 THEN
  OPEN C_next;
  FETCH C_next INTO L_tag;
  IF L_tag <> '00000000-0000-0000-0000-000000000000' THEN
    UPDATE workitems SET
     status = 1
   , service_entity_id = I_service_entity_id
   , date_locked = NOW()
    WHERE tag = L_tag;

  END IF;
  CLOSE C_next;
  DO RELEASE_LOCK('myuniquelockis');
   END IF;
   SET O_tag := L_tag;
 END
 //

 DELIMITER ;

In our case, we return a GUID to C# as an OUT parameter. You could replace the SET at the end with SELECT L_tag; and be done with it and lose the OUT parameter, but we call this from another wrapper...

Hope this helps.