I have an object that requires a lot of initialization (1-2 seconds on a beefy machine), though once it is initialized it only takes about 20 milliseconds to do a typical "job".

In order to prevent it from being re-initialized every time an app wants to use it (which could be 50 times a second, or not at all for minutes, in typical usage), I decided to give it a job queue and have it run on its own thread, checking to see if there is any work for it in the queue. However, I'm not entirely sure how to make a thread that runs indefinitely with or without work.

Here's what I have so far; any critique is welcome.

    private void DoWork()
    {
        while (true)
        {
            if (JobQue.Count > 0)
            {
                // do work on JobQue.Dequeue()
            }
            else
            {
                System.Threading.Thread.Sleep(50);
            }
        }
    }

Afterthought: I was thinking I may need to stop this thread gracefully instead of letting it run forever, so I think I will add a Job type that tells the thread to end. Any thoughts on how to end a thread like this are also appreciated.
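A minimal sketch of that idea (Job and its IsStopRequest flag are placeholder names):

    // Sketch only: "Job" and its IsStopRequest flag are placeholder names.
    private void DoWork()
    {
        while (true)
        {
            if (JobQue.Count > 0)
            {
                Job job = JobQue.Dequeue();
                if (job.IsStopRequest)
                {
                    break;    // special "quit" job: leave the loop and let the thread end
                }
                // do the real work on job
            }
            else
            {
                System.Threading.Thread.Sleep(50);
            }
        }
    }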

+16  A: 

You need to lock anyway, so you can Wait and Pulse:

    while(true) {
        SomeType item;
        lock(queue) {
            while(queue.Count == 0) {
                Monitor.Wait(queue); // releases lock, waits for a Pulse,
                                     // and re-acquires the lock
            }
            item = queue.Dequeue(); // we have the lock, and there's data
        }
        // process item **outside** of the lock
    }

with add like:

    lock(queue) {
        queue.Enqueue(item);
        // if the queue was empty, the worker may be waiting - wake it up
        if(queue.Count == 1) { Monitor.PulseAll(queue); }
    }

You might also want to look at this question, which limits the size of the queue (blocking if it is too full).

Marc Gravell
Classic queue, lock and pulsing.
scope_creep
+1 very nice...
Stan R.
Why write something that's already been written? Pfx, baby.
Will
@Will: AFAIK, the only download for pfx is a CTP - i.e. **not fully supported**. In 4.0 (currently beta), then yes: the TPL etc will be a given - but we still get a **lot** of 2.0 questions here...
Marc Gravell
Marc, can you expand a little on what's going on with Monitor.Wait? This looks very interesting.
Neil N
@Neil - true, true; I retained that from the original question, but yes - a stop/drain would be nice.
Marc Gravell
To `Wait`, you must be holding the lock; when you call `Wait`, you *release* the lock and your thread is placed on the pending queue (not the ready queue - so you don't come back to life even if the lock is available). *Another* thread will (at some point) obtain the lock, and call `Pulse`/`PulseAll`; this moves thread(s) from the pending queue into the ready queue, so that when they release the lock your thread comes *back* into life, obtains the lock (again), and continues. Make sense?
Marc Gravell
@Neil - I also covered more of the exit condition stuff on the other question (linked).
Marc Gravell
Good answer; however, I understand that Pulse and PulseAll were considered a mistake. Specifically, you can miss a Pulse if you do one during a Kernel Dispatcher thread switch.
Jan Bannister
+2  A: 

You need a synchronization primitive, like a WaitHandle (look at the static methods). This way you can 'signal' the worker thread that there is work. It checks the queue and keeps working until the queue is empty, at which point it waits for the handle to be signalled again.

Make one of the job items a quit command too, so that you can signal the worker thread when it's time to exit.
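A minimal sketch of that approach, using an AutoResetEvent as the WaitHandle (Job is a placeholder type, and a null job serves as the quit command):

    // Sketch only: "Job" is a placeholder type; a null job acts as the quit command.
    using System.Collections.Generic;
    using System.Threading;

    class Worker
    {
        private readonly Queue<Job> queue = new Queue<Job>();
        private readonly AutoResetEvent workReady = new AutoResetEvent(false);

        public void Enqueue(Job job)            // pass null to tell the worker to exit
        {
            lock (queue) { queue.Enqueue(job); }
            workReady.Set();                    // signal the worker that there is work
        }

        private void WorkerLoop()
        {
            while (true)
            {
                workReady.WaitOne();            // block until signalled
                while (true)
                {
                    Job job;
                    lock (queue)
                    {
                        if (queue.Count == 0) break;   // drained - go back to waiting
                        job = queue.Dequeue();
                    }
                    if (job == null) return;    // quit command: end the thread
                    // process job outside the lock
                }
            }
        }
    }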

Pieter Breed
+1  A: 

In most cases, I've done this quite similarly to how you've set it up -- but not in the same language. I had the advantage of working with a data structure (in Python) that blocks the thread until an item is put into the queue, negating the need for the sleep call.

If .NET provides a class like that, I'd look into using it. A thread blocking is much better than a thread spinning on sleep calls.

The job you pass could be as simple as a null; if the code receives a null, it knows it's time to break out of the while loop and go home.
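A minimal sketch of that sentinel idea, applied to the Wait/Pulse loop from the first answer:

    // Sketch: a null item plays the "go home" role in the Wait/Pulse loop shown earlier.
    while (true)
    {
        SomeType item;
        lock (queue)
        {
            while (queue.Count == 0)
            {
                Monitor.Wait(queue);     // sleep until a producer pulses the queue
            }
            item = queue.Dequeue();
        }
        if (item == null)
        {
            break;                       // null means "no more work" - exit the loop
        }
        // process item outside of the lock
    }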

Jed Smith
+1  A: 

Grab the Parallel Framework. It has a BlockingCollection<T> which you can use as a job queue. How you'd use it is:

  1. Create the BlockingCollection<T> that will hold your tasks/jobs.
  2. Create some threads that have a never-ending loop (`while (true) { /* get a job off the queue */ }`)
  3. Set the threads going
  4. Add jobs to the collection when they come available

The threads will be blocked until an item appears in the collection. Whoever's turn it is will get it (depends on the CPU). I'm using this now and it works great.

It also has the advantage of relying on MS to write that particularly nasty bit of code where multiple threads access the same resource. And whenever you can get somebody else to write that you should go for it. Assuming, of course, they have more technical/testing resources and combined experience than you.
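A minimal sketch of that setup (Job and someJob are placeholders; in .NET 4, BlockingCollection<T> lives in System.Collections.Concurrent):

    // Sketch: a couple of worker threads consuming jobs from a BlockingCollection.
    using System.Collections.Concurrent;
    using System.Threading;

    var jobs = new BlockingCollection<Job>();          // "Job" is a placeholder type

    for (int i = 0; i < 2; i++)                        // steps 2 and 3: create and start threads
    {
        new Thread(() =>
        {
            // GetConsumingEnumerable blocks while the collection is empty
            foreach (Job job in jobs.GetConsumingEnumerable())
            {
                // do the work
            }
        }) { IsBackground = true }.Start();
    }

    jobs.Add(someJob);       // step 4: add jobs as they become available
    jobs.CompleteAdding();   // at shutdown: lets the foreach loops run to completion and exit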

Will
Do you mean the CTP marked "This CTP is for testing purposes only."? Not sure that is a great recommendation... in 4.0, fine - but that is beta!
Marc Gravell
+1  A: 

If you don't really need the thread to exit (and just want to keep it from keeping your application alive), you can set Thread.IsBackground to true and it will end when all non-background threads end. Will and Marc both have good solutions for handling the queue.
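A minimal sketch, reusing DoWork from the question:

    // Sketch: the worker can't keep the process alive once it's a background thread.
    var worker = new System.Threading.Thread(DoWork);
    worker.IsBackground = true;     // process exits when only background threads remain
    worker.Start();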

Dolphin
+1  A: 

I've implemented a background-task queue without using any kind of while loop, or pulsing, or waiting, or, indeed, touching Thread objects at all. And it seems to work. (By which I mean it's been in production environments handling thousands of tasks a day for the last 18 months without any unexpected behavior.) It's a class with two significant properties, a List<Task> (used as a queue) and a BackgroundWorker. There are three significant methods, abbreviated here:

    private void BackgroundWorker_DoWork(object sender, DoWorkEventArgs e)
    {
        if (TaskQueue.Count > 0)
        {
            TaskQueue[0].Execute();
        }
    }

    private void BackgroundWorker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
    {
        Task t = TaskQueue[0];

        lock (TaskQueue)
        {
            TaskQueue.Remove(t);
        }
        if (TaskQueue.Count > 0 && !BackgroundWorker.IsBusy)
        {
            BackgroundWorker.RunWorkerAsync();
        }
    }

    public void Enqueue(Task t)
    {
        lock (TaskQueue)
        {
            TaskQueue.Add(t);
        }
        if (!BackgroundWorker.IsBusy)
        {
            BackgroundWorker.RunWorkerAsync();
        }
    }

It's not that there's no waiting and pulsing, but it all happens inside the BackgroundWorker. This just wakes up whenever a task is dropped in the queue, runs until the queue is empty, and then goes back to sleep.

I am far from an expert on threading. Is there a reason to mess around with System.Threading for a problem like this if using a BackgroundWorker will do?

Robert Rossney
In general, I agree that the BackgroundWorker class is very often a simpler choice for managing a background task. In this case, I have to challenge some of your claims. You are using threading objects: the `lock` keyword is syntactic sugar for `System.Threading.Monitor.Enter()` and `System.Threading.Monitor.Exit()`. Instead of a while loop, your RunWorkerCompleted event handler calls `BackgroundWorker.RunWorkerAsync()`. I think the logic of a wait/pulse while loop might be easier to follow in this case.
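That is, roughly:

    // Roughly what `lock (queue) { ... }` expands to:
    Monitor.Enter(queue);
    try
    {
        // ... body of the lock ...
    }
    finally
    {
        Monitor.Exit(queue);
    }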
Don Kirkby
Fair enough. It probably just looks simpler to me because I'm not experienced with wait/pulse.
Robert Rossney