I have an application that performs a write to a database each time a page is requested. This db write is not time-critical, so for performance reasons I have the page queue the object using MSMQ. Of course, I need something to read the queue and process the requests, so I wrote a class that looks similar to this:

using System;
using System.Threading;

public class QueueProcessor
{
    private Thread worker;
    private volatile bool runnable;

    public void Start()
    {
        // Create a background thread to run ProcessInternal()
        runnable = true;
        worker = new Thread(ProcessInternal) { IsBackground = true };
        worker.Start();
    }

    public void Stop()
    {
        // Signal the loop to exit, then give the previously created
        // thread a bounded amount of time to finish
        runnable = false;
        worker.Join(TimeSpan.FromSeconds(10));
    }

    public void ProcessInternal()
    {
        while (runnable)
        {
            // check queue for messages, process them one at a time,
            // then wait for more messages
        }
    }
}
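
To flesh out that comment: the loop body is essentially an MSMQ receive loop. Here's a sketch of it; the queue path and the PageWrite message type are placeholders for whatever the page actually enqueues, and I give Receive a short timeout (rather than waiting indefinitely) so that Stop() can take effect. The page side is then just a matter of calling Send on the same queue.

// Requires a reference to System.Messaging
public void ProcessInternal()
{
    using (var queue = new MessageQueue(@".\private$\pagewrites"))
    {
        // Tell the formatter what type the message bodies deserialize to
        queue.Formatter = new XmlMessageFormatter(new[] { typeof(PageWrite) });

        while (runnable)
        {
            try
            {
                // Block for up to five seconds waiting for the next message
                Message message = queue.Receive(TimeSpan.FromSeconds(5));
                var item = (PageWrite)message.Body;
                // ... write item to the database ...
            }
            catch (MessageQueueException ex)
            {
                // A timeout just means no message arrived; loop around
                // and re-check runnable. Anything else is a real error.
                if (ex.MessageQueueErrorCode != MessageQueueErrorCode.IOTimeout)
                    throw;
            }
        }
    }
}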

Now, since this web site is fairly small, I really don't want to add a Windows service to the deployment routine, so what I'm doing at this time is creating a new QueueProcessor in the Application_Start event and starting it there. My thought is that the thread will run for the life of the application, and if the application gets killed by IIS, it will simply start up again the next time someone requests a page.
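
Here's roughly what that wiring looks like in Global.asax.cs; the static field and the Application_End cleanup are my sketch of one reasonable way to hold on to the processor, not set in stone:

using System;

public class Global : System.Web.HttpApplication
{
    // One processor per application domain, kept alive by a static reference
    private static QueueProcessor processor;

    protected void Application_Start(object sender, EventArgs e)
    {
        processor = new QueueProcessor();
        processor.Start();
    }

    protected void Application_End(object sender, EventArgs e)
    {
        // Attempt a clean shutdown when IIS recycles or unloads the app;
        // note this won't run if the worker process is killed outright
        if (processor != null)
            processor.Stop();
    }
}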

To prevent long idle periods where queued messages go unprocessed because IIS has killed the application, I've set up a wget request against one of the site's pages that executes every few minutes, keeping the application alive and ensuring the background thread keeps running.
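
For reference, the keep-alive is just a scheduled task along these lines (the task name, five-minute interval, and URL are illustrative, not the real values):

schtasks /create /sc minute /mo 5 /tn "SiteKeepAlive" /tr "wget -q -O nul http://www.example.com/ping.aspx"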

I don't see this design very often, so my question is: are there any potential problems with it?

EDIT:

After doing some reading on the subject, I found that the major problem with this approach (and similar ones, such as the cache-removal callback trick) is the kind of work it gets asked to do with regard to scalability. For example, if you're doing a mass update or something similar, you obviously don't want the task running on all of your web servers - you'd get conflicting updates and potential data loss that way.

However, in my very particular case, the thread processes ONLY requests received by that server, which sit in that server's own message queue. Therefore, if I were to scale this app to, say, 5 servers, each server's thread would process that server's queued messages without any problems. Again, the order of the records in the database isn't so important that a few similar requests from different servers within a small interval would be a problem, so I still think this solution is reasonable for my case.

+1  A: 

I believe setting up the wget request is not much easier than deploying a Windows service. Other than that, I don't see any specific problems with it. If reliability and accurate timing aren't critical, this would work.

Mehrdad Afshari
The wget piece is extra, and not really required. As mentioned by another poster, as long as I catch/handle all exceptions on that thread, there's little chance of it dying prematurely; and since the application only gets killed when it's idle (meaning there's no traffic generating queued messages), I'm OK there too.
Chris
+1  A: 

I have used a similar scheme on some websites I've maintained, without difficulty. The main thing to watch out for is making sure you properly catch all exceptions in the thread. What you don't want is for the thread to die without you knowing, leaving the queue unprocessed until the application recycles and restarts it.
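
Concretely, that means nothing inside the loop should be able to throw past it. Something like this, where LogError stands in for whatever logging you already have:

while (runnable)
{
    try
    {
        // receive and process one message
    }
    catch (Exception ex)
    {
        // Never let an exception escape this thread: from .NET 2.0 on,
        // an unhandled exception on any thread tears down the whole
        // process, taking the web application with it
        LogError(ex);
    }
}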

Keltex
+1  A: 

While I wouldn't have thought it was a good idea, Jeff and Co. seem to use similar hackery to avoid a Windows service on this very website (and Joel does too for FogBugz). Since the scale and performance are pretty well proven here, it certainly seems reasonable and workable.

Mark Brackett
Actually, they don't use this trick anymore - see Jeff's last comment (in response to Joe's question, at the bottom of the page).
Waleed Eissa