views:

440

answers:

8

I've got a C# service that currently runs single-instance on a PC. I'd like to split this component so that it runs on multiple PCs. Each PC should be assigned a certain part of the work. If one PC fails, its work should be moved to a backup machine.

Data synchronization can be done by the DB, so that should not be much of an issue. My current idea is to use some kind of load balancer that splits and sends the incoming requests to the array of PCs and makes sure the work is actually processed.

How would I implement such a functionality? I'm not sure if I'm asking the right question. If my understanding of how this goal should be achieved is wrong, please give me a hint.

Edit:

  1. I wonder if the idea given above (a load balancer splits work packages to PCs and checks for results) is feasible at all. If there is some kind of already-implemented solution to this seemingly common problem, I'd love to use that solution.

  2. Availability is a critical requirement.

A: 

How about using a server and multi-threading your processing? Or even multi-threading on a single PC, since you can get many cores on a standard desktop now.

This obviously doesn't deal with the machine going down, but could give you much more performance for less investment.
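As a minimal sketch of that idea, the existing processing could be spread across cores with `Parallel.ForEach` (the `Process` method here is a hypothetical stand-in for the real per-item work):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class Program
{
    // Hypothetical stand-in for the real per-item processing.
    static int Process(int item) => item * item;

    static void Main()
    {
        var items = Enumerable.Range(1, 100).ToArray();
        var results = new int[items.Length];

        // Spread the work across all available cores on this one machine.
        Parallel.ForEach(items, (item, state, index) =>
        {
            results[(int)index] = Process(item);
        });

        Console.WriteLine(results.Sum()); // 338350 (sum of squares 1..100)
    }
}
```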

ck
Yes, it's a good idea for a single PC, but availability would still be required.
mafutrct
A: 

You can look into Windows clustering. You will also have to handle a set of issues that depend on the behaviour of the service (if you add more details about the service itself, I can give a more specific answer).

Ahmed Said
+1  A: 

From what you said, each PC will require a full copy of your service -

Each PC should be assigned a certain part of the work. If one PC fails, its work should be moved to a backup machine

Otherwise you won't be able to move its work to another PC.

I would be tempted to have a central server which farms out work to individual PCs. This means that you would need some form of communication between each machine and the central server, and a record kept on the central server of what work has been assigned where.

You'll also need each machine to measure its CPU load and reject work if it is too busy.

A multi-threaded approach to the service would make good use of the multiple processor cores that are ubiquitous nowadays.
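As a rough sketch of the dispatch-and-record idea (all names hypothetical, direct method calls standing in for the real machine-to-machine communication, and an active-job count as a crude stand-in for CPU load), the coordinator could offer each job to the least-loaded worker and note the assignment:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: the coordinator records where each job went,
// and workers reject new work above a load threshold.
class Worker
{
    public string Name;
    public int ActiveJobs;
    public int MaxJobs = 4;

    public bool TryAccept() // reject work if this machine is too busy
    {
        if (ActiveJobs >= MaxJobs) return false;
        ActiveJobs++;
        return true;
    }
}

class Coordinator
{
    readonly List<Worker> workers;
    public readonly Dictionary<int, string> Assignments = new Dictionary<int, string>();

    public Coordinator(List<Worker> workers) { this.workers = workers; }

    public bool Dispatch(int jobId)
    {
        // Offer the job to the least-loaded worker first.
        foreach (var w in workers.OrderBy(x => x.ActiveJobs))
        {
            if (w.TryAccept())
            {
                Assignments[jobId] = w.Name; // record which PC got the work
                return true;
            }
        }
        return false; // every worker is saturated
    }
}
```

The `Assignments` record is what would let the coordinator re-dispatch a failed PC's jobs to a backup machine.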

ChrisBD
A: 

This depends on how you want to split your workload. This is usually done in one of two ways:

  • Splitting the same workload across multiple services

    This means the same service is installed on different servers and does the same job. Assume your service reads huge amounts of data from the DB servers, processes it to produce large client-specific data files, and finally sends those files to the clients. In this approach all the service instances installed on different servers do the same work, but they split it between them to increase performance.

  • Splitting parts of the workload between multiple services

    In this approach each service is assigned an individual job and works toward a different goal. In the example above, one service is responsible for reading data from the DB and generating the large data files, and another service is configured only to read the data files and send them to the clients.

I have implemented the second approach in one of my projects, because it let me isolate and debug errors in case of any failures.

Cheers

Ramesh Vel

Ramesh Vel
A: 

The usual approach for a load balancer is to split service requests evenly between all service instances.

For each work item (request) you can store the relevant information in the database. Each service should then also have at least one background thread that checks the database for abandoned work items.
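A minimal sketch of that abandoned-item check, with an in-memory dictionary standing in for the database table (the owner/heartbeat columns are assumptions about the schema; in a real system this would be a single atomic UPDATE against the database):

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;

// Hypothetical sketch: each work item carries an owner and a heartbeat
// timestamp, and a background pass reclaims items whose owner went silent.
class WorkItem
{
    public int Id;
    public string Owner;            // which service instance holds it
    public DateTime LastHeartbeat;  // updated while the owner is alive
}

static class Reclaimer
{
    // Returns the items considered abandoned so another instance can take them.
    public static WorkItem[] FindAbandoned(
        ConcurrentDictionary<int, WorkItem> store, TimeSpan timeout, DateTime now)
    {
        return store.Values
            .Where(w => w.Owner != null && now - w.LastHeartbeat > timeout)
            .ToArray();
    }
}
```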

Vitaliy Liptchinsky
A: 

I would suggest that you publish your service through WCF (Windows Communication Foundation).

Then implement a "central" client application which can keep track of available providers of your service and dish out work. The central app will act as scheduler and load balancer of the tasks to be performed.

Check out Juval Löwy's book on WCF ("Programming WCF Services") for a good introduction to this topic.
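A sketch of what such a WCF contract might look like (the interface and operation names here are made up; the real operations depend on what the service actually does):

```csharp
using System.ServiceModel;

// Hypothetical contract: the central app calls SubmitWork to dish out tasks
// and polls IsAlive to keep track of which providers are available.
[ServiceContract]
public interface IWorkService
{
    [OperationContract]
    void SubmitWork(int workItemId);

    [OperationContract]
    bool IsAlive();
}
```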

d91-jal
A: 

You can have a look at NGrid : http://ngrid.sourceforge.net/

or Alchemi : http://www.gridbus.org/~alchemi/index.html

Both are grid computing frameworks with load balancers that will get you started in no time.

Cheers, Florian

Florian Doyon
+4  A: 

I'd recommend looking at a Pull model of load-sharing, rather than a Push model. When pushing work, the coordinating server(s)/load-balancer must be aware of all the servers that are currently running in your system so that it knows where to forward requests; this must either be set in config or dynamically set (such as in the Publisher-Subscriber model), then constantly checked to detect if any servers have gone offline. Whilst it's entirely feasible, it can complicate the scaling-out of your application.

With a Pull architecture, you have a central work queue (hosted in MSMQ, SQL Server Service Broker or similar) and each processing service pulls work off that queue. Expose a WCF service to accept external requests and place work onto the queue, safe in the knowledge that some server will do the work, even though you don't know exactly which one. This has the added benefit that each server monitors its own workload and picks up work as and when it is ready, and you can easily add or remove servers to/from this model without any change in config.

This architecture is supported by NServiceBus and the communication between Windows Azure Web & Worker roles.
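The pull model can be sketched in-process with a `ConcurrentQueue` standing in for MSMQ/Service Broker; each worker takes items only when it is ready, and adding a worker requires no config change anywhere else:

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class PullWorkers
{
    static void Main()
    {
        // Pending work; in production this queue lives in MSMQ or Service Broker.
        var queue = new ConcurrentQueue<int>(Enumerable.Range(1, 20));
        var processed = 0;

        // Three workers pull items when ready, rather than being pushed work.
        var workers = Enumerable.Range(0, 3).Select(_ => Task.Run(() =>
        {
            while (queue.TryDequeue(out var item))
            {
                Interlocked.Increment(ref processed); // stand-in for real processing
            }
        })).ToArray();

        Task.WaitAll(workers);
        Console.WriteLine(processed); // 20 - every item handled by some worker
    }
}
```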

FacticiusVir