Our server-side web app will handle jobs that are requested via REST API calls.

Ideally, if the server dies during a job (i.e., the plug is pulled), the job should resume or restart when the server starts back up.

A very convenient way to process these jobs is in a separate thread, using the concurrency utility classes introduced in Java 5. The only issue is recovering from failure: you need to have written down the job details somewhere, and then write startup code that reads those details back and resumes any unfinished jobs. This seems like a pain to do.
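For illustration, a minimal sketch of what that "write down the job details" bookkeeping might look like, using a file-per-job journal directory. The directory name, jobId scheme, and performJob handler are all placeholders, not an actual design:

```java
import java.io.*;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class JobService {
    private final ExecutorService executor = Executors.newFixedThreadPool(4);
    private final File journal = new File("job-journal"); // hypothetical journal directory

    public JobService() {
        journal.mkdirs();
    }

    // Write the job details down before running; remove the record only on success.
    public void submit(final String jobId, final String details) throws IOException {
        final File record = new File(journal, jobId);
        Writer w = new FileWriter(record);
        try {
            w.write(details);
        } finally {
            w.close();
        }
        executor.submit(new Runnable() {
            public void run() {
                performJob(details); // hypothetical job handler
                record.delete();     // forget the job only once it has completed
            }
        });
    }

    // At startup: any record still on disk is a job that never finished.
    public void recoverPendingJobs() throws IOException {
        File[] records = journal.listFiles();
        if (records == null) {
            return;
        }
        for (File record : records) {
            BufferedReader r = new BufferedReader(new FileReader(record));
            String details = r.readLine();
            r.close();
            submit(record.getName(), details);
        }
    }

    private void performJob(String details) {
        // ... the actual work ...
    }
}
```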

An alternative approach is to use a queue: the user makes a request, we write a message to the queue, a worker reads from the queue and performs the job, and the message is removed only once the job is complete. This makes it easy to resume jobs after a crash, because on startup the server just reads the still-unacknowledged messages from the queue and resumes processing.
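A rough sketch of the consumer side, assuming a JMS broker such as ActiveMQ (the broker URL, queue name, and performJob handler are placeholders). With CLIENT_ACKNOWLEDGE, a message that was never acknowledged is redelivered when the server restarts:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class JobConsumer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        // CLIENT_ACKNOWLEDGE: the broker keeps the message until we explicitly ack it,
        // so a crash before acknowledge() means the job is redelivered on restart.
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("jobs"));

        while (true) {
            TextMessage message = (TextMessage) consumer.receive();
            performJob(message.getText()); // hypothetical job handler
            message.acknowledge();         // remove from the queue only after success
        }
    }

    private static void performJob(String details) {
        // ... the actual work ...
    }
}
```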

Are there any better approaches to this scenario?

+1  A: 

I'd use Quartz (which has fail-over capabilities) to manage your jobs.
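Something like this, assuming the Quartz 2.x builder API (the job and group names are illustrative). Note that surviving a pulled plug also requires configuring a JDBC-backed job store rather than the default in-memory one:

```java
import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

public class QuartzJobRunner {
    public static void main(String[] args) throws SchedulerException {
        // Fail-over needs a JDBC-backed job store (org.quartz.impl.jdbcjobstore.JobStoreTX)
        // configured in quartz.properties; the in-memory RAMJobStore loses jobs on a crash.
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();

        JobDetail job = JobBuilder.newJob(RestJob.class)
                .withIdentity("processRequest", "rest-jobs") // names are illustrative
                .requestRecovery(true) // re-run the job if the node dies mid-execution
                .build();

        scheduler.scheduleJob(job, TriggerBuilder.newTrigger().startNow().build());
    }

    public static class RestJob implements Job {
        public void execute(JobExecutionContext context) throws JobExecutionException {
            // ... the actual work ...
        }
    }
}
```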

PS: I'd prefer to be wrong but, having read your last questions, I have the feeling that you are building something overcomplicated or conceptually wrong. There are just too many architecture smells IMHO.

Pascal Thivent
+1  A: 

Given that you've specified REST, you obviously have clients that make requests and require results. Why not put the onus of determining whether a job has completed on the clients themselves?

e.g. a client makes a request. If it gets a result back, all well and good. If, however, the client detects that the server has gone down (via a premature disconnection on the HTTP connection), it can back off and retry later. If you wish, you can implement various retry strategies (e.g. retry against a different host, give up after 'n' retries, etc.).
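A rough sketch of such a client using plain HttpURLConnection; the endpoint, timeouts, and retry limit are all illustrative:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RetryingClient {
    // Retry the request with exponential backoff; give up after maxRetries attempts.
    public static String requestJob(String endpoint, int maxRetries) throws Exception {
        long delay = 1000; // initial backoff of one second
        for (int attempt = 1; ; attempt++) {
            try {
                HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
                conn.setConnectTimeout(5000);
                conn.setReadTimeout(30000);
                if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
                    return readBody(conn);
                }
                throw new IOException("HTTP " + conn.getResponseCode());
            } catch (IOException e) {
                // Premature disconnection or server error: back off, then retry
                if (attempt >= maxRetries) throw e;
                Thread.sleep(delay);
                delay *= 2; // could also switch to a different host here
            }
        }
    }

    private static String readBody(HttpURLConnection conn) throws IOException {
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        StringBuilder body = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            body.append(line);
        }
        in.close();
        return body.toString();
    }
}
```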

This way the clients maintain knowledge of what they require (as they must do anyway, presumably) and your servers are stateless, which is a lot easier to manage.

Brian Agnew
Agreed, this is a lot easier to manage. Going to try this.
Marcus