views: 1264
answers: 4

Is there any article/book that defines upper-bound design limits for WS timeouts? Do you time out at the server, or do you recommend client-specific timeouts too?

Is there a common best practice like "never design a WS call that can take longer than 60 seconds; use an asynchronous token pattern instead"?

I am also interested in knowing what you do, and in your opinion.

A: 

Take the amount of data you are transferring via your web service and see how long the process takes.

Add 60 seconds to that number and test.

If you can still get it to time out on a good connection, add 30 more seconds.

Rinse and repeat.
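The rinse-and-repeat loop above can be sketched in a few lines of Python; `times_out` stands in for the hypothetical probe you would actually run against the service on a good connection:

```python
def choose_timeout(measured_s, times_out):
    """Start at the observed transfer time plus a 60 s margin, then
    widen by 30 s for as long as the call still times out.
    `times_out` is a placeholder probe: timeout_s -> bool."""
    timeout_s = measured_s + 60.0
    while times_out(timeout_s):
        timeout_s += 30.0
    return timeout_s
```

For example, `choose_timeout(5.0, probe)` starts testing at 65 seconds and keeps widening until the probe stops reporting timeouts.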

mugafuga
+2  A: 

This question, and the ones linked to in answers to it, might help: http://stackoverflow.com/questions/184814/is-there-some-industry-standard-for-unacceptable-webapp-response-time

Somewhat tangential to your question (no time intervals, sorry), but I suspect useful for your work: A common approach to timeouts is to balance them with "back-off" timers.
It goes something like this: The first time a service times out, don't worry about it. The second time in a row a service times out, don't bother calling it for N seconds. The third time in a row a service times out, don't call it for N+1 seconds; then N+2, N+3, N+5, N+8, and so on, until you reach some maximum limit M.

The timeout counter is reset when you get a valid response.

I am using a Fibonacci sequence to increase the "back-off" time period here, but of course you can use any other suitable function -- the point being, if the service you are calling keeps timing out, your "belief" in it gets smaller and smaller, so you spend fewer resources trying to reach it and knock on the door more rarely. This may help the service on the other end, which could simply be overloaded (re-requesting just makes matters worse), and it will improve your own response time, since you won't be waiting around for a service that is unlikely to answer.
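A sketch of the schedule above in Python; here `base` plays the role of N and `cap` the maximum limit M:

```python
import itertools

def backoff_delays(base, cap):
    """Yield back-off delays: N, N+1, N+2, N+3, N+5, N+8, ...
    (Fibonacci increments), never exceeding the cap M."""
    yield min(base, cap)
    a, b = 1, 2
    while True:
        yield min(base + a, cap)
        a, b = b, a + b

# First six delays with N = 10 s and M = 60 s:
print(list(itertools.islice(backoff_delays(10, 60), 6)))
# [10, 11, 12, 13, 15, 18]
```

Resetting the counter on a valid response simply means throwing the generator away and creating a fresh one the next time the service fails.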

SquareCog
A: 

We generally take the expected response time for that web-service (as documented in our interface specification) and add 30 seconds to it.

Then we monitor the logs during UAT to see if there are any patterns (e.g. specific DB calls taking a long time) and alter as appropriate.

nzpcmad
A: 

This stuff about 30+ second timeouts is ridiculous advice, IMO. Your timeouts should be about 3 seconds. Yes. Three. The number after two and before four. If you're building an application based on a SOA, then DEFINITELY 3 seconds, or less.

Think about it... the user of your app expects a TOTAL response time of about five seconds or less (preferably about three). If EACH INDIVIDUAL SERVICE CALL is taking more than a couple of *milliseconds* to return, you're hosed. Waiting 30+ seconds for ONE service to return is an eternity; the user will never wait around that long.

Plus, if you know the calls are supposed to return in the sub-one-second range, what's the point of waiting another 30 or more seconds to signal an error condition? It's not going to magically work now when it didn't 28 seconds ago. If your application has wild swings in average response time, from sub-one second to over 30 seconds, something was designed incorrectly. You might think about some caching or something.
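A tight client-side timeout is a one-liner in most HTTP stacks. A sketch with Python's standard library (the URL and function name are placeholders of ours, not part of any real API):

```python
import socket
import urllib.request
from urllib.error import URLError

def call_service(url, timeout_s=3.0):
    """Fail fast: give the service three seconds, then treat it as
    down instead of blowing the caller's whole response budget."""
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            return resp.read()
    except (URLError, TimeoutError, socket.timeout):
        return None  # fall back to a cached value or an error page
```

The point is that the failure path returns quickly, so the caller can degrade gracefully instead of hanging for half a minute.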

Robert C. Barth
It can be a service between application servers, not necessarily anything end-user related.
MMind