I'm running some load tests from Visual Studio on a WCF service and I would like some help interpreting and analysing the results.

After enabling counters in web.config, the host has provided us with data for two counters: "Calls Duration" and "Calls Per Second".
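
For reference, the counters were switched on with something along these lines in web.config (a sketch; the performanceCounters level shown is an assumption and could equally be "All"):

    <configuration>
      <system.serviceModel>
        <!-- Publishes the WCF service-side counters, including
             "Calls Duration" and "Calls Per Second".
             "ServiceOnly" has lower overhead than "All". -->
        <diagnostics performanceCounters="ServiceOnly" />
      </system.serviceModel>
    </configuration>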

I've assumed that "Calls Duration" is the figure I need to analyse, as "Test Time" (inside Visual Studio) is implicitly dependent on the latency of the call over the internet. The data provided by the host is sampled once per second.

  • What is the relationship between the load (the number of users) and the value for "Calls Duration"? For example, with a constant load pattern of 10 users and a corresponding "Calls Duration" of 0.037, does this mean that 0.037 is the average time to process each call?
  • Is there an "accepted" or "standard" maximum value for "Calls Duration"?
  • Is "Calls Per Second" a measure of throughput? For example, if the value is 0.9862, what does this tell me?

The objective of the tests is to find the limit of the service, i.e. to establish that it will support XXX users.

All help is greatly appreciated.

Thanks,

Jose

A: 

If you have a very high-volume web site, you can hit the limit on concurrent connections.

If the call is very short, it holds the connection for less time and you can therefore process more requests.

If the calls are very long, you will run out of connections and requests will start to queue. If the queue gets long enough, users will start getting "server too busy" errors.
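
As a rough sketch using the figures from the question (and assuming "Calls Duration" is reported in seconds), Little's Law ties these numbers together: with \lambda the throughput ("Calls Per Second") and W the average call duration ("Calls Duration"), the average number of calls in flight is

    L = \lambda \cdot W \approx 0.9862 \times 0.037 \approx 0.036

So at that load the service is nearly idle; the limit will show up when the user count is high enough that "Calls Duration" starts to climb and the in-flight count approaches the host's concurrent-connection cap.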

The length of a call depends on what you are doing: if you are just returning a published page it should be short; if you are taking an order and writing to the database it will take longer.

Shiraz Bhaiji