I'm profiling an ASP (Classic) web service. The web service makes database calls, reads and writes files, and processes XML. On a Windows Server 2003 box (2.7 GHz, 4 cores, 4 GB RAM), how many requests per second should I be able to handle before things start to fail?

I'm building a tool to test this, but I'm looking for a number of requests per second to shoot for.
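For reference, a minimal version of the kind of tool I mean could be as small as the sketch below (Python, standard library only; the URL, concurrency level, and request count are placeholder assumptions, not details of the real service):

    # Minimal load-test sketch: fire a fixed number of requests from a
    # thread pool and report throughput and failures. All values below
    # are placeholders -- adjust to match the real service.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost/service.asp"   # placeholder endpoint
    CONCURRENCY = 10                       # simulated simultaneous clients
    TOTAL_REQUESTS = 500

    def hit(_):
        """Issue one request; return True if it completed with HTTP 200."""
        try:
            with urllib.request.urlopen(URL, timeout=30) as resp:
                resp.read()
                return resp.status == 200
        except Exception:
            return False

    start = time.time()
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        results = list(pool.map(hit, range(TOTAL_REQUESTS)))
    elapsed = time.time() - start

    failures = results.count(False)
    print("requests/sec: %.1f" % (TOTAL_REQUESTS / elapsed))
    print("failures: %d of %d" % (failures, TOTAL_REQUESTS))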

I know this is fairly vague, but please give the best estimate you can. If you need more information, please ask.

+1  A: 

Some 95% of the performance of any data-driven app comes down to the database: 1) how you make your calls, 2) the indexes, and 3) the hardware under the database (the disk subsystem in particular).

I have seen a machine like the one you are describing handle around 40 requests per second (roughly 2,400/minute), but numbers closer to 10 per second (600/minute) are more common. I would expect even lower if you are running your database on the same machine, and lower still if that database is SQL Server Express or MS Access.

Also, at capacity your app will probably not fail outright; once IIS is saturated it will queue requests, and some of those requests may time out if they can't be serviced before the timeout expires.

Btw, instead of building a tool to test your app, you may want to look into using a test tool such as Microsoft WCAT. It is pretty smooth and easy to use.

tgolisch
The database is on its own server (a farm, actually) running SQL Server 2003. I'm not sure of the exact disk setup; it uses a SAN, but I don't know much beyond that. I don't think WCAT will work out for me: to use my web service there is an authentication process that would be hard to handle without code, and I'm also trying to make specific requests that rely on the results of other requests. I'm using a profile based on the service's activity.
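To illustrate the kind of scripted, dependent scenario I mean, here is a rough Python sketch; the login endpoint, form field names, and response parsing are hypothetical placeholders, not the real service:

    # Sketch of a scripted scenario: authenticate once, then issue a
    # request whose parameters depend on an earlier response. Endpoints,
    # field names, and parsing below are hypothetical placeholders.
    import urllib.parse
    import urllib.request
    from http.cookiejar import CookieJar

    BASE = "http://localhost"              # placeholder host
    opener = urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor(CookieJar()))  # keeps the auth cookie

    # Step 1: authenticate (hypothetical endpoint and field names).
    login_form = urllib.parse.urlencode(
        {"user": "testuser", "password": "secret"}).encode()
    opener.open(BASE + "/login.asp", data=login_form).read()

    # Step 2: first service call; pull a value out of its response.
    listing = opener.open(BASE + "/service.asp?op=list").read().decode()
    item_id = listing.splitlines()[0]      # placeholder parsing

    # Step 3: follow-up call that depends on the first response.
    detail = opener.open(BASE + "/service.asp?op=detail&id="
                         + urllib.parse.quote(item_id)).read()
    print(len(detail), "bytes returned for item", item_id)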
aepheus
A: 

How fast should it be? Fast enough.

How fast is fast enough? That's a question only you and your users can answer. If your service is horrifically inefficient but keeps up with demand, it's fast enough. If your service is assembly-optimized and lightning fast but overwhelmed with requests, it's not fast enough.

If the server is handling its actual workload, then don't worry about how fast it "should" be. When the server is having trouble, or when you anticipate that it soon will, that's the time to look at improving the code or upgrading the hardware. Remember Knuth's law: premature optimization is the root of all evil. Any work you do now to make it faster may never pay off, and you may be forced to compromise flexibility or maintainability. Remember, too, the older adage: if it ain't broke, don't fix it.

Thom Smith
Thom is right on the money. Speed = $$$ and time (and of course time = $), so $ is usually your biggest constraint on speed. And if you make it way faster than it needs to be, you may have overspent.
tgolisch
If you are asked whether your service will be able to handle a certain load, and keeping your word is important, you'd better be sure you can handle that load. That means finding out what load you can actually handle. I wasn't asking about optimization at all.
aepheus
A: 

Yes, I would also say 10 per second is a good benchmark. For a high-performance app you would want more than this, but if you have no specific goal, you should generally be able to get at least 10 requests per second for a typical web page that runs a bunch of database queries.

mike nelson