views: 2913
answers: 11

I have no frame of reference in terms of what's considered "fast"; I'd always wondered this but have never found a straight answer...

+4  A: 

That's not really an answerable question - Amazon's going to have orders-of-magnitude differences from a niche fluid dynamics site.

warren
+1  A: 

Probably around 0.001, give or take.

Igal Serban
+9  A: 

42, of course.

keithwarren7
+2  A: 

almost none.

Eli
+1  A: 

There is no straight answer. Fast is a relative term, and the answer depends hugely on your context and application.

Dave L.
+2  A: 

That is a very open apples-to-oranges type of question.

You are asking:

1. what the average request load is for a production application

2. what is considered fast

These don't necessarily relate.

Your average # of requests per second is determined by the following (a rough worked example follows the list):

a. the number of simultaneous users

b. the average number of page requests they make per second

c. the number of additional requests (e.g. Ajax calls, etc.)
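As a rough illustration of how those factors multiply out, here is a back-of-envelope sketch; every number below is a made-up assumption, not a measurement:

    # Back-of-envelope request-rate estimate; all inputs are hypothetical.
    concurrent_users = 500          # (a) simultaneous users
    pages_per_user_per_min = 2      # (b) page requests per user per minute
    extra_requests_per_page = 10    # (c) Ajax calls, images, CSS, JS per page

    page_rps = concurrent_users * pages_per_user_per_min / 60.0
    total_rps = page_rps * (1 + extra_requests_per_page)
    print(f"page requests/sec:  {page_rps:.1f}")    # ~16.7
    print(f"total requests/sec: {total_rps:.1f}")   # ~183.3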

As to what is considered fast: do you mean how many requests a site can handle? Or whether a piece of hardware is considered fast if it can process some xyz # of requests per second?

DaveJustDave
+1  A: 

lkessler

What is your hardware?

dynback.com

Not sure. At that time I was at IXWebhosting and they were using a Windows 32-bit operating system for their shared servers. I suspect their MySQL database server was a separate dedicated machine, but I don't know for sure.

lkessler
+7  A: 

OpenStreetMap seems to get 10-20 requests per second.

Wikipedia seems to get 30000 to 70000 requests per second spread over 300 servers (roughly 100 to 230 requests per second per machine, most of which are served from caches).

Geograph is getting 7000 images per week (about 1 upload per 86 seconds).
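For reference, here is the arithmetic behind the per-machine and per-upload figures (a quick sketch; the site-wide totals are the ones quoted in this answer):

    # Convert site-wide figures into per-server and per-upload rates.
    wiki_low, wiki_high, servers = 30000, 70000, 300
    print(wiki_low / servers, wiki_high / servers)   # 100.0 to ~233.3 req/s per machine

    uploads_per_week = 7000
    seconds_per_week = 7 * 24 * 3600                 # 604800
    print(seconds_per_week / uploads_per_week)       # ~86.4 s between uploads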

OJW
+1  A: 

Note that hit-rate graphs follow sinusoidal patterns, with 'peak hours' maybe 2x or 3x the rate you get while users are sleeping. (This can be useful when you're scheduling the daily batch-processing jobs on servers.)

You can see the effect even on 'international' (multilingual, localised) sites like Wikipedia.
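As a toy model of that daily cycle (the rates and peak hour below are invented for illustration), you can treat the load as a sinusoid around a mean and pick the quietest hour for batch jobs:

    import math

    mean_rps, amplitude = 100.0, 50.0   # hypothetical: peak ~3x the trough
    # Model hourly load as a sinusoid peaking at 15:00 local time.
    def load(hour):
        return mean_rps + amplitude * math.sin(2 * math.pi * (hour - 9) / 24)

    quietest = min(range(24), key=load)
    print(f"schedule batch jobs around {quietest:02d}:00, "
          f"load ~{load(quietest):.0f} req/s")       # ~03:00, ~50 req/s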

OJW
+1  A: 

Less than 2 seconds per user, usually - i.e. users who see slower responses than this think the system is slow.

Now you tell me how many users you have connected.
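Those two figures combine into a rough throughput estimate; the sketch below uses a standard think-time model, and every input is a hypothetical value, not a measurement:

    # Estimate request rate from user count, think time, and response time.
    connected_users = 1000      # hypothetical number of connected users
    think_time_s = 10.0         # assumed average pause between a user's requests
    response_time_s = 2.0       # the "feels slow" threshold from above

    rps = connected_users / (think_time_s + response_time_s)
    print(f"~{rps:.0f} requests/sec")   # ~83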

gbjbaanb
A: 

You can search "slashdot effect analysis" for graphs of what you would see if some aspect of the site suddenly became popular in the news, e.g. this graph on wiki.

Web-applications that survive tend to be the ones which can generate static pages instead of putting every request through a processing language.
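A minimal sketch of that idea (render_page here is a hypothetical stand-in for the expensive dynamic rendering): generate the page once, write it out as a static file, and serve later hits straight from disk:

    import os

    CACHE_DIR = "static_cache"

    def render_page(slug):
        # Hypothetical expensive dynamic rendering (DB queries, templating...).
        return f"<html><body>Content for {slug}</body></html>"

    def get_page(slug):
        path = os.path.join(CACHE_DIR, f"{slug}.html")
        if not os.path.exists(path):          # render once, then reuse
            os.makedirs(CACHE_DIR, exist_ok=True)
            with open(path, "w") as f:
                f.write(render_page(slug))
        with open(path) as f:                 # subsequent hits are static reads
            return f.read()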

There was an excellent video (I think it might have been on ted.com, possibly by the Flickr web team? Does someone know the link?) with ideas on how to scale websites beyond a single server, e.g. how to allocate connections amongst the mix of read-only and read-write servers to get the best effect for various types of users.

OJW