Our rule is that if a page takes longer than 1 second to render, then we have a serious problem. Now, to be clear, I'm talking about when the client has DSL or better. Typically, our page times are in the 150ms to 200ms range.
Proper coding with the right amount of hardware should always result in a site that performs well.
You should note that there are a ton of things that might interfere. Network conditions are a big one. If the client's network is dog slow or not provisioned correctly (meaning they have 100 people sharing a T1), then there isn't much you can do. However, you do have control over your own code and, typically, your side of the network equation.
UPDATE for rockinthesixstring
Things we do to make our web apps scream.
We do NOT use any ORM products. Yes, they will make your development go faster; but there is not an ORM out there that is better at tweaking SQL than we are. Even LINQ to SQL requires you to know a lot about SQL Server in order to use it properly. From our perspective it's just not worth it.
We do NOT use embedded SQL anywhere. All coding is done through s'procs. Besides adding another layer of security, we can very easily tweak the SQL calls in flight without impacting the underlying code. For example, one guy here had a page that started off pretty fast. It was just paging and sorting through some records. However, when we tested it against 100k records (paging 20 at a time), it was taking close to 4 seconds to load each page. Some tweaks to the s'proc and it was back down to 250ms. Without redeploying the site.
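To make the idea concrete, here is a minimal sketch of the kind of paging s'proc involved. The table and column names (dbo.Orders, OrderDate, etc.) are hypothetical; the point is that only the requested page of rows leaves the server, and the query can be re-tuned later by altering the procedure alone.

```sql
-- Hypothetical paging s'proc. ROW_NUMBER() numbers the rows once,
-- then we return only the slice for the requested page.
CREATE PROCEDURE dbo.GetOrdersPage
    @PageNumber INT,
    @PageSize   INT
AS
BEGIN
    SET NOCOUNT ON;

    WITH Numbered AS (
        SELECT OrderId, CustomerName, OrderDate,
               ROW_NUMBER() OVER (ORDER BY OrderDate DESC) AS RowNum
        FROM dbo.Orders
    )
    SELECT OrderId, CustomerName, OrderDate
    FROM Numbered
    WHERE RowNum BETWEEN (@PageNumber - 1) * @PageSize + 1
                     AND @PageNumber * @PageSize;
END
```

Because the paging logic lives in the procedure, a DBA can add an index or rewrite the query in production without touching the deployed site.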
We do NOT use drag/drop page coding. All of our devs know and understand the things that vastly improve browser rendering performance, such as using table-layout: fixed. Basically, we solve the math problems ahead of time. They are fluent in CSS and know the difference between one DOCTYPE and the next.
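As an illustration of "solving the math ahead of time" (class names here are made up for the example):

```css
/* With table-layout: fixed, the browser sizes columns from the widths
   declared in the first row instead of measuring every cell's content,
   so a large table can render in a single pass. */
table.grid {
    table-layout: fixed;
    width: 100%;
}
table.grid th.date { width: 120px; }
table.grid th.name { width: 40%; }
/* any remaining columns split the leftover space */
```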
We do NOT use Session. Most of our apps are load balanced, and using session would require an extra two database calls per page (save/retrieve). I've yet to run into a situation where that is necessary.
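If you go this route, it's worth turning Session off outright rather than just not using it, so nothing accidentally pulls it back in. A sketch of the relevant web.config fragment:

```xml
<!-- Disable session state entirely: no save/retrieve round-trips
     on any server in the farm, and any code that tries to touch
     Session fails fast instead of silently costing two DB calls. -->
<system.web>
  <sessionState mode="Off" />
  <pages enableSessionState="false" />
</system.web>
```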
We DO use CSS and JavaScript compressors. Every byte counts at large scales (thousands of users or more).
We DO follow the KISS rules. For example, unless there is a very damn good reason, we do NOT use web services; instead we go the REST route for any Ajax. And I've never seen a good reason for WCF. Most developers I know who have used it end up gutting most of the "security" features just to get it to work reliably.
We take the time to tweak IIS for performance. Little things like making sure pages are properly compressed. Also, images, style sheets, and JavaScript are properly marked for client-side caching. YSlow (a Firefox plugin) is your friend here; anything less than an A rating in one of its categories means you need to evaluate it.
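For reference, both of those IIS tweaks can be expressed in web.config (this assumes the IIS 7+ schema; the one-year max-age is just an example value):

```xml
<system.webServer>
  <!-- Compress static files and dynamic page output. -->
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  <staticContent>
    <!-- Mark images, style sheets, and JavaScript as cacheable
         by the browser for up to a year. -->
    <clientCache cacheControlMode="UseMaxAge"
                 cacheControlMaxAge="365.00:00:00" />
  </staticContent>
</system.webServer>
```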
All third-party libraries are evaluated on several things: do they actually do what we want; how much larger do they make our pages; and is there a better way? One prime example is DevExpress. At least as of last year (not sure of any changes in the last 8 months), their client portion resulted in 1MB of JavaScript being downloaded. Again, not worth it.
We tend to use very few images. It's amazing what you can do to style a button using a little bit of CSS.
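For instance, something like this gets you a presentable button with zero image requests (the class name is illustrative):

```css
/* An image-free button: background, border, and a hover state
   are all the styling it needs. */
a.btn {
    display: inline-block;
    padding: 4px 12px;
    background: #3b6ea5;
    color: #fff;
    border: 1px solid #2a4f78;
    text-decoration: none;
}
a.btn:hover { background: #4a7fb8; }
```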
We also tend to minimize the use of JavaScript, using it only where we get the most bang for the buck. Yes, we do have some pages that do drag/drop, and others that use Ajax. However, most users just don't care, so those things don't need to be everywhere.