Stress testing is something that gets very little love in most death mar… er, I mean web projects. It's usually done at the last minute (or not at all), and next to no time gets allocated to it.
In the past I've picked a tool, installed it on my machine first, and hit the home page while gradually upping the concurrency settings. Then I'd write a simple login script and a simple site walkthrough (on an ecommerce site: adding a few items to a cart and checking out). Then I'd rope in as many developers as I could, install the stress-test tool on their machines, and launch a massive attack. If nothing broke, I'd give up and wait for the actual traffic to kill the site for real.
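The first stage above (hammer one URL while stepping up concurrency) is simple enough to sketch without any dedicated tool. This is a minimal illustration, not any particular product's approach; the request function you pass in is whatever fetches your home page:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def hammer(request_fn, concurrency, requests_per_worker=10):
    """Fire requests_per_worker requests from each of `concurrency`
    workers; return (error_count, mean_latency_seconds)."""
    latencies, errors = [], []  # list appends are thread-safe in CPython

    def worker():
        for _ in range(requests_per_worker):
            start = time.perf_counter()
            try:
                request_fn()
                latencies.append(time.perf_counter() - start)
            except Exception as exc:
                errors.append(exc)

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for _ in range(concurrency):
            pool.submit(worker)
    # the `with` block waits for all workers to finish
    return len(errors), (sum(latencies) / len(latencies) if latencies else 0.0)

def ramp(request_fn, levels=(1, 5, 10, 25, 50)):
    """Step concurrency up and report latency at each level."""
    for level in levels:
        errs, mean_latency = hammer(request_fn, level)
        print(f"concurrency={level:3d}  errors={errs}  "
              f"mean_latency={mean_latency * 1000:.1f} ms")
```

In real use `request_fn` would be something like `lambda: urllib.request.urlopen("http://staging.example.com/").read()` (a hypothetical URL); the point where errors appear or latency climbs steeply is where the interesting bugs tend to live.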
Just hitting the homepage hard would almost always locate a major problem. More scalability problems would surface at the second stage, and even more after the launch.
The tools I used were Microsoft Homer (aka Microsoft Web Application Stress Tool) and Pylot.
The reports these tools generated never made much sense to me, and I spent many hours trying to figure out what kind of concurrent load the site would be able to support. It was always worth it, though, because the stupidest bugs and bottlenecks would always come up (web server misconfigurations, for instance).
What have you done, what tools have you used, and what success have you had with your approach? The part that is most interesting to me is coming up with some kind of meaningful formula for calculating the number of concurrent users an app can support from the numbers the stress-test tool reports.
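For what it's worth, one common starting point for that formula is Little's Law: concurrent users N = throughput X times the time each user spends per request, i.e. response time R plus think time Z. A stress test gives you X and R; you have to estimate Z yourself, which is the shaky part. A quick sketch with made-up numbers:

```python
def supported_concurrent_users(throughput_rps, avg_response_s, think_time_s):
    """Little's Law: N = X * (R + Z).
    throughput_rps: peak requests/sec the test sustained (X)
    avg_response_s: mean response time at that throughput (R)
    think_time_s:   estimated pause between a real user's requests (Z)
    """
    return throughput_rps * (avg_response_s + think_time_s)

# Illustrative only: a test sustaining 200 req/s at 0.5 s response time,
# with users pausing ~10 s between clicks:
print(supported_concurrent_users(200, 0.5, 10))  # 2100.0
```

The think-time estimate dominates the result, so the answer is more of a range than a number; whether that matches others' experience is exactly what I'm asking.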