views: 327

answers: 4

(EDIT: rewritten question to make it clearer, meaning hasn't changed)

You can create an application and measure its usage. But what I would like to know is: if you have to decide up-front about an ASP.NET application, how many simultaneous users (sessions) typically fit on one machine?

Let's assume the following default, simplified setup: InProc sessions, ASP.NET 3.5, NHibernate with L2 caching, a shopping site (basket properties kept in session).

While I could ascertain that the session won't rise above, say, 20 kB, my experience shows that there's a huge overhead in general, even in well-laid-out applications. I'm looking for the kind of simple calculation you can do on a sticky note.

For the bounty: what CPU / memory would you advise your management for each X simultaneous users, ignoring bandwidth requirements? I.e., an answer could be: on a 2 GHz Xeon with 1 GB of memory running Win2k8, you can safely serve 500 simultaneous sessions, but above that it requires careful planning or more hardware.
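
To make concrete what I mean by a sticky-note calculation, here is a minimal sketch with made-up inputs (the per-session overhead, cycles per page and click rate are all assumptions for illustration, not measurements):

```csharp
// Back-of-envelope capacity sketch. Every input below is an assumption for
// illustration only; plug in your own numbers.
using System;

class StickyNoteEstimate
{
    static void Main()
    {
        // Memory side: how many InProc sessions fit in the worker process?
        double usableMemoryMb = 1024 * 0.6;   // assume ~60% of 1 GB is left after OS/IIS/runtime
        double perSessionKb   = 20 + 80;      // 20 kB basket + assumed ~80 kB general overhead
        double sessionsByMemory = usableMemoryMb * 1024 / perSessionKb;

        // CPU side: how many pages/s at a safe utilization, and how many users is that?
        double cpuMhzTotal      = 2000;       // one 2 GHz core
        double mcyclesPerPage   = 40;         // assumed CPU cost of an average page
        double targetCpuPercent = 70;         // leave headroom for spikes
        double pagesPerUserSec  = 1.0 / 30;   // a user requests a page roughly every 30 s

        double pagesPerSecond = cpuMhzTotal * (targetCpuPercent / 100) / mcyclesPerPage;
        double usersByCpu     = pagesPerSecond / pagesPerUserSec;

        Console.WriteLine("Memory fits ~{0:F0} sessions", sessionsByMemory);
        Console.WriteLine("CPU sustains ~{0:F0} pages/s, i.e. ~{1:F0} active users", pagesPerSecond, usersByCpu);
        // The lower of the two is the sticky-note answer; bandwidth is ignored, as in the question.
    }
}
```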

+6  A: 

Do you know the "quality" of the code?

Bad code can cost a lot in hardware, while good code costs next to nothing.

Update based on the comment

A few years ago I had to maintain a badly written app. It was using 500 MB of RAM (sometimes 1.5 GB) and took minutes to show anything. I had to rewrite the whole thing, and after that it used only the necessary amount of memory (close to 10-15x less) and was quick at showing things; I'm talking milliseconds here.

The number of loops, and the amount of data badly cached in memory, was... incredibly sad to look at. Just to give you an idea: there were 3 copies of a whole freaking database in memory (so 4 counting the real DB), and the code had to update all the copies one after the other. Everything else in the app was based on those in-memory copies.

Anyway, in the end I deleted 25 thousand lines of code.

Quality of the code IS important.

Second update

Found this, might be good.

Third update

In an application that I'm currently developing: ASP.NET 3.5 using LINQ to SQL, talking (of course) to SQL Server 2005. Many reads to the DB and not so many writes.

On my own dev machine, which is an old P4 Prescott with 3 GB of RAM, it takes an average of 20 ms to 100 ms to load a whole page, depending on the page :-)

Session memory usage is very low, way under 20 kB for sure.

If I go from here, my bad math would be:

If I have 100 simultaneous users, it would take about 2 seconds to load a page, and it would use at least 2 MB of RAM for the duration of the sessions.

The bad math needed? Figure out what you need for 1 user and, from that, multiply 1 user by WhatYouThinkYouShouldBeAbleToHandle.

I don't think there is any other way to find out, because again, the code behind the page does matter.
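
To spell out that bad math as a sketch, using the rough numbers from my dev box above (it deliberately ignores concurrency, multiple cores and I/O waits, which is exactly what makes it bad):

```csharp
// The naive "multiply one user by N" estimate, spelled out with the dev-box numbers above.
using System;

class BadMath
{
    static void Main()
    {
        double pageTimeMs = 20;     // best-case page time measured on the dev box (20-100 ms)
        double sessionKb  = 20;     // assumed upper bound on session size
        int    users      = 100;

        // Naive serial assumption: every request queues behind the previous one.
        double totalLoadMs     = pageTimeMs * users;           // ~2,000 ms for the last user in line
        double sessionMemoryMb = sessionKb * users / 1024.0;   // ~2 MB held for the session lifetime

        Console.WriteLine("Naive worst-case page load: {0:F0} ms", totalLoadMs);
        Console.WriteLine("Session memory for {0} users: {1:F1} MB", users, sessionMemoryMb);
    }
}
```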

Fredou
Good points. But the quality of the code alone is not a real measure (if measurable at all) of CPU or memory usage.
Abel
@Abel, I updated my answer
Fredou
Thanks for the edit, +1 for that. I never meant that quality is **not** important; it's terribly important. However, bad code is often not well measurable, and good code can contain bad bugs (a recent silly example: a small threading issue causing unpredictable, huge memory increases. Very good code, just one bug, huge consequences, but I'm drifting...). Really, I was just after some rule of thumb here. 25K sessions on a 2 GHz/1 GB machine? I've never seen it (1.5K is OK in good programs, 0.5K in worse ones), but where is the general threshold?
Abel
@Abel, I found something and updated my answer again
Fredou
Thanks Fredou, but check the first sentence in my question. Measuring has been part of my job for 15 years; measuring is more about solving performance problems "after the fact". But what story do you tell before you start building? What's the general "rule of thumb"?
Abel
@Abel, what do you know about the application itself? What does it do? If that's unknown, I think you will need to adjust after finding out. A simple ASP.NET app that communicates with SQL Server 2005 using framework 3.5 doesn't need much; you could grab a desktop PC at Best Buy and it would work nicely. Do you expect a lot of writes (to the DB) or mostly views?
Fredou
I added a bounty. I'm really looking for a rule of thumb here, something you can use in early negotiations or meetings, before you really know exactly what, and how, you are going to build.
Abel
your "bad math" is an excellent example of how not to do it (which is probably why you call it that: bad math). Multi-threading, multi-core processors and IO waits all play a part and make it impossible to simply multiply. The thing is, there's even a number X simultaneous users for which the average response time will not change. Only above that X response time for each request will increase, and often exponentially so due to the "traffic jam" effect.
Abel
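
As an illustration of that traffic-jam effect, here is a toy single-server queueing sketch using the classic R = S / (1 - U) approximation; the 50 ms service time is an assumed figure, and real multi-core, multi-threaded servers soften the curve, but the knee is the same:

```csharp
// Toy queueing model: response time R = S / (1 - U) for a single server with
// service time S and utilization U. R barely moves at low load, then explodes
// as U approaches 1: the "traffic jam" knee.
using System;

class TrafficJam
{
    static void Main()
    {
        double serviceTimeMs  = 50;                          // assumed CPU time per request
        double capacityPerSec = 1000.0 / serviceTimeMs;      // ~20 requests/s for one worker

        foreach (double load in new[] { 2.0, 10.0, 16.0, 19.0, 19.9 })
        {
            double utilization = load / capacityPerSec;
            double responseMs  = serviceTimeMs / (1 - utilization);
            Console.WriteLine("{0,5:F1} req/s -> {1,5:F1}% busy -> ~{2,6:F0} ms per request",
                              load, utilization * 100, responseMs);
        }
    }
}
```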
+1  A: 

It depends greatly on how much work you're doing on the server. Some apps might handle hundreds, others only tens.

Joel Coehoorn
I agree, but if it's tens or hundreds, it doesn't sound reasonable to think of thousands or even tens of thousands, or is it?
Abel
+2  A: 

You obviously understand that it depends on the app, and that the best way to understand what an app can do or support is to measure it. In the past I've used the Transaction Cost Analysis (TCA) methodology from Microsoft to get some fairly good estimates. I used it back in the day with Site Server Commerce Edition 3.0, and today with modern ASP.NET apps, and it works fairly well.

This link is a snippet from the "Improving .NET Application Performance and Scalability" book from Microsoft; it details the formulas you can use with performance data (CPU usage, IIS counters, etc.) to calculate the number of users you can support on your site. I couldn't post a second link to the book, but if you search for scalenet.pdf on Google/Bing you'll find it. Hope this helps.
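
As a rough paraphrase of the kind of arithmetic TCA walks you through (my own sketch with example numbers; the book provides the proper worksheets and counters to use):

```csharp
// Transaction-cost-style sketch: turn a measured load test into a per-request CPU
// cost in megacycles, then project that cost onto the target hardware.
// All figures below are example assumptions.
using System;

class TransactionCostSketch
{
    static void Main()
    {
        // Measured during a load test on a test box (assumed example figures).
        int    testCpus          = 2;
        double testCpuMhz        = 2000;
        double observedCpuPct    = 55;      // average CPU utilization during the test
        double observedReqPerSec = 110;     // requests/s sustained during the test

        double costMcyclesPerReq = testCpus * testCpuMhz * (observedCpuPct / 100) / observedReqPerSec;

        // Projected onto the production box at a safe utilization ceiling.
        int    prodCpus   = 4;
        double prodCpuMhz = 2000;
        double targetPct  = 70;

        double maxReqPerSec = prodCpus * prodCpuMhz * (targetPct / 100) / costMcyclesPerReq;

        Console.WriteLine("Cost per request: ~{0:F0} Mcycles", costMcyclesPerReq);
        Console.WriteLine("Projected capacity: ~{0:F0} requests/s at {1}% CPU", maxReqPerSec, targetPct);
        // Concurrent users follow from the request rate per user,
        // e.g. one page every 30 s => users ~= maxReqPerSec * 30.
    }
}
```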

Ameer Deen
This is a very interesting link. I know all about measuring, but this gives a bit of a proof-of-concept method that might end up being useful. Not a rule of thumb, but the most useful answer so far.
Abel
+3  A: 

Since you're looking for actual numbers, I'll provide some. We are creating a secure HR application using ASP.NET MVC. Like you, we wanted to get a good feel for the maximum number of concurrent connections, which we defined as the max number of pages served in 10 seconds (assuming a user would not wait more than 10 seconds for a page).

Since we were looking for an upper bound, we used a very simple page: SSL plus a few session variables. On a dual quad-core Xeon (8 cores total) with 16 GB of memory and SQL Express as the backend, we were able to hit ~1,000 "concurrent" connections. Neither memory nor SQL Express was the limiting factor, though; it was primarily processor and I/O in our test. Note that we did not use caching, although for a shopping cart I doubt you would either. The page hit the database ~3 times and sent ~150 KB of data (mostly PNG images, uncached). We verified that 1,000 sessions were created, although each was small.

Our point of view is that 1,000 is likely unrealistic. Tests including pages with business logic and real user data showed ~200 concurrent users max. However, some users will also be running reports, which can chew up an entire core for up to 30 seconds; in that scenario, 9 concurrent report users could basically make the system unusable for everyone else. This goes to the other posters' points: you can grab all the other performance numbers you want, but your system might behave entirely differently based on what it's doing.
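
Working those figures back of the envelope (just arithmetic on the numbers quoted above, no extra measurements):

```csharp
// Back-calculating the measurements above: throughput, session memory, and why a
// handful of report users can starve an 8-core box.
using System;

class MeasuredNumbers
{
    static void Main()
    {
        double pagesIn10s  = 1000;                        // the "concurrent connections" definition
        double pagesPerSec = pagesIn10s / 10;             // ~100 pages/s on the simple page

        double sessionKb = 20;                            // sessions were small
        double sessionMb = pagesIn10s * sessionKb / 1024; // ~20 MB of session state vs. 16 GB of RAM

        int    cores          = 8;
        double reportCoreSecs = 30;                       // one report can own a core for 30 s

        Console.WriteLine("~{0:F0} pages/s; ~{1:F0} MB of session state", pagesPerSec, sessionMb);
        Console.WriteLine("{0} simultaneous {1}-second reports are enough to keep every core busy",
                          cores, reportCoreSecs);
    }
}
```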

Jess
Thanks for your detailed answer. This is very useful indeed and does bring me closer to a "rule of thumb" figure, even though one could argue that it is very specific to your situation (but that's what it will always be, and we just adapt). I had some trouble deciding whether I should give the bounty to you or to Ameer Deen. While I consider both very useful, yours comes closest to answering the actual question; ergo, I choose yours.
Abel