views: 1587 · answers: 3
In many enterprise system architectures it becomes imperative to size the hardware according to concurrency and workload requirements. Most product vendors provide their own hardware sizing sheets: you plug in the metrics, and the sheet produces the number of servers, the amount of RAM required, and so on.

What I'd like to know is how we arrive at these sizes. Say there is a concurrency requirement of 1,000 users: what considerations would lead one to conclude that two servers in a cluster are required to meet it?

What are the rules of thumb for determining how many users, threads, etc. a processor can handle (for x86, RISC, etc.)? At the start of a project, how do you efficiently determine the sizing for an enterprise system?

+3  A: 

This is perhaps one of the toughest questions to answer. I was watching this thread with interest to see what others may have thought.

The answer needs to be made considering the hardware platform, the OS, the application server, the database server, etc. that your product runs on, as well as the relative complexity of your product. A site serving static HTML will scale to many more users than an OLTP system.

Knowing the innate capabilities of your target platform is critical. Knowing that ASP.NET supports 12 concurrently executing threads per CPU (default config), that output caching can greatly reduce concurrency demands, or that sustaining more than 3,000 requests/sec requires gigabit Ethernet to the database server can help you plan properly and know which levers you have to pull.
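As a rough illustration of how such per-platform limits feed a sizing estimate, here is a minimal sketch using Little's Law. The 12 threads/CPU figure is the ASP.NET default quoted above; the service time, think time, and CPUs-per-server values are illustrative assumptions, not measurements.

```python
import math

def servers_needed(users, service_time_s=0.2, think_time_s=10.0,
                   threads_per_cpu=12, cpus_per_server=4):
    """Rough server count via Little's Law.

    threads_per_cpu=12 mirrors the ASP.NET default quoted above; the
    other defaults (200 ms service time, 10 s think time, 4 CPUs per
    server) are placeholder assumptions to be replaced with real data.
    """
    # Fraction of time each user actually has a request in flight.
    in_flight = users * service_time_s / (service_time_s + think_time_s)
    per_server = threads_per_cpu * cpus_per_server
    return math.ceil(in_flight / per_server)

print(servers_needed(1000))                    # ~20 requests in flight -> 1 server
print(servers_needed(1000, think_time_s=0.0))  # everyone requesting at once -> 21 servers
```

The point of the sketch is that "1,000 concurrent users" means very different things depending on think time, which is exactly why vendor sizing sheets ask for workload detail rather than a single user count.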

The vendor hardware sizing sheets reflect significant effort spent on performance and capacity testing of their products. That level of effort can be a tough sell for applications you're developing in-house or as not-for-profit work.

In short, your goal should be to start building proofs of concept (POCs) of the more complex areas of your product, and then invest in ongoing performance and capacity-planning iterations.

If this isn't done, failure is likely to occur - failure to perform, failure to do capacity planning, or business failure because the users never came.

I wish that I had a better answer - I am faced with this same challenge myself.

JohnW
A: 

If you have the luxury of not having to deploy to all 1,000 users on day one, I'd be tempted to use virtualisation to help with this problem. First build the servers on bare metal and check that they behave functionally as you need. Then use the P2V converter of whatever VM software you prefer to convert the physical machine to a virtual disk image. Remove the server's original disks and store them safely, fit new ones, install your hypervisor of choice, add the converted VM, fire it up, install the paravirtualisation tools for your VM/OS combination, and see how you get on.

If the server works, what you've given yourself is portability. Start with a hundred or so users, measure the load, extrapolate, and make some assumptions. Then add more users, test your assumptions, and so on. If you get to 1,000 users with room to spare, great: you can stay with the virtual environment (pros: good DR options and portability; cons: you lose some performance) or go back to the bare-metal build knowing it will handle the work. If your tin is getting hot, you can either move the VM to bigger/better/faster hardware very easily, or copy the VM to another physical VM host and cluster that way.
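The measure-and-extrapolate step above can be sketched roughly as follows. The utilisation samples and the 60% headroom target are made-up assumptions for illustration; in practice you would substitute figures measured on your own VM.

```python
import math

def fit_slope(samples):
    """Least-squares slope through the origin: utilisation ~= slope * users."""
    num = sum(u * c for u, c in samples)
    den = sum(u * u for u, _ in samples)
    return num / den

# (users, CPU utilisation fraction) measured on one VM -- assumed data.
measured = [(50, 0.04), (100, 0.08), (150, 0.13)]

slope = fit_slope(measured)
target_users = 1000
projected_util = slope * target_users

# Keep headroom: plan for roughly 60% peak utilisation per host.
hosts = math.ceil(projected_util / 0.6)
print(f"projected utilisation: {projected_util:.0%}, hosts needed: {hosts}")
```

A linear fit is only a first approximation - contention effects usually make load grow faster than linearly near saturation - which is why the answer above recommends re-testing the assumptions as you add users.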

I know that doesn't answer your questions directly, but I'm not sure there really are rules of thumb for this, as per-user load fluctuates hugely based on so many factors.

If you have a month or so until you have to order your servers you might consider the new Nehalem-based Xeons - they're really worth the wait.

Chopper3
Thanks Chopper3. That was a pretty good overview of using virtualisation to adaptively build up the required infrastructure. However, what I'm looking for are readily available base metrics on user-handling capacity, concurrency support, workload, etc. for various processor/server families.
gnlogic
A: 

The following points may help you:

  • Choose a Measure for Sizing
  • Estimate Workload
  • Estimate CPU Sizing Model for Online Processing (Application and DB servers)
  • Estimate RAM Size
  • Estimate Hard Disk Size
  • Estimate Network Bandwidth Sizing
  • Estimate Batch Processing Size
  • Check Hardware and Software vendors about their processing
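As a hedged illustration of how those checklist items might combine, here is a single pass from assumed workload figures to rough resource estimates. Every constant below is a placeholder to be replaced with your own measured or vendor-supplied values.

```python
# Illustrative sizing pass: workload -> peak load, bandwidth, RAM, disk.
# All numbers are assumptions, not recommendations.

workload = {
    "concurrent_users": 1000,
    "requests_per_user_per_min": 6,
    "avg_response_kb": 40,          # payload per response (assumed)
    "session_ram_mb": 2,            # RAM held per active session (assumed)
    "data_growth_gb_per_year": 50,  # database growth (assumed)
}

# Peak request rate if all users are active.
rps = workload["concurrent_users"] * workload["requests_per_user_per_min"] / 60

# Network: KB/s of responses converted to Mbit/s.
bandwidth_mbps = rps * workload["avg_response_kb"] * 8 / 1000

# Application-tier RAM just for session state, in GB.
app_ram_gb = workload["concurrent_users"] * workload["session_ram_mb"] / 1024

# Disk sized for a three-year horizon, before RAID/backup overhead.
disk_gb_3yr = workload["data_growth_gb_per_year"] * 3

print(f"peak load: {rps:.0f} req/s")
print(f"network: {bandwidth_mbps:.0f} Mbit/s")
print(f"app RAM for sessions: {app_ram_gb:.1f} GB (plus OS/app overhead)")
print(f"disk over 3 years: {disk_gb_3yr} GB (before RAID/backup)")
```

Each line maps to one checklist item (workload, bandwidth, RAM, disk); CPU sizing and batch windows would need the kind of measured throughput data discussed in the other answers.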
learner