I would like to performance-test a typical web application. The application offers some Web 2.0 functionality, such as writing blog posts, editing wiki pages, searching content and the like. I've analysed the access log and now have an understanding of what the users actually do most frequently.

What I'm missing is how to proceed from there. I thought of the following methodology:

  • (A) split the functionality into transactions (write blog post, view wiki page, etc.)
  • (B) run each of these transactions with an increasing number of users (see the sketch after this list)
  • (C) produce reports of the form: "viewing a wiki page on hardware X can be performed by Y users concurrently, with memory as the limiting resource"
  • (D) mix multiple transactions into ONE scenario, which should be a realistic mapping of the real user load
  • (E) run this scenario with an increasing number of users, producing the same reports as in (C)
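
To make steps (A) to (C) concrete, here is a minimal sketch in plain Python (standard library only). The URL, user counts and request counts are placeholders; a real test would of course use a dedicated load tool, but the idea is the same: one transaction, increasing concurrency, throughput and latency recorded per step.

```python
# Minimal sketch of steps (A)-(C): drive one transaction ("view wiki page")
# with an increasing number of concurrent users and record throughput and
# average latency. URL and counts below are placeholders, not real values.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

BASE_URL = "http://localhost:8080/wiki/SomePage"  # hypothetical endpoint
REQUESTS_PER_USER = 20                            # arbitrary sample size

def view_wiki_page():
    """Run one 'view wiki page' transaction and return its latency in seconds."""
    start = time.time()
    with urllib.request.urlopen(BASE_URL) as response:
        response.read()
    return time.time() - start

def run_load(concurrent_users):
    """Run the transaction with a fixed number of concurrent users (step B)."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(lambda _: view_wiki_page(),
                                  range(concurrent_users * REQUESTS_PER_USER)))
    elapsed = time.time() - start
    print(f"{concurrent_users:4d} users: "
          f"{len(latencies) / elapsed:7.1f} req/s, "
          f"avg latency {sum(latencies) / len(latencies) * 1000:.0f} ms")

if __name__ == "__main__":
    for users in (1, 10, 50, 100, 200):  # increasing load, step (B)
        run_load(users)                  # output feeds the report in step (C)
```

The mixed scenario in (D) and (E) would replace the single transaction with a weighted mix of transactions, but keep the same ramp-up and reporting.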

What do you think about this, and what is your methodology?

+1  A: 

The problem you're going to run into is how to test all that reproducibly. Tests that aren't reproducible (i.e. manual tests) are of strictly limited utility.

Take a look at watir (pronounced like "water"); it gives good coverage and is very scriptable.

MarkusQ
This question is not about tools, but about methodology.
Mork0075
Methodology without tools is just philosophy. Further, what you are describing isn't so much a "methodology" as a test plan, and even at that it's rather vague. I'd be willing to bet that you'll adjust your test plan as you go, and that your choice of tool will outlive your guesses about the test plan.
MarkusQ
I don't think so. HP LoadRunner is the tool of choice. That is completely independent of how you cut the complete functionality into a representative sample to achieve realistic predictions of how the application behaves under load. It's the same as with unit testing: you can't test everything.
Mork0075
+1  A: 

You first need to know which activities can be performed on your site and, more importantly, some idea of what proportion of your total traffic each activity accounts for. For a simple blog it might look like this:

  • reading index page : 30%
  • reading post pages: 65%
  • creating comments: 4%
  • creating posts: 1%

You can then use some sort of testing framework to simulate this load and find out how many requests per second you can sustain. That gives you a hard number for capacity. You can also profile memory/CPU/network usage to see how each resource is utilised during the test.
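
As one possible sketch of such a weighted scenario, here is what it could look like in Locust (used purely for illustration; JMeter, LoadRunner or any other load tool can express the same idea). The task weights mirror the proportions above, and the URL paths are hypothetical.

```python
# Hedged sketch of a weighted load scenario with Locust; the paths are
# hypothetical and the weights mirror the traffic mix above (30/65/4/1).
from locust import HttpUser, task, between

class BlogVisitor(HttpUser):
    wait_time = between(1, 5)  # think time between actions, in seconds

    @task(30)
    def read_index(self):
        self.client.get("/")

    @task(65)
    def read_post(self):
        self.client.get("/posts/1")

    @task(4)
    def create_comment(self):
        self.client.post("/posts/1/comments", data={"body": "load test comment"})

    @task(1)
    def create_post(self):
        self.client.post("/posts", data={"title": "load test", "body": "..."})
```

Running it with a steadily increasing user count (e.g. `locust -f blog_load.py --host=http://localhost:8080`) shows the point at which requests per second stop scaling or latency degrades, which is exactly the capacity number described above.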

However, it really is important not to skip the actual usability testing. On a standard dynamic website, things will start to feel slow if a page takes more than about half a second to load. On an AJAX-enabled site you will find that the increased amount of feedback available to users gives them a higher tolerance for latency, so the limits of what is acceptable need to be investigated by a human.

Jack Ryan
Thanks! I got these statistics from a log-file analysis. So you would put all these transactions (with their respective distribution) into one scenario and run it? Would you also run single transactions, e.g. only reading index pages with perhaps 2,000 users? Does that add value?
Mork0075