We're implementing a new solution in our classic ASP environment that uses COM interop to instantiate a .NET component to do some work. The solution is working great, and our team lead wants to make sure that it will perform well under load, since this is the first time we'll be working with .NET components on our site.
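
For reference, the shape of the component is roughly like the sketch below; the names, GUIDs, and return value are placeholders rather than the real component, which does more than return a status string.

    using System;
    using System.Runtime.InteropServices;

    namespace LoadTestDemo
    {
        // Interface exposed to COM so classic ASP (late-bound VBScript) can call the component.
        [ComVisible(true)]
        [Guid("5E2A1C3B-8D4F-4B6A-9C7E-1F2A3B4C5D6E")]
        public interface IWorker
        {
            // Returns a status value so the calling ASP page can report the outcome.
            string DoWork(string recordId);
        }

        [ComVisible(true)]
        [Guid("0A1B2C3D-4E5F-4678-9ABC-DEF012345678")]
        [ClassInterface(ClassInterfaceType.None)]
        [ProgId("LoadTestDemo.Worker")]
        public class Worker : IWorker
        {
            public string DoWork(string recordId)
            {
                // The real component does its processing and database updates here.
                return "success";
            }
        }
    }

The assembly is registered for COM (for example with regasm), and the ASP page creates it with Server.CreateObject("LoadTestDemo.Worker").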

What do I need to consider to do a proper test to make sure that the new code I introduced won't break under load?


What I'm already doing:

I made a simple ASP web page that calls the new component based on the information in the query string. The query string also has an "off switch" parameter so I can test the page without the component by default. The page itself is bare except that it returns a value indicating whether the component succeeded, failed, or was skipped.

I wrote a simple console application that uses an HttpWebRequest to make iterative calls to the ASP page, using unique data from the database on each call. All it does is check the value that the page returns and save the data.
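
The harness looks roughly like this sketch (the URL, parameter name, and hard-coded IDs are placeholders; the real harness pulls unique records from the database and saves the result rather than printing it):

    using System;
    using System.IO;
    using System.Net;

    class LoadClient
    {
        static void Main()
        {
            // Placeholder IDs; the real harness reads unique records from the database.
            string[] recordIds = { "1001", "1002", "1003" };

            foreach (string id in recordIds)
            {
                // Placeholder URL and parameter name.
                string url = "http://devserver/loadtest.asp?recordId=" + id;
                var request = (HttpWebRequest)WebRequest.Create(url);

                DateTime start = DateTime.UtcNow;
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    string result = reader.ReadToEnd().Trim();   // "success", "failed", or "skipped"
                    double ms = (DateTime.UtcNow - start).TotalMilliseconds;

                    // The real harness saves this to the database instead of the console.
                    Console.WriteLine("{0}\t{1}\t{2:F0} ms", id, result, ms);
                }
            }
        }
    }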

I then deployed the console application to four different PCs on our internal network. In one test, I ran one instance of the application on each computer; in another, I ran five instances on each machine and configured them all to begin hitting the development server at the same time. Each instance of the application made 150 iterative requests to the web server.
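
For what it's worth, the same per-machine load could also come from a single process running several request loops on threads, along the lines of this sketch (names and URL are placeholders, and note that .NET caps concurrent connections per host at 2 unless DefaultConnectionLimit is raised):

    using System;
    using System.Net;
    using System.Threading;

    class ConcurrentLoadClient
    {
        const int Workers = 5;              // mirrors the five instances per machine
        const int RequestsPerWorker = 150;  // iterations each worker makes

        static void Main()
        {
            // .NET allows only 2 concurrent connections per host by default; raise it for the test.
            ServicePointManager.DefaultConnectionLimit = Workers;

            var go = new ManualResetEvent(false);
            var threads = new Thread[Workers];

            for (int i = 0; i < Workers; i++)
            {
                threads[i] = new Thread(() =>
                {
                    go.WaitOne();   // hold every worker until all are ready, so the load starts at once
                    for (int n = 0; n < RequestsPerWorker; n++)
                    {
                        // Placeholder URL; the real harness supplies unique data per request.
                        var request = (HttpWebRequest)WebRequest.Create(
                            "http://devserver/loadtest.asp?recordId=" + n);
                        using (request.GetResponse()) { }
                    }
                });
                threads[i].Start();
            }

            go.Set();   // release all workers simultaneously
            foreach (var t in threads) t.Join();
        }
    }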

For each scenario (one instance on each machine and five instances on each machine), I ran the test twice: once without the component being called, and once with the component being called. In both scenarios, requests with the component being called took about 2.2 times as long to complete as requests without it. We thought that wasn't very expensive considering the amount of processing we were doing and the number of trips being made to the database to update the data. Also, because the 2.2x ratio seemed consistent both when we hit the server with 4 concurrent connections and when we hit it with 20 concurrent connections, it seems to be operating OK.

The 20-instance tests certainly put a lot of load on our development server, both with and without the new component running, but the new component seemed to fare well enough under stress. However, I want to make sure that I went about this the right way and am not pointing to a positive pass of a weak test to prove that my component won't bring the server to its knees under peak load.

+1  A: 

There are companies out there like Push-to-Test and Gomez that we've used to prove that large-scale applications will work. Both simulate large numbers of users that take a specified path through your application. (In the case of Gomez, the tests run on actual users' machines, whose owners are paid pennies to run them.) They can simulate thousands of concurrent users and provide other services as well, such as uptime monitoring.

Both are paid services, but the software Push-to-Test uses is based on Selenium, so you may be able to build (or find) a load-test framework built on that.
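
If you go that route, a single scripted path with Selenium's C# WebDriver bindings might look something like the minimal sketch below (the URL is a placeholder; Selenium drives a real browser, so generating serious load means running many such sessions in parallel, which is part of what these services handle for you):

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Firefox;

    class SeleniumPathCheck
    {
        static void Main()
        {
            // Walks one user path through the test page and reads back the result.
            IWebDriver driver = new FirefoxDriver();
            try
            {
                driver.Navigate().GoToUrl("http://devserver/loadtest.asp?recordId=1001");
                string body = driver.FindElement(By.TagName("body")).Text;
                Console.WriteLine(body);   // should contain the success/failed/skipped value
            }
            finally
            {
                driver.Quit();
            }
        }
    }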

mattdekrey
+1  A: 

Visual Studio has a load-testing component, but what you have done is essentially the same thing. The benefit of the Visual Studio solution is the instrumentation: you can see where the likely performance bottleneck is.

The key to passing or failing the test should not be "Is the ratio of completion times okay?" but "Is the total response time okay for the given load?". The comparison with and without the new component is therefore less relevant than the absolute numbers.

Your test seems to be rigorous enough to say "it does not break under load", but because it was run in isolation from the live hardware and from whatever other processing a real page does, it is not a guarantee.

Did you measure the total number of requests per second? If that value is much higher than you would expect for a real application under peak load and the response time is reasonable then you do have some confidence in the results of the test as being a true positive.
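
As a back-of-the-envelope example (the figures below are made up; plug in the real counts and wall-clock time from your harness logs), requests per second and average response time fall out of the totals like this:

    using System;

    class ThroughputSummary
    {
        static void Main()
        {
            // Made-up figures; substitute the real counts and wall-clock time from the run.
            int clients = 20;                              // concurrent console-app instances
            int requestsPerClient = 150;                   // iterations each instance made
            TimeSpan wallClock = TimeSpan.FromMinutes(5);  // total duration of the run

            int totalRequests = clients * requestsPerClient;
            double requestsPerSecond = totalRequests / wallClock.TotalSeconds;
            double avgResponseMs = wallClock.TotalMilliseconds / requestsPerClient;

            Console.WriteLine("Throughput:   {0:F1} requests/sec", requestsPerSecond);       // 10.0 here
            Console.WriteLine("Avg response: {0:F0} ms per request per client", avgResponseMs); // 2000 here
        }
    }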

Nat
@Nat Thanks for your response. I measured the average time to complete requests, so I guess you could say I measured requests per second. I guess my point was that introducing my component increased the response time by the same percentage under both a heavy and a light load.
Ben McCormack
Also, isn't the testing component only in VS 2010 Ultimate Edition, which is a $10,000 SKU? I remember Richard doing a pretty cool demo of it on the dotNetRocks Road Show.
Ben McCormack
It is also in 2008 Test Edition. Having the same ratio indicates it is a predictable amount of effort, which is a pretty good sign, but the ultimate measure is going to be "did it take too long" in absolute terms, for whatever "too long" is. If you have really hammered the component and it still responds within X seconds (whatever you figure a decent response time needs to be), you can give it the "tick".
Nat