I have a system to which I must apply load for the purpose of performance testing. Some of the load can be created via LoadRunner over HTTP.

However, in order to generate realistic load for the system, I also need to simulate users via a command line tool which uses a non-HTTP protocol* to talk to the server.

* edit: actually it is HTTP, but we've been advised by the vendor that it's not easy to record/script and replay. So we're limited to invoking it using the CLI tool.

I have the constraint of not having the LoadRunner licences to do this, and not having the time to make the case for getting them.

Therefore I was wondering if there is a tool I could use to control the concurrent execution of a collection of shell scripts (it needs to run on Solaris), which will be my transactions. Ideally it would be able to ramp up in accordance with a predetermined schedule.

I've had a look around and can't tell whether JMeter will do the trick; it seems very web-oriented.

A: 

If all you need is to start a bunch of shell scripts in parallel, you can quickly create something of your own in Perl with `fork`, `exec` and `sleep`.

#!/usr/bin/perl
use strict;
use warnings;

# Reap finished children automatically so they don't linger as zombies.
$SIG{CHLD} = 'IGNORE';

for my $i (1 .. 1000)
{
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0)
    {
        # Child: replace this process with the script under test.
        exec("script.sh") or die "exec failed: $!";
    }

    # Parent: start one new instance per second.
    sleep 1;
}
amarillion
I know that I could fall back on this kind of arrangement, but ideally I'd like to be able to configure it to ramp up according to a predefined schedule. For example: start with 50 instances of `script.sh` running for 5 minutes, then after that time execute a further 50, and so on. I can code that in Perl too, but I'm kinda hoping that there's something that can do this for me :)
Tom Duckering
A: 

For anyone interested, I have written a Java tool to manage this for me. It reads a few files that control how it runs (a rough Java sketch of the parsing and launching follows the list):

1) Schedules File - defines various named lists of timings which control the lengths of the sequential phases.

e.g. `MAIN,120,120,120,120,120`

This will result in a schedule named MAIN which has 5 phases, each 120 seconds long.

2) Transactions File - defines the transactions that need to run. Each transaction has a name, a command to call, a boolean controlling repetition, an integer pause between repetitions (in seconds), a data file reference, the schedule to use, and per-phase increments.

e.g. `Trans1,/path/to/trans1.ksh,true,10,trans1.data.csv,MAIN,0,10,0,10,0`

This will result in a transaction that runs trans1.ksh repeatedly, with a pause of 10 seconds between repetitions, referencing the data in trans1.data.csv. During phase 1 it increments the number of parallel invocations by 0, phase 2 adds 10 parallel invocations, phase 3 adds none, and so on. Phase times are taken from the schedule named MAIN.

3) Data Files - as referenced in the transactions file, this is a CSV with a header row. Each line of data is passed to successive invocations of the transaction.

e.g.

HOSTNAME,USERNAME,PASSWORD
server1,jimmy,password123
server1,rodney,ILoveHorses

These get passed to the transaction scripts via environment variables (e.g. PASSWORD=ILoveHorses), which is a bit clunky, but workable.
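
For illustration, here's a minimal Java sketch of how lines from these files might be parsed and how a CSV row could be mapped onto environment variables. Everything here (`Schedule`, `Transaction`, `Launcher`) is a hypothetical reconstruction, not the actual tool's code; it assumes the comma-separated formats shown above and uses the standard `ProcessBuilder.environment()` map.

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.Map;

// Hypothetical config types, assuming the comma-separated formats above.
record Schedule(String name, int[] phaseSeconds) {
    // e.g. "MAIN,120,120,120,120,120"
    static Schedule parse(String line) {
        String[] f = line.split(",");
        return new Schedule(f[0],
                Arrays.stream(f, 1, f.length).mapToInt(Integer::parseInt).toArray());
    }
}

record Transaction(String name, String command, boolean repeat, int pauseSeconds,
                   String dataFile, String scheduleName, int[] phaseIncrements) {
    // e.g. "Trans1,/path/to/trans1.ksh,true,10,trans1.data.csv,MAIN,0,10,0,10,0"
    static Transaction parse(String line) {
        String[] f = line.split(",");
        return new Transaction(f[0], f[1], Boolean.parseBoolean(f[2]),
                Integer.parseInt(f[3]), f[4], f[5],
                Arrays.stream(f, 6, f.length).mapToInt(Integer::parseInt).toArray());
    }
}

// Launch one invocation with a CSV row exposed as environment variables,
// e.g. HOSTNAME=server1, USERNAME=jimmy, PASSWORD=ILoveHorses.
class Launcher {
    static Process launch(Transaction tx, String[] header, String[] row)
            throws IOException {
        ProcessBuilder pb = new ProcessBuilder(tx.command());
        Map<String, String> env = pb.environment();
        for (int i = 0; i < header.length; i++) {
            env.put(header[i], row[i]);
        }
        pb.inheritIO();     // let the script's output reach our console
        return pb.start();
    }
}
```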

My Java code simply parses the config files and sets up one manager thread per transaction, which in turn takes care of creating and starting executor threads in accordance with the configuration. Managers take care of adding executors linearly so as not to totally overload the system; a sketch of that loop is below.
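
That manager logic might look something like the following. Again, this is an illustrative reconstruction built on the hypothetical `Transaction` and `Schedule` types above, not the tool's actual code; the `workers` and `currentPhase` fields exist only to support the status reporting described next.

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative manager, one per transaction: at each phase boundary it adds
// the configured number of executors, spacing the starts out linearly so the
// target system isn't hit with them all at once.
class TransactionManager implements Runnable {
    final Transaction tx;
    final AtomicInteger workers = new AtomicInteger(); // for status reporting
    volatile int currentPhase;                         // for status reporting
    private final Schedule schedule;

    TransactionManager(Transaction tx, Schedule schedule) {
        this.tx = tx;
        this.schedule = schedule;
    }

    @Override
    public void run() {
        try {
            int[] phases = schedule.phaseSeconds();
            for (int p = 0; p < phases.length; p++) {
                currentPhase = p + 1;
                int toAdd = tx.phaseIncrements()[p];
                long phaseMillis = phases[p] * 1000L;
                if (toAdd == 0) {
                    Thread.sleep(phaseMillis);      // hold steady this phase
                    continue;
                }
                long spacing = phaseMillis / toAdd; // linear ramp-up
                for (int i = 0; i < toAdd; i++) {
                    new Thread(this::executeLoop).start();
                    workers.incrementAndGet();
                    Thread.sleep(spacing);
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // One executor: run the script, repeating with the configured pause.
    // (Env-var wiring from the Launcher sketch omitted for brevity.)
    private void executeLoop() {
        try {
            do {
                new ProcessBuilder(tx.command()).inheritIO().start().waitFor();
                Thread.sleep(tx.pauseSeconds() * 1000L);
            } while (tx.repeat());
        } catch (IOException e) {
            e.printStackTrace();                    // script failed to launch
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```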

When it runs, it just reports every second on how many workers each transaction has running and which phase it's in.
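
That per-second status line could come from a simple reporter thread reading the counters exposed by the `TransactionManager` sketch above; once more, just an illustrative sketch rather than the tool's actual code.

```java
import java.util.List;

// Illustrative reporter: prints one status line per transaction each second.
class Reporter implements Runnable {
    private final List<TransactionManager> managers;

    Reporter(List<TransactionManager> managers) {
        this.managers = managers;
    }

    @Override
    public void run() {
        try {
            while (true) {
                for (TransactionManager m : managers) {
                    System.out.printf("%s: phase %d, %d workers running%n",
                            m.tx.name(), m.currentPhase, m.workers.get());
                }
                Thread.sleep(1000);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```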

It was a fun little weekend project. It's certainly no LoadRunner, and I'm sure there are some massive flaws in it that I'm currently blissfully unaware of, but it seems to do OK.

So, in summary, the answer here was to "roll ya own".

Tom Duckering