Hello,

I'm stuck trying to implement a single-server queue. I've adapted some pseudocode from Norm Matloff's SimPy tutorial to Python, and the code is here. Now I'm struggling to find a way to calculate the mean waiting time of a job/customer.

At this point my brain has tied itself into a knot! Any pointers, ideas, tips or pseudocode would be appreciated.

+5  A: 

You need to know when each customer arrived in the queue. When a customer reaches the server, add one to the number of customers served and accumulate the amount of time they waited. At the end of the simulation, divide the accumulated time by the number of customers served and you have the mean waiting time per job/customer.
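If it helps, here is a minimal sketch of that bookkeeping in plain Python (not SimPy); every name is illustrative and your own structures may well differ:

    # Hypothetical bookkeeping sketch: the waiting line stores each customer's
    # arrival time, so a customer's wait is (service start time - arrival time).
    queue = []            # arrival times of the customers currently waiting
    total_wait = 0.0      # accumulated waiting time
    num_served = 0        # customers who have reached the server

    def customer_arrives(now):
        queue.append(now)

    def server_takes_next_customer(now):
        global total_wait, num_served
        total_wait += now - queue.pop(0)
        num_served += 1

    # at the end of the run:
    # mean_wait = total_wait / num_served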

The core problem is in accounting for different events and updating statistics based on those events.

Your simulation should start by initializing all of its structures into a reasonable state (a small sketch of such a state follows the list):

  • Initialize the queue of customers to empty
  • Initialize any count of served customers to 0
  • Initialize any accumulated wait times to 0
  • Initialize the current system time to 0
  • Etc.
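A possible starting state, with purely illustrative names, might look like this:

    # Hypothetical initial state for the simulation.
    queue = []             # arrival times of waiting customers; empty at the start
    num_served = 0         # count of served customers
    total_wait = 0.0       # accumulated waiting time
    queue_area = 0.0       # time-weighted queue length, for the average queue size
    server_busy = False    # the server starts idle
    current_time = 0.0     # the simulation clock starts at zero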

Once the system has been initialized, you create an event for a customer arriving; the arrival times will normally be drawn from some given distribution. Generating system events will need to update the statistics of the system. You can choose at this point to generate all of the jobs'/customers' arrival times up front. The service time of each customer is also something you generate from a given distribution.
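For instance, if you assume Poisson arrivals and exponential service times (the usual M/M/1 setup; the distributions in your tutorial may differ), the times could be drawn like this; the parameter values are invented:

    import random

    mean_interarrival = 1.0    # assumed parameter
    mean_service = 0.8         # assumed parameter

    def next_interarrival():
        # Exponential interarrival times give a Poisson arrival process.
        return random.expovariate(1.0 / mean_interarrival)

    def next_service_time():
        return random.expovariate(1.0 / mean_service)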

You must then handle each event and update the statistics accordingly. For example, when the first customer arrives, the queue has been empty from the start of the simulation to the current time. The average number of customers in the queue is likely a parameter of interest, so you should accumulate 0 * elapsed seconds into an accumulator. Once that customer arrives at the empty queue you generate their service time. The next customer will arrive either before or after the current job finishes. If the next customer arrives before the previous one has been serviced, you add them to the queue (accumulating the fact that no one was waiting during that interval). Depending on which event occurs next, you accumulate the statistics for that time interval. The idle time of the server is also a parameter of interest in such simulations.
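One common way to handle that accumulation (continuing the illustrative names from the sketches above) is to update a time-weighted "area" whenever the clock advances to the next event:

    # Between two consecutive events the queue length does not change, so that
    # stretch of time contributes (queue length) * (elapsed time) to the total.
    def advance_clock(new_time):
        global current_time, queue_area
        queue_area += len(queue) * (new_time - current_time)
        current_time = new_time

    # at the end of the run:
    # average_queue_length = queue_area / current_time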

To make things clearer, suppose there are 18 people in line and the server has just completed the first customer's job. The interval between the arrival of the 18th customer and the time the first person's job completes contributes a weighted term to the accumulator: for example, there may have been 18 people in line for 4 seconds.

The server has not been idle, so you take an entry off the queue and start processing the next job. The job will take some amount of time, usually drawn from some distribution. If the next customer arrives before the current job is finished, the fact that 17 people were in line during that interval is added to your weighted accumulator.
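Continuing the same sketch, a departure handler along these lines would cover that case; next_service_time is the hypothetical sampler from above and next_departure is the time the job in service will complete:

    # Hypothetical departure handler: when a job finishes, the server either
    # pulls the next waiting customer or goes idle.
    def job_finishes(now):
        global server_busy, total_wait, num_served, next_departure
        if queue:
            arrival_time = queue.pop(0)        # e.g. 17 people remain in line
            total_wait += now - arrival_time   # this customer's wait ends now
            num_served += 1
            next_departure = now + next_service_time()
        else:
            server_busy = False
            next_departure = float("inf")      # nothing left in service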

Again at the fundamental level you are accumulating statistics between relevant events in your system:

while (current_time < total_simulation_time)
      handle_next_event
      generate_subsequent_events
      accumulate_statistics
      update_current_time
endwhile

Display "Average wait time: " accumulated_wait_time / number_of_customers_served

Hope that helps; sorry if it seems a bit long-winded.

ojblass
Thank you for your wonderful explanation. Although I -think- I understand the theory and ideas behind a simulation, the implementation is surprisingly difficult. I find it hard to keep all the details in my head and put them together in a way that will do something useful. Thanks again :-)
vinc456
yw... glad to share! There are some packages that make this easier, though I never bothered to learn any of them; it's good practice to feel the burn of doing one of these from scratch before branching out.
ojblass