Question:
- Why is computer simulation typically resource-intensive?
For instance, Simul8, a Discrete Event Simulation package: why is it computationally intensive, and what factors (calculations) contribute to this?
Computer simulation typically runs multiple scenarios rapidly and compares them.
For example, financial simulations are typically run as Monte Carlo simulations with many thousands of runs.
"A simulation can typically involve over 10,000 evaluations of the model, a task which in the past was only practical using supercomputers." (source: http://www.vertex42.com/ExcelArticles/mc/MonteCarloSimulation.html)
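To make the cost concrete, here is a minimal Monte Carlo sketch in Python; the model (a hypothetical net-present-value calculation with noisy yearly cash flows) and all its numbers are invented purely for illustration. The total cost is simply the cost of one model evaluation multiplied by the run count, which is why tens of thousands of runs add up quickly.

```python
import random

def simulate_once():
    # Hypothetical toy model: uncertain yearly cash flows,
    # discounted back to present value.
    discount_rate = 0.08
    npv = -1000.0  # initial investment
    for year in range(1, 11):
        cash_flow = random.gauss(150.0, 40.0)  # noisy yearly return
        npv += cash_flow / (1 + discount_rate) ** year
    return npv

# Thousands of independent runs: cost scales linearly with run count.
runs = 10_000
results = [simulate_once() for _ in range(runs)]
print(sum(results) / runs)                  # mean outcome
print(sum(r > 0 for r in results) / runs)   # estimated probability of a positive NPV
```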
Discrete event simulation is an extremely broad term; you can simulate anything from a lemonade stand, to a multinational business' transactions and logistics, to complex software systems, to novel computer architectures that do not yet exist (and are much more complex and advanced than the machine on which the simulation runs).
I'll use an example from my field (computer architecture), but the ways in which it is computationally expensive should generalize fairly well. Many times, you are trying to simulate a distributed system, with many somewhat independent agents, each with its own simpler control logic, which together implement a very complex dynamic. In the case of computing systems, the combined working set of the simulator is at least as large as the architectural, microarchitectural, and memory state of all of the constituent components combined. If each component is even modestly complex, this means that your temporal and spatial locality as you complete each timestep of the simulation is drastically reduced. The poor cache utilization implied by needing to run through the entire working set each timestep can affect performance by one to two orders of magnitude. This pattern is unavoidable, as running each component independently for multiple timesteps and only merging the results periodically is problematic, and more so the more complex and coupled your system is. A toy sketch of this timestep pattern follows.
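The sketch below (in Python, with component counts and state sizes invented for illustration) shows the main-loop shape described above: every timestep touches every component's state once, so the whole combined working set is streamed through the cache on every step.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    # Stand-in for one component's architectural/microarchitectural state.
    state: list = field(default_factory=lambda: [0] * 1024)

    def step(self, t):
        # Each timestep reads and writes this component's entire state,
        # so there is little cache reuse across components.
        for i in range(len(self.state)):
            self.state[i] = (self.state[i] + t) & 0xFFFF

# Many coupled components: the combined working set easily exceeds the cache,
# and every timestep walks through all of it before the next step can start.
components = [Component() for _ in range(256)]
for t in range(10):
    for c in components:
        c.step(t)
```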
Additionally, you often want to keep all kinds of statistics, which introduce considerable additional space and time overhead on top of the component simulation itself.
In short, your lower bound is the sum of the complexity of all the components of your simulation. In practice, there is a lot of inefficiency introduced if you have many components, if your components are more complex than, or substantially different from, the host machine on which the simulation runs, and if you have any significant amount of instrumentation.
One last thing: discrete event simulation often involves placing items in queues and deciding which queue a request belongs in by chasing a bunch of pointers. These operations are difficult to parallelize, complicating matters further. However, as I mentioned earlier, the term "discrete event simulation" can encompass anything from the trivial to the impossible, so extracting general patterns is difficult. A minimal sketch of that queue-driven event loop follows.
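For concreteness, the core of most discrete event simulators is an event loop over a priority queue keyed by timestamp, plus per-entity waiting queues. The Python sketch below is a hypothetical, stripped-down version of that loop; the event names and the single "arrival" process are made up for illustration.

```python
import heapq
import itertools

_counter = itertools.count()  # tie-breaker so equal-time events never compare handlers

def schedule(queue, time, handler, payload):
    heapq.heappush(queue, (time, next(_counter), handler, payload))

def run(queue, horizon):
    # Core discrete-event loop: pop the earliest event, run its handler,
    # and let the handler schedule follow-up events.
    while queue:
        time, _, handler, payload = heapq.heappop(queue)
        if time > horizon:
            break
        handler(queue, time, payload)

# Hypothetical example: a job arrives every 5 time units and is
# appended to a waiting queue (the kind of per-entity queue mentioned above).
waiting = []

def arrival(queue, time, job_id):
    waiting.append(job_id)
    print(f"t={time}: job {job_id} queued (queue length {len(waiting)})")
    schedule(queue, time + 5, arrival, job_id + 1)

events = []
schedule(events, 0, arrival, 0)
run(events, horizon=20)
```

Because each event handler depends on the state left by earlier events and on pointer-linked queue structures, the loop is inherently sequential, which is why parallelizing it is hard.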