In the past, and intermittently now, I've used simulation tools like Easy Java Simulations and NetLogo.

They are great tools for visually modeling various mathematical/comp-sci concepts because "all you have to do" is write the simulation loop; the graphics, etc. are handled for you.

However, one thing I have noticed is that improving execution time / modeling speed is extremely difficult using such tools, because the guts of the implementation are hidden under the surface.

There is, generally speaking, great documentation on how to use these simulator tools, but I haven't found anything on improving execution time.

For example, say you're implementing Newton's Method for root finding. It's a straightforward algorithm, but depending on

  • the type of graphic attachment you use, or
  • various other miscellaneous options chosen

the simulation will run at different speeds.
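To make the point concrete: the numerical core of Newton's method is only a few lines, so in a simulation environment the per-step wall-clock time is usually dominated by whatever the tool does to redraw the attached graphic, not by the math itself. A minimal standalone sketch (plain Java, not tied to any particular tool's API):

```java
// Newton's method for a root of f(x) = x^2 - 2 (i.e. sqrt(2)).
// The update x <- x - f(x)/f'(x) is cheap; display updates are not.
public class Newton {
    static double f(double x)  { return x * x - 2.0; }
    static double df(double x) { return 2.0 * x; }

    public static void main(String[] args) {
        double x = 1.0;                        // initial guess
        for (int i = 0; i < 20; i++) {
            double step = f(x) / df(x);
            x -= step;
            if (Math.abs(step) < 1e-12) break; // converged
        }
        System.out.println(x);                 // ~1.4142135623730951
    }
}
```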

Is there a way to determine an "optimal" display of the simulation's data?

I'm thinking of this in the case of using such a tool to teach classes about modeling/scientific programming.

+1  A: 

If all else fails, you can use a combination of these two approaches:

  • Second-guess the environment: ask yourself how YOU would implement its features, and then deduce which feature is probably going to require the least computational work.
  • Trial-and-error: just compare different methods by testing them out. It's a big help if the environment has some facility for timing your code, such as a function that (accurately) tells you what the time is now.
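Since EJS, NetLogo, and Repast all run on the JVM, even an environment with no built-in profiler usually lets you call out to `System.nanoTime()` to bracket a candidate implementation. A rough sketch of the trial-and-error approach (the `work()` body is a stand-in for whatever feature you're comparing):

```java
// Crude comparative timing on the JVM. Warm-up runs matter because
// the JIT compiler only optimises code after it has become "hot".
public class Timing {
    static double work() {                     // placeholder workload
        double s = 0.0;
        for (int i = 1; i <= 100_000; i++) s += 1.0 / i;
        return s;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1_000; i++) work(); // JIT warm-up

        long t0 = System.nanoTime();
        double result = work();
        long t1 = System.nanoTime();
        System.out.printf("%.3f ms (result %f)%n",
                          (t1 - t0) / 1e6, result);
    }
}
```

Run each candidate several times and compare the *best* or median timings; single measurements on the JVM are noisy.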

Don't forget about effects such as memory caching and JIT optimisations. If you use a particular feature in a new context, it may behave differently from what your previous experience suggests.

Artelius
+2  A: 

You may try the Repast Simphony agent simulation toolkit. It is a mature, free, open-source programming environment with lots of useful features. You can integrate Repast with Eclipse, which has a profiler plugin.

rics