views: 237
answers: 6

In a game that I am writing, I use a 2D vector class, which I wrote to handle the velocities of the objects. It is called a large number of times every frame, as there are a lot of objects on the screen, so any speed increase I can squeeze out of it will be useful.

It is pretty simple, consisting mostly of wrappers around the related math functions. It would be quite trivial to rewrite in C, but I am not sure whether doing so would make any significant difference, as all it really does is call the underlying math operations: add, multiply, or divide.
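
Roughly, the class is along these lines (a simplified, illustrative sketch rather than the real code):

```python
import math

class Vector2D(object):
    """Plain-Python 2D vector: every operation is a method call plus float math."""
    __slots__ = ('x', 'y')

    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y

    def __add__(self, other):
        return Vector2D(self.x + other.x, self.y + other.y)

    def __mul__(self, scalar):
        return Vector2D(self.x * scalar, self.y * scalar)

    def length(self):
        return math.hypot(self.x, self.y)
```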

So, my question is: under what circumstances does it make sense to rewrite in C? Where will you see a significant speed boost, and where can you get a reasonable speed boost without rewriting an extensive amount of the program?

+13  A: 

If you're vector-munging, give numpy a try first. Chances are you will get speeds not far from C if you utilize numpy's vector manipulation functions wisely.
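
As a rough sketch of what the numpy route looks like: keep the positions and velocities of all objects in (N, 2) arrays and update them with one vectorized operation per frame, instead of one method call per object (the array names and the frame-update step here are illustrative assumptions):

```python
import numpy as np

# One row per object: positions and velocities as (N, 2) float arrays.
n_objects = 1000
positions = np.zeros((n_objects, 2))
velocities = np.random.uniform(-1.0, 1.0, size=(n_objects, 2))

dt = 1.0 / 60.0  # assumed frame time

# A single vectorized update replaces N per-object method calls per frame.
positions += velocities * dt

# Lengths (speeds) of every velocity vector at once.
speeds = np.sqrt((velocities ** 2).sum(axis=1))
```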

Other than that, your question is fairly open-ended. If your code is too slow:

  1. Profile it - chances are you'll be able to improve it in Python
  2. Use the correct optimized C-based libraries (numpy in your case)
  3. Try psyco
  4. Try rewriting parts with cython
  5. If all else fails, rewrite in C
Eli Bendersky
This is... what?
Dominic Rodger
@Dominic, edited out. Thanks
Eli Bendersky
A: 

Never. Use Cython if you must.
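
A rough sketch of how that looks from the Python side, assuming the vector class were moved into a hypothetical vec2d.pyx with typed attributes (only pyximport itself is real Cython API here):

```python
# Sketch only: vec2d.pyx and the Vec2D class are hypothetical stand-ins for
# a Cython version of the vector class.
import pyximport
pyximport.install()          # compile .pyx modules transparently on import

from vec2d import Vec2D      # hypothetical Cython-compiled vector class

v = Vec2D(1.0, 2.0) + Vec2D(3.0, 4.0)
```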

Ignacio Vazquez-Abrams
"never" is a bit too harsh, of course
Eli Bendersky
-1 for saying 'never'
George Edison
That's the purest example of 'Python fanboyism', a trend that is poisoning programming.
Andrei Ciobanu
+1 writing the **whole** module in C is pointless; Cython does that for you just as well and lets you concentrate on the real thing, which is rewriting a function or class to be faster. It also makes calling pure C/C++ functions trivial, so even if its generated code is not good enough, it is still very useful for generating the glue code.
Luper Rouch
+9  A: 

First measure, then optimize.

Martin Beckett
Amen. Optimization without benchmarks is just voodoo. Designing with voodoo leads to bad software. That said, developing some intuition about where to benchmark is important, and that is sort of what this question is asking.
Andy Ross
Yes - I assumed other people with more personal experience would weigh in with numpy, Boost.Python, Cython, etc. I just thought I would hammer home the point.
Martin Beckett
-1 'Chances are you will get speeds not far from C if you utilize numpy's vector manipulation functions wisely.' Where 'not far' actually means: 'not even close, but somewhat satisfactory'.
Andrei Ciobanu
A: 

A nice profiler I use on Linux is pycallgraph - however, as your program gets bigger it produces much larger images, which are harder to trace. I'm pretty sure you can exclude modules, though.
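
Usage is roughly along these lines with the older module-level pycallgraph API (the exact names have changed between releases, and main() is a placeholder for the code you want to graph):

```python
import pycallgraph

pycallgraph.start_trace()                     # start recording calls (old-style API)
main()                                        # placeholder: run the code to be graphed
pycallgraph.make_dot_graph('callgraph.png')   # render the call graph via graphviz
```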

Steven Sproat
A: 

Common wisdom is "profile", "measure", etc. Well - maybe. Just get in the debugger and take 10 stackshots. If more than one of them terminates in your wrapper code, then it is costing roughly more than 10%, so you should consider redoing it in C to save that time. Chances are you will also find other things that are costing more than that.
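
One low-tech way to take those stackshots without sitting in a debugger is to dump the running stack from a signal handler and hit the key a few times while the game is busy (a sketch; the choice of SIGQUIT and the handler are my own, Unix/Linux only):

```python
import signal
import traceback

def dump_stack(signum, frame):
    # Print where the program is right now; frames that keep showing up
    # across several dumps are where the time is going.
    traceback.print_stack(frame)

# Ctrl-\ in the terminal sends SIGQUIT; each press prints one "stackshot".
signal.signal(signal.SIGQUIT, dump_stack)
```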

Mike Dunlavey
+1  A: 

You should never optimize anything, be it in C or any other language, without timing your code before and after your optimization:

  • your clever optimization could in fact induce a slowdown
  • optimizing something that takes 1% of the total execution time can never gain you more than 1% of overall performance

The common approach is:

  1. profile your code
  2. identify a hotspot
  3. time this hotspot (see the timeit sketch after this list)
  4. optimize it
  5. time the hotspot again and see if it's faster; if it's not, go back to step 3.
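
For steps 3 and 5, the standard library's timeit module is usually enough; in this sketch, vec_update and the mygame.vector import are placeholder names for whatever hotspot the profiler pointed at:

```python
import timeit

# Time the suspected hotspot in isolation, before and after each change.
setup = "from mygame.vector import vec_update"   # placeholder import
best = min(timeit.repeat("vec_update()", setup=setup, repeat=3, number=100000))
print("best of 3: %.3f s per 100000 calls" % best)
```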

If you can't find hotspots, it could mean that your app is already optimized, or that you are not using the right algorithm for your problem. In either case, profiling helps you understand what your code does.

For profiling Python code under Linux, you can use pyprof2calltree, which works in conjunction with KCachegrind and is totally awesome.
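
A typical workflow is to profile with the standard library's cProfile, then feed the stats file to pyprof2calltree so KCachegrind can open it (main() and the file names are placeholders):

```python
import cProfile
import pstats

# Profile the game's main loop and dump raw stats to a file.
cProfile.run('main()', 'game.prof')   # 'main()' is a placeholder entry point

# Quick terminal view: the 20 most expensive calls by cumulative time.
pstats.Stats('game.prof').sort_stats('cumulative').print_stats(20)

# Then, from the shell, something like:
#   pyprof2calltree -i game.prof -k    # -k opens the result in kcachegrind
```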

Luper Rouch