views: 657
answers: 6
I'm looking at implementing a fuzzy logic controller based on either PyFuzzy (Python) or FFLL (C++) libraries.

I'd prefer to work with Python but am unsure whether the performance will be acceptable in the embedded environment it will run in (either an ARM or an embedded x86 processor, both with ~64 MB of RAM).

The main concern is that response times be as fast as possible: an update rate of 5 Hz or more would be ideal, and more than 2 Hz is required. The system would read from multiple sensors (probably 5) over an RS232 port and provide 2 or 3 outputs based on the result of the fuzzy evaluation.
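For concreteness, here is a rough sketch of the loop I have in mind (pyserial is assumed for the RS232 reads; the framing, port settings, and the evaluate() function are placeholders, not real pyfuzzy or FFLL calls):

    # Rough sketch of the intended 5 Hz loop; the serial framing and
    # evaluate() are placeholders, not real pyfuzzy/FFLL calls.
    import time
    import serial  # pyserial

    PERIOD = 0.2  # seconds, i.e. a 5 Hz target

    port = serial.Serial("/dev/ttyS0", 9600, timeout=0.05)

    def read_sensors(port):
        # Hypothetical framing: one comma-separated line for all 5 sensors.
        line = port.readline()  # e.g. b"1.0,2.0,3.0,4.0,5.0\n"
        return [float(v) for v in line.split(b",")]

    def evaluate(inputs):
        # Placeholder for the fuzzy evaluation step.
        return [sum(inputs) / len(inputs)] * 2

    while True:
        start = time.time()
        outputs = evaluate(read_sensors(port))
        # ... write the 2-3 outputs to the actuators here ...
        time.sleep(max(0.0, PERIOD - (time.time() - start)))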

Should I be concerned that Python will be too slow for this task?

+10  A: 

Python is very slow at handling large amounts of non-string data. For some operations it can be 1000 times slower than C/C++, so yes, you should investigate this and do the necessary benchmarks before writing time-critical algorithms in Python.

However, you can extend Python with modules written in C/C++, so that the time-critical parts are fast while the main code stays in Python.
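For example, a minimal ctypes sketch (the library name libfuzzy.so and the evaluate() signature are made up for illustration):

    # Minimal ctypes sketch: keep the hot loop in C, drive it from Python.
    # Assumes a shared library built from something like:
    #     double evaluate(const double *inputs, int n);
    import ctypes

    lib = ctypes.CDLL("./libfuzzy.so")  # hypothetical library name
    lib.evaluate.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int]
    lib.evaluate.restype = ctypes.c_double

    def evaluate(values):
        arr = (ctypes.c_double * len(values))(*values)
        return lib.evaluate(arr, len(values))

    print(evaluate([1.0, 2.0, 3.0, 4.0, 5.0]))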

Lars D
+1 I would also, as a baseline, try to write it in Python and replace the slow parts with C if needed.
Lennart Regebro
+29  A: 

In general, you shouldn't obsess over performance until you've actually seen it become a problem. Since we don't know the details of your app, we can't say how it'd perform if implemented in Python. And since you haven't implemented it yet, neither can you.

Implement whichever version you're most comfortable with and can implement fastest. Then benchmark it. If it is too slow, you have three options, which should be tried in order:

  • First, optimize your Python code (a short profiling sketch follows this list)
  • If that's not enough, write the most performance-critical functions in C/C++ and call them from your Python code
  • And finally, if you really need top performance, you might have to rewrite the whole thing in C++. But then at least you'll have a working prototype in Python, and you'll have a much clearer idea of how it should be implemented. You'll know what pitfalls to avoid, and you'll have an already correct implementation to test against and compare results to.
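For the benchmarking step, something like this is usually enough to find the hot spots (cProfile and pstats are in the standard library; fuzzy_step() below is just a stand-in for whatever the update function ends up being):

    # Profile a batch of simulated update cycles to find the hot spots.
    import cProfile
    import pstats

    def fuzzy_step(inputs):
        # Stand-in for one sensor-read + fuzzy-evaluation cycle.
        return sum(x * x for x in inputs)

    def run(n=10000):
        for _ in range(n):
            fuzzy_step([1.0, 2.0, 3.0, 4.0, 5.0])

    cProfile.run("run()", "profile.out")
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)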
jalf
+1 premature optimization is the root of all evil.
Tadeusz A. Kadłubowski
It is the root of all evil, and it's all over stack overflow.
FogleBird
Agree with jalf on all points. Also, if much of the work will be done via library calls, it doesn't matter very much whether you call the library from C or from Python. If you use built-in language features of Python, and C library modules, you may never even notice a performance hit for Python; if you try to write your own FFT in native Python or something like that, you will see horrible performance. Use Python to write the really high-level parts and try to let Python and C libraries do the heavy lifting.
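To illustrate, a toy comparison of the same dot product done as a pure Python loop and via NumPy, where the inner loop runs in C (assuming NumPy is available on the target):

    # Toy comparison: same computation, pure Python loop vs. NumPy.
    import timeit
    import numpy as np

    a = [float(i) for i in range(10000)]
    b = [float(i) for i in range(10000)]
    na, nb = np.array(a), np.array(b)

    t_py = timeit.timeit(lambda: sum(x * y for x, y in zip(a, b)), number=100)
    t_np = timeit.timeit(lambda: np.dot(na, nb), number=100)
    print("pure Python: %.3fs, numpy: %.3fs" % (t_py, t_np))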
steveha
@tkadlubo: I didn't use that quote because people too often misinterpret it. Performance matters, and optimization is important. Knuth's point was simply that in *most* of your code, there is nothing to be gained by optimizing, and until you know which parts of your code are bottlenecks, any attempt at optimizing is premature. Unfortunately, people often take it as an excuse to ignore performance and optimization.
jalf
@jalf: My view is that performance is always crucial, but in different circumstances different performance metrics are used. Often developers' time is the scarcest resource to optimize.
Tadeusz A. Kadłubowski
that's true (damn 15 char limit)
jalf
+5  A: 

Make it work, then make it work fast.

KevDog
Don't forget the middle step, "Make it work correctly."
JasCav
I put "make it work correctly" in my definition of done. But I hear you.
KevDog
+1  A: 

If most of your runtime is spent in C libraries, the language you use to call those libraries isn't important. What language are your time-eating libraries written in?

peufeu
A: 

From your description, speed should not be much of a concern (and you can use C, Cython, or whatever you want to make it faster), but memory would be. For an environment with at most 64 MB (where the OS and everything else has to fit as well, right?), there is a good chance that Python may not be the right tool for the target deployment.
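A quick way to sanity-check the footprint on the target (resource is in the standard library; ru_maxrss is reported in kilobytes on Linux):

    # Print the peak resident memory of the current Python process (Linux).
    import resource

    # import the heavyweight modules (pyfuzzy, pyserial, ...) here first

    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print("peak RSS: %d kB" % peak_kb)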

If you have non-trivial logic to handle, I would still prototype in Python, though.

David Cournapeau
A: 

I never really measured the performance of pyfuzzy's examples, but the new version 0.1.0 can read FCL files just like FFLL does. Just describe your fuzzy system in this format, write some wrappers, and check the performance of both variants.

For reading FCL with pyfuzzy you need the ANTLR Python runtime, but after reading you should be able to pickle the resulting object, so you don't need the ANTLR overhead on the target.
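Roughly like this (the Reader import path and load_from_file name shown are assumed from the pyfuzzy 0.1.0 layout and may need adjusting against your install; the pickling itself is plain standard library):

    # Development machine: parse the FCL file once, then pickle the result
    # so the target does not need the ANTLR runtime.  The Reader import
    # path below is an assumption about the pyfuzzy 0.1.0 layout.
    import pickle
    from fuzzy.storage.fcl.Reader import Reader

    system = Reader().load_from_file("controller.fcl")
    with open("controller.pickle", "wb") as f:
        pickle.dump(system, f)

    # Target: just unpickle, no ANTLR needed.
    with open("controller.pickle", "rb") as f:
        system = pickle.load(f)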

René Liebscher