As long as you do only trivial amounts of work in your "main script" (the one you directly invoke with `python`, and which gets a `__name__` of `__main__`), you need not worry about "caching the pre-compiled Python bytecode": when you import `foo`, the bytecode compiled from `foo.py` gets saved to disk (in the same directory) as `foo.pyc`, as long as that directory is writable by you, so the already-cheap compilation to bytecode happens once, and "forever after" Python will load `foo.pyc` directly in every new process that does `import foo`. Within a single process, every `import foo` except the first one is just a fast lookup into a dictionary in memory (the `sys.modules` dictionary).
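To see that in-process caching for yourself, here's a minimal sketch (using the stdlib `json` module just as a stand-in for `foo`):

```python
import sys

import json                     # first import: finds, loads, and executes the module
print('json' in sys.modules)    # True -- the module object is now cached

import json                     # every later import is just a dict lookup, no re-execution
print(json is sys.modules['json'])  # True -- same object both times
```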
A core performance idea in Python: make sure every bit of substantial code happens within `def` statements in modules -- don't have any at module top level, in the main script, or especially within `exec` and `eval` statements/expressions!-)
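Here's a minimal sketch of why that advice pays off (the loop body is an arbitrary example): at module top level, `total` and `i` are globals, so every access is a dictionary lookup, while inside a `def` they are locals, compiled to much faster index-based bytecode:

```python
import time

# Top-level version: total and i are globals, each access is a dict lookup.
start = time.time()
total = 0
for i in range(1000000):
    total += i
print("top level: %.3fs" % (time.time() - start))

# Identical loop inside a def: total and i are locals, accessed by fast
# index-based bytecode instead of dictionary lookups.
def work():
    total = 0
    for i in range(1000000):
        total += i

start = time.time()
work()
print("in a def:  %.3fs" % (time.time() - start))
```

On most builds you should see the `def` version run noticeably (often roughly 2x) faster.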
I have no benchmarks for PHP vs Python, but I've noticed that Python keeps getting optimized pretty noticeably with every new release, so make sure you compare a recent release (ideally 2.7, at least 2.6) if you want to see "the fastest Python". If you still don't find it fast enough, Cython (a Python dialect designed to compile directly into C, and thence into machine code, with some limitations) is today the simplest way to selectively optimize those modules which profiling shows need it.
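As a minimal sketch of what that looks like (the module name `fib.pyx` and the function are hypothetical examples, not from any specific codebase): plain Python is already valid Cython, and adding C type declarations is what lets the hot loop compile down to plain C:

```cython
# fib.pyx -- hypothetical example module
def fib(int n):
    # cdef gives these names C types, so the loop below compiles
    # to a plain C loop with no Python-object overhead
    cdef int i
    cdef long a = 0, b = 1
    for i in range(n):
        a, b = b, a + b
    return a
```

Compile it (e.g. with the `cythonize` tool that ships with Cython: `cythonize -i fib.pyx`) and the resulting extension module imports like any other: `import fib; fib.fib(30)`.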