I work on a large financial pricing application which performs some long-running calculations. We have identified some functions that can be sped up by the selective application of psyco. My management have requested an assessment of the costs and benefits of adding psyco to our stack.
Given the critical nature of my project, it is not acceptable for a "performance enhancement" to potentially reduce reliability. I have read that psyco gains its extra performance at the cost of increased memory use, and I am worried that this could be a problem.
I'm doing it like this:
    @psyco.proxy
    def my_slow_function(xxx):
        # ... existing function body ...
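Since a hard dependency on psyco is itself a reliability concern, one common pattern is to guard the import so the decorator degrades to a no-op when psyco is absent. A minimal sketch, using a hypothetical stand-in for one of the hot functions:

    try:
        import psyco
        _bind = psyco.proxy            # compile this function with psyco
    except ImportError:
        _bind = lambda f: f            # psyco unavailable: leave the function unchanged

    @_bind
    def my_slow_function(x):
        # hypothetical stand-in for one of the small mathematical hot functions
        return x * x + 1

With this guard the same code runs (merely unoptimized) on any platform where psyco is missing or has been deliberately disabled, which limits the risk of the optimization layer taking the application down with it.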
In all, we expect to apply psyco to no more than 15 functions, which are used very heavily. There are thousands of functions in this library, so this affects only a tiny subset of our code. All of the affected functions are small, mathematical, and stateless.
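Since only ~15 candidates are involved, it is cheap to benchmark each one before and after binding it, and only keep psyco on the functions where the speedup clearly pays for the extra memory. A sketch using the standard timeit module, with a hypothetical candidate function:

    import timeit

    def candidate(n):
        # hypothetical small, stateless, mathematical hot function
        total = 0.0
        for i in range(1, n + 1):
            total += 1.0 / (i * i)
        return total

    # Time the plain version; after applying psyco.proxy to the same
    # function, rerun the identical measurement and compare the two.
    plain = timeit.timeit(lambda: candidate(1000), number=200)
    print("plain: %.6f s for 200 calls" % plain)

Keeping a per-function before/after number also gives management a concrete benefit figure to weigh against the memory cost.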
- Is there likely to be a risk that this will use substantially more memory?
- Are there any other problems we might encounter when adding this component to our long-established library?
FYI, the platform is Python 2.4.4 on 32-bit Windows XP.
UPDATE: It seems that the main potential risk is that the program will require more memory to run than it did before psyco was added, so ideally I would like a way to check whether adding psyco dramatically changes the memory requirements of the system.
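One way to check this is to sample the process's peak memory before and after binding psyco and running a representative pricing workload. The sketch below uses the Unix-only `resource` module purely to illustrate the shape of the comparison; on Windows XP the equivalent figure (peak working set) would come from Task Manager, perfmon, or a ctypes call to `GetProcessMemoryInfo` — that substitution is an assumption on my part, not anything psyco-specific:

    import resource

    def peak_memory_kb():
        # Peak resident set size of this process (kilobytes on Linux;
        # note ru_maxrss is reported in bytes on some other Unixes).
        return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

    before = peak_memory_kb()
    # ... import psyco and bind it to the hot functions here ...
    data = [x * x for x in range(100000)]   # stand-in for a representative workload
    after = peak_memory_kb()
    print("peak RSS grew by %d KB" % (after - before))

Running the same workload twice — once with psyco bound and once without — and diffing the two growth figures would show how much of the increase is attributable to psyco itself.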