Is there a way to globally trap MemoryError exceptions so that a library can clear out caches instead of letting a MemoryError be seen by user code?

I'm developing a memory caching library in Python that stores very large objects, to the point where users commonly want to use all available RAM to simplify their scripts and/or speed them up. I'd like a hook by which the Python interpreter asks a callback function to release some RAM, as a way of avoiding a MemoryError being raised in user code.

OS: Solaris and/or Linux

Python: CPython 2.6.x


EDIT: I'm looking for a mechanism that wouldn't be handled by an except block. If a memory error occurred in any code, for any reason, I'd like the Python interpreter to first try a callback that releases some RAM, so that the MemoryError exception is never generated at all. I don't control the code that would trigger the errors, and I'd like my cache to be able to aggressively use as much RAM as it wants, automatically freeing it as user code needs it.

+1  A: 

A MemoryError is an exception; you should be able to catch it in an except block.
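For the question as literally asked, a try/except around the allocating call is the only standard hook. A minimal sketch (the load callable and cache object here are placeholders, not part of any existing API):

    def load_with_fallback(load, cache):
        """Call an allocation-heavy function; on MemoryError,
        clear the cache once and retry before giving up."""
        try:
            return load()
        except MemoryError:
            cache.clear()   # release whatever the cache is holding
            return load()   # retry once; a second MemoryError propagates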

Ranieri
I've attempted to clarify the question. I'd like my library to intercept the creation of a MemoryError in any code, even code that doesn't contain any try...except blocks.
Mr Fooz
+2  A: 

This is not a good way of handling memory management. By the time you see MemoryError, you're already in a critical state where the kernel is probably close to killing processes to free up memory, and on many systems you'll never see it at all, because the OS will swap heavily or simply OOM-kill your process rather than fail an allocation.

The only recoverable case in which you're likely to see MemoryError is after trying to make a very large allocation that doesn't fit in the available address space, which is common only on 32-bit systems.

If you want a cache that frees memory as needed for other allocations, it needs to hook into the allocator itself, not into errors. That way, when you need to release memory for an allocation, you'll know how much contiguous memory is needed rather than guessing blindly. It also means you can track allocations as they happen, so you can keep memory usage at a specific level, rather than letting it grow unfettered and then trying to recover when it gets too high.
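One way to read "hook into the allocator" for a pure-Python cache is to route the library's own allocations through a reserve step, so the cache knows how many bytes each request needs and evicts just enough to stay under a budget. A rough sketch of that idea (ReservingCache and its methods are illustrative, not an existing API; collections.OrderedDict needs Python 2.7+, so this is written for clarity rather than for the 2.6 target in the question):

    import collections

    class ReservingCache(object):
        """Cache that evicts oldest entries until a requested
        allocation fits under a fixed byte budget."""
        def __init__(self, budget_bytes):
            self.budget = budget_bytes
            self.used = 0
            self.items = collections.OrderedDict()  # key -> (value, nbytes)

        def reserve(self, nbytes):
            # Evict entries in insertion order until nbytes fits.
            while self.used + nbytes > self.budget and self.items:
                _, (_, size) = self.items.popitem(last=False)
                self.used -= size
            if self.used + nbytes > self.budget:
                raise MemoryError("request exceeds cache budget")

        def put(self, key, value, nbytes):
            if key in self.items:            # replacing: release old size first
                _, old_size = self.items.pop(key)
                self.used -= old_size
            self.reserve(nbytes)
            self.items[key] = (value, nbytes)
            self.used += nbytes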

I'd strongly suggest that for most applications this sort of caching behavior is overcomplicated, though; you're usually better off just dedicating a fixed amount of memory to the cache.
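With the fixed-budget approach, the same structure is simply sized once up front. Hypothetical usage of the ReservingCache sketch above:

    cache = ReservingCache(budget_bytes=512 * 1024 * 1024)  # cap at 512 MB

    payload = b"x" * (64 * 1024 * 1024)          # a 64 MB object
    cache.put("chunk-0", payload, len(payload))  # caller supplies the size
    # Once the budget is full, older entries are evicted automatically;
    # the cache never grows past its fixed share of RAM.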

Glenn Maynard