On my 64-bit Debian/Lenny system (4 GB RAM + 4 GB swap partition) I can successfully do:
import numpy as np
from scipy.fftpack import fftn
v = np.array(10000 * np.random.random([512, 512, 512]), dtype=np.int16)
f = fftn(v)
but with f being np.complex128, the memory consumption is shocking, and I can't do much more with the result (e.g. modulate the coefficients and then f = ifftn(f)) without getting a MemoryError traceback.
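
For scale (assuming the same 512³ grid), a quick back-of-the-envelope check of the per-array footprints shows why this blows past 4 GB once a couple of temporaries exist:

import numpy as np

n = 512 ** 3                                                  # 134,217,728 elements
print(n * np.dtype(np.int16).itemsize / 2 ** 20, "MiB")       # v as int16:      256 MiB
print(n * np.dtype(np.complex128).itemsize / 2 ** 30, "GiB")  # f as complex128:   2 GiB
print(n * np.dtype(np.complex64).itemsize / 2 ** 30, "GiB")   # f as complex64:    1 GiB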
Rather than installing more RAM and/or expanding my swap partition, is there some way of controlling the scipy/numpy "default precision" so that it computes a complex64 array instead?
I know I can just reduce the result afterwards with f = np.array(f, dtype=np.complex64); what I'm after is having it actually do the FFT work in 32-bit precision and use half the memory.
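
For reference, this is the post-hoc downcast I mean, sketched on a smaller grid so it runs comfortably. It only shrinks the stored result, because the complex128 array has already been allocated by the time the cast happens:

import numpy as np
from scipy.fftpack import fftn, ifftn

v = np.array(10000 * np.random.random([64, 64, 64]), dtype=np.int16)
f = fftn(v)                     # computed and returned as complex128
f = f.astype(np.complex64)      # halves the storage, but only after the peak allocation
# ... modulate the coefficients here ...
v2 = ifftn(f)                   # whether this stays complex64 depends on the FFT backend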