I have a Linux app (written in C) that allocates a large amount of memory (~60 MB) in small chunks through malloc() and then frees all of it (the app continues to run afterwards). This memory is not returned to the OS but stays allocated to the process.

Now, the interesting thing here is that this behavior happens only on Red Hat Linux and its clones (Fedora, CentOS, etc.), while on Debian systems the memory is returned to the OS after all the freeing is done.

Any ideas why there could be a difference between the two, or which setting might control it?

A: 

Some memory allocators don't return memory to the OS as soon as it is freed. Instead they keep it cached for reuse and defer the cleanup until it is actually needed. If you wish to confirm that this is what's happening, just do a simple test: allocate and free memory in a loop, more in total than you have physical memory available. If the process were really leaking, it would eventually fail; if the allocator is merely holding freed memory for reuse, it will keep succeeding.

eaanon01
Yes, I know why libc doesn't free the memory immediately. The question is why the two Linux systems behave so differently, and whether there is any way to control this deterministically.
StasM
+1  A: 

I'm not certain why the two systems would behave differently (most likely different malloc implementations or tuning in the two distributions' versions of glibc). However, you should be able to exert some control over the global policy for your process with a call like:

mallopt(M_TRIM_THRESHOLD, bytes)

(See this linuxjournal article for details).

You may also be able to request an immediate release with a call like

malloc_trim(pad)

where pad is the amount of free space to leave at the top of the heap.

(See malloc.h.) I believe both of these calls can fail, so I don't think you can rely on them working 100% of the time. But my guess is that if you try them out, you will find they make a difference.

Will Robinson