In C/C++ I can allocate memory in one thread and delete it in another thread. Yet whenever one requests memory from the heap, the heap allocator needs to walk the heap to find a suitably sized free area. How can two threads access the same heap efficiently without corrupting the heap? (Is this done by locking the heap?)

+1  A: 

Yes, normally access to the heap has to be locked. Any time you have a shared resource, that resource needs to be protected; memory is a resource.
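To make that concrete, here is a minimal sketch (purely illustrative, not how any particular runtime actually implements it) of what "locking the heap" amounts to: a single mutex serializes every allocation and deallocation, so two threads can share the same heap without corrupting it.

    #include <cstdlib>
    #include <mutex>

    static std::mutex heap_mutex;   // protects the (imaginary) shared free list

    void* locked_malloc(std::size_t size) {
        std::lock_guard<std::mutex> guard(heap_mutex);
        return std::malloc(size);   // stands in for "walk the heap, find a free block"
    }

    void locked_free(void* p) {
        std::lock_guard<std::mutex> guard(heap_mutex);
        std::free(p);               // stands in for "return the block to the free list"
    }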

GMan
Even when each thread manages its own memory? That sounds horribly inefficient.
doron
Remember, correctness comes first, efficiency after.
Nikolai N Fetissov
@deus: No, but that's not the situation you described. You said the threads are sharing memory. (Deleting in another thread.)
GMan
A: 

This will depend heavily on your platform/OS, but I believe this is generally OK on major systems. C/C++ do not define threads, so by default I believe the answer is that the heap is not protected and that you must provide some sort of multithreaded protection for your heap accesses yourself.

However, at least with linux and gcc, I believe that enabling -pthread will give you this protection automatically...

Additionally, here is another related question:

http://stackoverflow.com/questions/796099/c-new-operator-thread-safety-in-linux-and-gcc-4
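As a quick sanity check of the case the question describes, here is a sketch (assuming g++ and glibc, using C++11 std::thread for brevity; build it with something like g++ -pthread example.cpp) that allocates in one thread and deletes in another:

    #include <thread>
    #include <vector>

    int main() {
        std::vector<int*> blocks(1000);

        std::thread producer([&] {
            for (auto& p : blocks) p = new int(42);   // allocate on one thread
        });
        producer.join();                              // hand the pointers over

        std::thread consumer([&] {
            for (auto p : blocks) delete p;           // free on a different thread
        });
        consumer.join();
        return 0;
    }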

M. Esh.
+1  A: 

This is an Operating Systems question, so the answer is going to depend on the OS.

On Windows, each process gets its own heap. That means multiple threads in the same process are (by default) sharing a heap. Thus the OS has to thread-synchronize its allocation and deallocation calls to prevent heap corruption. If you don't like the idea of the possible contention that may ensue, you can get around it by using the Win32 Heap* routines (HeapCreate, HeapAlloc, etc.) to give each thread its own private heap. You can even replace malloc (in C) and operator new (in C++) with versions that call them.
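For example (a minimal sketch, Windows-only, using the standard Win32 heap API), a thread can create a private heap and pass HEAP_NO_SERIALIZE so no locking is done at all; that is safe only as long as a single thread ever uses that heap:

    #include <windows.h>
    #include <cstdio>

    int main() {
        // Private, growable heap; HEAP_NO_SERIALIZE skips the internal lock,
        // which is only safe if a single thread ever touches this heap handle.
        HANDLE myHeap = HeapCreate(HEAP_NO_SERIALIZE, 0, 0);
        if (!myHeap) return 1;

        void* p = HeapAlloc(myHeap, 0, 256);   // 256 bytes from the private heap
        if (p) {
            std::printf("allocated at %p\n", p);
            HeapFree(myHeap, 0, p);            // must be freed back to the same heap
        }

        HeapDestroy(myHeap);                    // releases everything in one go
        return 0;
    }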

T.E.D.
+1  A: 

Yes an "ordinary" heap implementation supporting multithreaded code will necessarily include some sort of locking to ensure correct operation. Under fairly extreme conditions (a lot of heap activity) this can become a bottleneck; more specialized heaps (generally providing some sort of thread-local heap) are available which can help in this situation. I've used Intel TBB's "scalable allocator" to good effect. tcmalloc and jemalloc are other examples of mallocs implemented with multithreaded scaling in mind.

Some timing comparisons between single-threaded and multithread-aware mallocs here.
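By way of illustration, a minimal sketch of using TBB's scalable allocator (assuming TBB is installed and the program is linked against its tbbmalloc library):

    #include <tbb/scalable_allocator.h>
    #include <vector>

    int main() {
        // Drop-in replacement for std::allocator<int>: allocations come from
        // per-thread pools instead of one globally locked heap.
        std::vector<int, tbb::scalable_allocator<int>> v;
        for (int i = 0; i < 1000; ++i) v.push_back(i);

        // The C-style entry points can also stand in for malloc/free directly.
        void* p = scalable_malloc(256);
        scalable_free(p);
        return 0;
    }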

timday
Just out of interest, what are the malloc strategies for gcc and MSVC?
doron
Good question. I don't know much about MSVC's CRT, but gcc is generally associated with glibc, which uses ptmalloc: http://en.wikipedia.org/wiki/Malloc#dlmalloc_.28the_glibc_allocator.29. The timings link above shows this scaling pretty well, which would explain why my own experiments with TBB's allocator have it sometimes making things better, sometimes worse.
timday
+1  A: 

I found this link.

Basically, the heap can be divided into arenas. When memory is requested, each arena is checked in turn to see whether it is locked; this means that different threads can safely access different parts of the heap at the same time. Frees are a bit more complicated, because each block must be freed back into the arena it was allocated from. I imagine a good implementation will get different threads to default to different arenas to try to minimize contention.
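A toy sketch of that scheme (purely illustrative, not taken from any real allocator): a handful of arenas, each with its own lock; an allocation takes the first arena it can lock, and every block records which arena it came from so it can be freed back into the same arena.

    #include <cstdlib>
    #include <mutex>

    constexpr int kArenas = 4;

    struct Arena {
        std::mutex lock;
        // A real arena would own its own free lists; malloc/free stand in here.
        void* allocate(std::size_t n) { return std::malloc(n); }
        void release(void* p)         { std::free(p); }
    };

    static Arena arenas[kArenas];

    struct Header { int arena; };   // prepended to every block

    void* arena_malloc(std::size_t n) {
        for (int attempt = 0; ; ++attempt) {
            Arena& a = arenas[attempt % kArenas];
            // try_lock: skip over arenas another thread is currently using.
            if (a.lock.try_lock()) {
                std::lock_guard<std::mutex> guard(a.lock, std::adopt_lock);
                auto* h = static_cast<Header*>(a.allocate(sizeof(Header) + n));
                if (!h) return nullptr;
                h->arena = attempt % kArenas;   // remember the owning arena
                return h + 1;                   // user memory starts after the header
            }
        }
    }

    void arena_free(void* p) {
        if (!p) return;
        Header* h = static_cast<Header*>(p) - 1;
        Arena& a = arenas[h->arena];
        std::lock_guard<std::mutex> guard(a.lock);   // must lock the owning arena
        a.release(h);
    }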

doron