What are some really good reasons to ditch the standard STL allocators for a custom solution? Have you run across any situations where it was absolutely necessary for correctness, performance, scalability, etc? Any really clever examples?

Custom allocators have always been a feature of the STL that I haven't had much need for. I was just wondering if anyone here on SO could provide some compelling examples to justify their existence.

+6  A: 

A custom allocator can be used to draw memory from a pool instead of the default heap. That's one example among many others.

In most cases this is certainly a premature optimization, but it can be very useful in certain contexts (embedded devices, games, etc.).
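
For illustration (not part of the answer above), here is a minimal sketch of a pool-backed allocator: a C++11-style allocator that carves allocations out of a fixed buffer with a bump pointer. Freed blocks are never recycled and alignment is handled only crudely, so treat it as a sketch of the shape, not a production pool.

    #include <cstddef>
    #include <new>
    #include <vector>

    // Minimal C++11-style allocator: only value_type, allocate and deallocate
    // are required; std::allocator_traits fills in the rest.
    template <typename T>
    struct PoolAllocator {
        typedef T value_type;

        PoolAllocator() {}
        template <typename U> PoolAllocator(const PoolAllocator<U>&) {}

        T* allocate(std::size_t n) {
            // One fixed pool per value_type, carved up with a bump pointer.
            static union Pool { char bytes[64 * 1024]; long double aligner; } pool;
            static std::size_t used = 0;
            std::size_t request = n * sizeof(T);
            if (used + request > sizeof(pool.bytes))
                throw std::bad_alloc();            // pool exhausted
            T* p = reinterpret_cast<T*>(pool.bytes + used);
            used += request;
            return p;
        }

        void deallocate(T*, std::size_t) {
            // A real pool would recycle blocks; this sketch never frees.
        }
    };

    template <typename T, typename U>
    bool operator==(const PoolAllocator<T>&, const PoolAllocator<U>&) { return true; }
    template <typename T, typename U>
    bool operator!=(const PoolAllocator<T>&, const PoolAllocator<U>&) { return false; }

    int main() {
        std::vector<int, PoolAllocator<int> > v;   // vector now allocates from the pool
        v.push_back(42);
    }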

Martin Cote
+4  A: 

I haven't written C++ code with a custom STL allocator, but I can imagine a web server written in C++ that uses a custom allocator for automatic deletion of temporary data needed to respond to an HTTP request. The custom allocator can free all of that temporary data in one go once the response has been generated.

Another possible use case for a custom allocator (which I have used) is writing a unit test to prove that a function's behavior doesn't depend on some part of its input. The custom allocator can fill the allocated memory with any pattern before handing it out.
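
As a rough sketch of that second idea (the names and the plain malloc/free backing are illustrative, not any particular framework's API): an allocator that stamps every freshly allocated block with a known byte pattern, so a test can be run with two different patterns to show the result doesn't depend on uninitialized memory.

    #include <cstddef>
    #include <cstdlib>
    #include <cstring>
    #include <new>
    #include <vector>

    template <typename T>
    struct FillAllocator {
        typedef T value_type;

        unsigned char pattern;
        FillAllocator(unsigned char p = 0xCD) : pattern(p) {}
        template <typename U>
        FillAllocator(const FillAllocator<U>& other) : pattern(other.pattern) {}

        T* allocate(std::size_t n) {
            void* p = std::malloc(n * sizeof(T));
            if (!p) throw std::bad_alloc();
            std::memset(p, pattern, n * sizeof(T));   // stamp the pattern over the new block
            return static_cast<T*>(p);
        }
        void deallocate(T* p, std::size_t) { std::free(p); }
    };

    template <typename T, typename U>
    bool operator==(const FillAllocator<T>& a, const FillAllocator<U>& b) { return a.pattern == b.pattern; }
    template <typename T, typename U>
    bool operator!=(const FillAllocator<T>& a, const FillAllocator<U>& b) { return !(a == b); }

    // e.g. run the code under test once with FillAllocator<int>(0xCD) and once
    // with FillAllocator<int>(0x00) and compare the results.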

pts
+12  A: 

As I mention here, I've seen Intel TBB's custom STL allocator significantly improve performance of a multithreaded app simply by changing a single

std::vector<T>

to

std::vector<T,tbb::scalable_allocator<T> >

(this is a quick and convenient way of switching the allocator to use TBB's nifty thread-private heaps; see page 7 in this document)
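
For completeness, a minimal compilable version of that change (assuming TBB is installed and the program is linked against the tbbmalloc library):

    #include <vector>
    #include <tbb/scalable_allocator.h>

    int main() {
        // Same container, different allocator: element storage is now obtained
        // through TBB's scalable allocator rather than the default heap.
        std::vector<double, tbb::scalable_allocator<double> > v;
        for (int i = 0; i < 1000; ++i)
            v.push_back(i * 0.5);
        return 0;
    }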

timday
Thanks for that second link. The use of allocators to implement thread-private heaps is clever, and it's a good example of custom allocators having a clear advantage in a scenario that isn't resource-limited (embedded devices or consoles).
Naaff
+7  A: 

I'm working on a MySQL storage engine whose code is written in C++. We use a custom allocator to go through MySQL's memory system rather than competing with MySQL for memory. It lets us make sure we're using memory as the user configured MySQL to use it, and not "extra".
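
A rough sketch of that kind of delegation; engine_host_alloc/engine_host_free are hypothetical stand-ins (stubbed with malloc/free so the sketch builds), not the actual MySQL API:

    #include <cstddef>
    #include <cstdlib>
    #include <new>
    #include <vector>

    // Hypothetical hooks into the host server's memory system. A real engine
    // would call whatever accounting-aware routines the server exposes.
    void* engine_host_alloc(std::size_t bytes) { return std::malloc(bytes); }
    void  engine_host_free(void* p)            { std::free(p); }

    template <typename T>
    struct EngineAllocator {
        typedef T value_type;
        EngineAllocator() {}
        template <typename U> EngineAllocator(const EngineAllocator<U>&) {}

        T* allocate(std::size_t n) {
            void* p = engine_host_alloc(n * sizeof(T));   // counted against the server's limits
            if (!p) throw std::bad_alloc();
            return static_cast<T*>(p);
        }
        void deallocate(T* p, std::size_t) { engine_host_free(p); }
    };

    template <typename T, typename U>
    bool operator==(const EngineAllocator<T>&, const EngineAllocator<U>&) { return true; }
    template <typename T, typename U>
    bool operator!=(const EngineAllocator<T>&, const EngineAllocator<U>&) { return false; }

    // Usage: std::vector<int, EngineAllocator<int> > rows;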

Thomas Jones-Low
+11  A: 

One area where custom allocators can be useful is game development, especially on game consoles, which have only a small amount of memory and no swap. On such systems you want tight control over each subsystem, so that one non-critical system can't steal memory from a critical one. Pool allocators and similar schemes can also help reduce memory fragmentation. You can find a long, detailed paper on the topic at:

EASTL -- Electronic Arts Standard Template Library
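
As a rough illustration of the per-subsystem budget idea (the names and numbers are invented for the example), here's an allocator that charges every allocation against a hard cap and fails rather than eating into another subsystem's memory:

    #include <cstddef>
    #include <cstdlib>
    #include <new>
    #include <vector>

    // One budget per subsystem; the allocator charges and refunds Budget::used.
    struct Budget {
        std::size_t cap;
        std::size_t used;
    };

    static Budget audio_budget = { 2 * 1024 * 1024, 0 };   // e.g. 2 MB reserved for audio

    template <typename T>
    struct BudgetAllocator {
        typedef T value_type;

        Budget* budget;
        BudgetAllocator(Budget* b = &audio_budget) : budget(b) {}
        template <typename U>
        BudgetAllocator(const BudgetAllocator<U>& other) : budget(other.budget) {}

        T* allocate(std::size_t n) {
            std::size_t bytes = n * sizeof(T);
            if (budget->used + bytes > budget->cap)
                throw std::bad_alloc();            // refuse to exceed this subsystem's cap
            void* p = std::malloc(bytes);
            if (!p) throw std::bad_alloc();
            budget->used += bytes;
            return static_cast<T*>(p);
        }
        void deallocate(T* p, std::size_t n) {
            budget->used -= n * sizeof(T);
            std::free(p);
        }
    };

    template <typename T, typename U>
    bool operator==(const BudgetAllocator<T>& a, const BudgetAllocator<U>& b) { return a.budget == b.budget; }
    template <typename T, typename U>
    bool operator!=(const BudgetAllocator<T>& a, const BudgetAllocator<U>& b) { return !(a == b); }

    // Usage: std::vector<float, BudgetAllocator<float> > mix_buffer;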

Grumbel
+1 for EASTL link: "Among game developers the most fundamental weakness [of the STL] is the std allocator design, and it is this weakness that was the largest contributing factor to the creation of EASTL."
Naaff
+2  A: 

I'm using custom allocators here; you might even say it was to work around other custom dynamic memory management.

Background: we have replacements for malloc, calloc, free, and the various variants of operator new and delete, and the linker happily makes the STL use them for us. This lets us do things like automatic small-object pooling, leak detection, alloc fill, free fill, padding allocations with sentries, cache-line alignment for certain allocs, and delayed free.

The problem is, we're running in an embedded environment -- there isn't enough memory around to actually do leak detection accounting properly over an extended period. At least, not in the standard RAM -- there's another heap of RAM available elsewhere, through custom allocation functions.

Solution: write a custom allocator that uses the extended heap, and use it only in the internals of the memory leak tracking architecture... Everything else defaults to the normal new/delete overrides that do leak tracking. This keeps the tracker from tracking itself (and provides a bit of extra packing functionality too, since we know the size of tracker nodes).

We also use this to keep function-cost profiling data, for the same reason; writing an entry for each function call and return, as well as for thread switches, can get expensive fast. The custom allocator again gives us small allocations out of a larger debug memory area.
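
A skeleton of what such an allocator might look like; debug_heap_alloc/debug_heap_free and AllocRecord are hypothetical names standing in for the extended-heap routines and the tracker's bookkeeping record (stubbed here purely so the sketch builds):

    #include <cstddef>
    #include <cstdlib>
    #include <functional>
    #include <map>
    #include <new>
    #include <utility>

    // Hypothetical routines that reach the extended (debug) RAM; stubbed with
    // malloc/free so the sketch compiles and links.
    void* debug_heap_alloc(std::size_t bytes) { return std::malloc(bytes); }
    void  debug_heap_free(void* p)            { std::free(p); }

    // The tracker's own containers use this allocator, so their nodes bypass the
    // overridden new/delete and never appear in the leak report themselves.
    template <typename T>
    struct DebugHeapAllocator {
        typedef T value_type;
        DebugHeapAllocator() {}
        template <typename U> DebugHeapAllocator(const DebugHeapAllocator<U>&) {}

        T* allocate(std::size_t n) {
            void* p = debug_heap_alloc(n * sizeof(T));
            if (!p) throw std::bad_alloc();
            return static_cast<T*>(p);
        }
        void deallocate(T* p, std::size_t) { debug_heap_free(p); }
    };

    template <typename T, typename U>
    bool operator==(const DebugHeapAllocator<T>&, const DebugHeapAllocator<U>&) { return true; }
    template <typename T, typename U>
    bool operator!=(const DebugHeapAllocator<T>&, const DebugHeapAllocator<U>&) { return false; }

    struct AllocRecord { std::size_t size; /* call site, timestamp, ... */ };

    // The live-allocation table's own nodes live in the debug heap:
    typedef std::map<void*, AllocRecord, std::less<void*>,
                     DebugHeapAllocator<std::pair<void* const, AllocRecord> > > AllocTable;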

leander