views: 332
answers: 4
Possible Duplicate:
Any reason to overload global new and delete?

In what cases does it make perfect sense to overload operator new?

I heard you do it for classes that are allocated with new very frequently. Can you give an example?

And are there other cases where you would want to overload operator new?

Update: Thanks for all the answers so far. Could someone give a short code example? That's what I meant when I asked about an example above. Something like: Here is a small toy class, and here is a working operator new for that class.

+2  A: 

The best case I found was to prevent heap fragmentation by providing several heaps of fixed-size blocks. That is, you create a heap that consists entirely of 4-byte blocks, another of 8-byte blocks, etc.

This works better than the default 'all in one' heap because you can reuse a block knowing that your allocation will fit in the first free block, without having to check or walk the heap looking for free space of the right size.

The disadvantage is that you use up more memory: if you have a 4-byte heap and an 8-byte heap and want to allocate 6 bytes, you're going to have to put it in the 8-byte heap, wasting 2 bytes. Nowadays this is hardly a problem (especially when you consider the overhead of alternative schemes).

You can optimise this: if you have a lot of allocations of a particular size to make, you can create a heap of that exact size. Personally, I think wasting a few bytes isn't a problem (e.g. if you are allocating a lot of 7-byte objects, using an 8-byte heap isn't much of a problem).

We did this for a very high-performance system and it worked wonderfully: it dramatically reduced our performance issues due to allocation and heap fragmentation, and was entirely transparent to the rest of the code.
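
Since the update asks for code: below is a minimal sketch of a per-class overload built on this idea. The FixedPool helper, the Particle class, and all sizes are invented for illustration; it is single-threaded, never grows the pool, and a real version would also have to guarantee that the blocks are suitably aligned.

    #include <cstddef>
    #include <new>

    // Toy fixed-size block pool: all blocks are BlockSize bytes, freed blocks
    // are chained into a free list, and exhaustion falls back to the global heap.
    template <std::size_t BlockSize, std::size_t BlockCount>
    class FixedPool {
    public:
        FixedPool() : freeList(0) {
            for (std::size_t i = 0; i < BlockCount; ++i)    // chain all blocks up front
                release(storage + i * BlockSize);
        }

        void* acquire() {
            if (!freeList)
                return ::operator new(BlockSize);           // pool exhausted: fall back
            void* block = freeList;
            freeList = *static_cast<void**>(block);         // pop the free list
            return block;
        }

        void release(void* p) {
            unsigned char* q = static_cast<unsigned char*>(p);
            if (q < storage || q >= storage + sizeof(storage)) {
                ::operator delete(p);                       // came from the fallback path
                return;
            }
            *static_cast<void**>(p) = freeList;             // push back onto the free list
            freeList = p;
        }

    private:
        unsigned char storage[BlockSize * BlockCount];      // NB: alignment not handled here
        void* freeList;
    };

    // Toy class that is allocated very often and draws its memory from the pool.
    class Particle {
    public:
        static void* operator new(std::size_t size);
        static void operator delete(void* p, std::size_t size);
    private:
        float x, y, z, life;
    };

    static FixedPool<sizeof(Particle), 1024> particlePool;  // one shared pool of Particle-sized blocks

    void* Particle::operator new(std::size_t size) {
        if (size != sizeof(Particle))                       // e.g. a derived class
            return ::operator new(size);
        return particlePool.acquire();
    }

    void Particle::operator delete(void* p, std::size_t size) {
        if (size != sizeof(Particle)) { ::operator delete(p); return; }
        particlePool.release(p);
    }

Callers just write new Particle and delete p as usual; the pool is invisible to the rest of the code.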

gbjbaanb
Not sure what operating system you tried this on. But the Windows heap manager already uses this approach. I'd be surprised if others don't do it this way.
Hans Passant
C++ memory management is optimised for small blocks that appear and disappear quickly. Trying to re-optimize it is likely not going to work for your average programmer. Also, the description you provide above is a very common scheme used by nearly all memory management systems, so re-implementing it will provide no benefit.
Martin York
Up to what size does the small block allocation optimization work, in general? I'm interested in recent versions of GCC in particular.
Emile Cormier
Found my answer (for Linux) in ptmalloc -> malloc.c. It says "For small (<= 64 bytes by default) requests, it is a caching allocator, that maintains pools of quickly recycled chunks."
Emile Cormier
A: 

You do it in cases where you want to control how memory is allocated for some object. For example, when you have small objects that are only ever allocated on the heap, you could reserve space for 1k objects up front and have operator new hand out that space until it is used up.
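
A rough sketch of that idea, using an invented Event class: space for 1k objects is reserved up front and operator new hands out slots sequentially until the slab is exhausted. To keep the example short, slots are never recycled, it is not thread-safe, and the slab's alignment is left to the implementation.

    #include <cstddef>
    #include <new>

    class Event {
    public:
        static void* operator new(std::size_t size);
        static void operator delete(void* p, std::size_t size);
    private:
        int    type;
        double timestamp;
    };

    namespace {
        const std::size_t SlotCount = 1024;
        unsigned char slab[SlotCount * sizeof(Event)];  // raw storage; real code should align it too
        std::size_t   used = 0;
    }

    void* Event::operator new(std::size_t size) {
        if (size == sizeof(Event) && used < SlotCount) {
            void* slot = slab + used * sizeof(Event);   // next unused slot
            ++used;
            return slot;
        }
        return ::operator new(size);                    // slab full or unexpected size
    }

    void Event::operator delete(void* p, std::size_t) {
        unsigned char* q = static_cast<unsigned char*>(p);
        if (q < slab || q >= slab + sizeof(slab))       // only fallback allocations are freed;
            ::operator delete(p);                       // slab slots are simply abandoned
    }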

daramarak
+2  A: 

Some reasons to overload operator new

  1. Tracking and profiling of memory use, and detecting memory leaks (a minimal sketch of this is below)
  2. To create object pools (say, for a particle system) to optimize memory usage
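
For the tracking/leak-detection case, a minimal per-class sketch might look like this (the Texture class and its counters are invented; real trackers usually also capture file and line via a macro):

    #include <cstddef>
    #include <cstdio>
    #include <new>

    // Toy class that counts its own heap allocations so leaks show up at shutdown.
    class Texture {
    public:
        static void* operator new(std::size_t size) {
            ++liveObjects;
            bytesInUse += size;
            return ::operator new(size);
        }

        static void operator delete(void* p, std::size_t size) {
            if (!p) return;                  // deleting null is a no-op
            --liveObjects;
            bytesInUse -= size;
            ::operator delete(p);
        }

        // Call this at shutdown: anything still counted was leaked.
        static void reportLeaks() {
            if (liveObjects != 0)
                std::printf("Texture leak: %lu objects, %lu bytes still live\n",
                            (unsigned long)liveObjects, (unsigned long)bytesInUse);
        }

    private:
        static std::size_t liveObjects;
        static std::size_t bytesInUse;
        unsigned char pixels[4096];          // placeholder payload
    };

    std::size_t Texture::liveObjects = 0;
    std::size_t Texture::bytesInUse  = 0;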
Extrakun
+5  A: 

Some reasons to overload per class:
1. Instrumentation, i.e. tracking allocations, audit trails on the caller such as file, line, and stack.
2. Optimised allocation routines, i.e. memory pools, fixed-block allocators.
3. Specialised allocators, i.e. a contiguous allocator for 'allocate once on startup' objects - frequently used in games programming for memory that needs to persist for the whole game, etc.
4. Alignment. This is a big one on most non-desktop systems, especially games consoles, where you frequently need to allocate on e.g. 16-byte boundaries (a sketch of this is below).
5. Specialised memory stores (again, mostly non-desktop systems) such as non-cacheable memory or non-local memory.
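
For the alignment case, a portable sketch using an invented Vec4 type: over-allocate, round up to the boundary, and stash the original pointer just below the returned block so operator delete can recover it. On a C++17 compiler you could rely on the aligned operator new overloads instead, and a real class would also need matching operator new[]/delete[].

    #include <cstddef>
    #include <cstdint>
    #include <cstdlib>
    #include <new>

    // Toy SIMD-friendly type that must live on a 16-byte boundary.
    struct Vec4 {
        float x, y, z, w;

        static void* operator new(std::size_t size) {
            // Room for the payload, up to 15 bytes of padding, and the stashed raw pointer.
            void* raw = std::malloc(size + 15 + sizeof(void*));
            if (!raw)
                throw std::bad_alloc();
            std::uintptr_t base    = reinterpret_cast<std::uintptr_t>(raw) + sizeof(void*);
            std::uintptr_t aligned = (base + 15) & ~static_cast<std::uintptr_t>(15);
            reinterpret_cast<void**>(aligned)[-1] = raw;   // remember where malloc's block starts
            return reinterpret_cast<void*>(aligned);
        }

        static void operator delete(void* p) {
            if (p)
                std::free(static_cast<void**>(p)[-1]);     // free the original malloc'd block
        }
    };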

zebrabox