views: 1559
answers: 17

Unless you're programming parts of an OS or an embedded system, are there any reasons to overload the global new and delete operators? I can imagine that for particular classes that are created and destroyed frequently, overloading the memory management functions or introducing an object pool might lower the overhead, but doing these things globally?

Addendum: I've just found a bug in an overloaded delete function -- memory wasn't always freed. And that was in a not-so-memory-critical application. Also, disabling these overloads decreases performance by only ~0.5%.

+8  A: 

UnrealEngine3 overloads global new and delete as part of its core memory management system. There are multiple allocators that provide different features (profiling, performance, etc.), and every allocation needs to go through that system.
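
A minimal sketch (not UE3's actual code) of the shape such a system takes: a single allocator interface behind the global operators, so a profiling or pooling allocator can be swapped in at one choke point. The Allocator/MallocAllocator/gAllocator names are invented for illustration, and the array forms (new[]/delete[]) would need the same treatment.

    #include <cstdlib>
    #include <new>

    struct Allocator {                                  // invented interface name
        virtual void* Alloc(std::size_t size) = 0;
        virtual void  Free(void* ptr) = 0;
        virtual ~Allocator() {}
    };

    struct MallocAllocator : Allocator {                // default pass-through backend
        void* Alloc(std::size_t size) { return std::malloc(size); }
        void  Free(void* ptr)         { std::free(ptr); }
    };

    static MallocAllocator gDefaultAllocator;
    static Allocator* gAllocator = &gDefaultAllocator;  // swap in a profiling/pooling allocator here

    void* operator new(std::size_t size) {
        if (void* p = gAllocator->Alloc(size ? size : 1))
            return p;
        throw std::bad_alloc();
    }

    void operator delete(void* ptr) noexcept {
        if (ptr) gAllocator->Free(ptr);
    }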

Edit: For my own code, I would only ever do it as a last resort. And by that I mean I would almost certainly never use it. But my personal projects are obviously much smaller and have very different requirements.

280Z28
Sure, game development is quite a special area. One would have to overload new/delete globally for, say, applications targeting a special multi-core architecture, etc.
MadH
+6  A: 

Some real-time systems overload them to prevent them from being used after initialization.
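
For illustration, a hedged sketch of how that can look -- a single flag flipped when initialization completes. The names here (gAllocationAllowed, FinishInit) are made up.

    #include <cassert>
    #include <cstdlib>
    #include <new>

    static bool gAllocationAllowed = true;

    void FinishInit() { gAllocationAllowed = false; }   // call once start-up is done

    void* operator new(std::size_t size) {
        // In a debug build this trips the moment anyone allocates after init.
        assert(gAllocationAllowed && "heap allocation after init is forbidden");
        if (void* p = std::malloc(size ? size : 1))
            return p;
        throw std::bad_alloc();
    }

    void operator delete(void* ptr) noexcept {
        std::free(ptr);
    }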

cwap
+22  A: 

The most common reasons to overload new and delete are simply to check for memory leaks and to gather memory usage stats. Note that "memory leak" is usually generalized to memory errors: you can check for things such as double deletes and buffer overruns.

The uses after that are usually custom memory-allocation schemes, such as garbage collection and pooling.

All other cases are just specific things, mentioned in other answers (logging to disk, kernel use).
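
As a rough sketch of the leak-checking case (not a full tracker -- real ones record per-pointer size and callstack info), simply counting allocations and frees and complaining at exit is already enough to catch "memory wasn't always freed" bugs like the one in the question.

    #include <cstdio>
    #include <cstdlib>
    #include <new>

    namespace {
        std::size_t gLiveAllocs = 0;

        struct LeakReporter {
            ~LeakReporter() {
                if (gLiveAllocs != 0)
                    std::fprintf(stderr, "possible leak: %zu allocation(s) never freed\n",
                                 gLiveAllocs);
            }
        } gReporter;                        // destructor runs at program exit
    }

    void* operator new(std::size_t size) {
        void* p = std::malloc(size ? size : 1);
        if (!p) throw std::bad_alloc();
        ++gLiveAllocs;
        return p;
    }

    void operator delete(void* ptr) noexcept {
        if (!ptr) return;
        --gLiveAllocs;
        std::free(ptr);
    }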

GMan
that sounds like a great reason!
MadH
Although this is probably the most common reason/answer, I think vhanda gave a more complete response.
Mike
Did you downvote me for it?
GMan
@Mike, I'd say he gave a _longer_ answer; this one is very good too (and was given before the bounty ;-)
MadH
+3  A: 

You need to overload them when the default new and delete don't work in your environment.

For example, in kernel programming, the default new and delete don't work because they rely on a user-mode library to allocate memory.
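
Purely illustrative sketch: KernelAlloc/KernelFree are hypothetical stand-ins for whatever allocation primitive your kernel environment really provides (a pool or slab allocator, say); the point is just that operator new has to be routed to it because there is no user-mode heap to fall back on. The toy stub bodies exist only so the sketch is self-contained.

    #include <cstddef>

    // Hypothetical kernel primitives; a real kernel build would supply these.
    void* KernelAlloc(std::size_t size);
    void  KernelFree(void* ptr);

    void* operator new(std::size_t size) {
        return KernelAlloc(size);   // kernel code typically has exceptions disabled, too
    }

    void operator delete(void* ptr) noexcept {
        if (ptr) KernelFree(ptr);
    }

    // Toy stubs so this compiles stand-alone: a tiny bump pool that never frees.
    void* KernelAlloc(std::size_t size) {
        static unsigned char pool[1 << 16];
        static std::size_t offset = 0;
        size = (size + 15) & ~std::size_t(15);
        if (offset + size > sizeof pool) return 0;
        void* p = pool + offset;
        offset += size;
        return p;
    }
    void KernelFree(void*) {}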

Edouard A.
As I said, I'm not programming OS kernels anymore ;-)
MadH
A: 

Here is another reason: InformIt C++

fco.javier.sanz
I don't understand the downvote ...
fco.javier.sanz
+2  A: 

From a practical standpoint, it may just be better to override malloc at the system-library level, since operator new will probably be calling it anyway.

On Linux, you can put your own version of malloc in place of the system one, as in this example:

http://developers.sun.com/solaris/articles/lib_interposers.html

In that article, they are trying to collect performance statistics, but you can also detect memory leaks if you override free as well.

Since you are doing this in a shared library with LD_PRELOAD, you don't even need to recompile your application.
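
A rough Linux/glibc sketch of that interposer idea (the library and file names are made up; a real interposer also has to cope with dlsym's own allocations and with logging that may itself call malloc):

    // malloc_hook.cpp
    // Build:  g++ -shared -fPIC -o libmallochook.so malloc_hook.cpp -ldl
    // Run:    LD_PRELOAD=./libmallochook.so ./your_app
    #include <dlfcn.h>
    #include <cstddef>
    #include <cstdio>

    extern "C" void* malloc(std::size_t size) {
        typedef void* (*malloc_fn)(std::size_t);
        static malloc_fn real_malloc =
            reinterpret_cast<malloc_fn>(dlsym(RTLD_NEXT, "malloc"));   // the libc version
        void* p = real_malloc(size);
        std::fprintf(stderr, "malloc(%zu) = %p\n", size, p);           // crude logging
        return p;
    }

    extern "C" void free(void* ptr) {
        typedef void (*free_fn)(void*);
        static free_fn real_free =
            reinterpret_cast<free_fn>(dlsym(RTLD_NEXT, "free"));
        std::fprintf(stderr, "free(%p)\n", ptr);
        real_free(ptr);
    }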

Juan
@Juan, will interposers work on Windows?
sameer karjatkar
I asked the question here, and it looks like there is a way: http://stackoverflow.com/questions/1210533/interposers-on-windows
Juan
+2  A: 

I've seen it done in a system that, for 'security'* reasons, was required to write over all memory it used on de-allocation. The approach was to allocate a few extra bytes at the start of each block to hold the size of the overall block; on delete, the whole block would then be overwritten with zeros.

This had a number of problems, as you can probably imagine, but it did work (mostly) and saved the team from reviewing every single memory allocation in a reasonably large existing application.

Certainly not saying that it is a good use, but it is probably one of the more imaginative ones out there...

* sadly it wasn't so much about actual security as the appearance of security...
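
A rough sketch of that scheme, assuming the size prefix is what keeps delete informed (alignment details and the array forms are glossed over, much like the problems the team ran into):

    #include <cstdlib>
    #include <cstring>
    #include <new>

    namespace {
        const std::size_t kPrefix = 16;   // room for the stored size; keeps malloc's alignment
    }

    void* operator new(std::size_t size) {
        unsigned char* raw = static_cast<unsigned char*>(std::malloc(kPrefix + size));
        if (!raw) throw std::bad_alloc();
        std::memcpy(raw, &size, sizeof size);   // stash the block size in front
        return raw + kPrefix;
    }

    void operator delete(void* ptr) noexcept {
        if (!ptr) return;
        unsigned char* raw = static_cast<unsigned char*>(ptr) - kPrefix;
        std::size_t size = 0;
        std::memcpy(&size, raw, sizeof size);
        std::memset(ptr, 0, size);              // scrub the user data before releasing it
        std::free(raw);
    }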

macbutch
That one is actually reasonable. In some (paranoid) systems you are required to overwrite the freed memory a few times :-)
MadH
+1  A: 

NO!

I've done it once because I was working with a really really old C++ compiler (You don't wanna know, trust me!) and it didn't offer a way to overload a class's operator new.

Otherwise, I think it could be done if you want to:

  1. Implement some kind of garbage collector. Although I would again try to overload a class's operator new instead of doing it globally. That way I can choose which classes are managed and which aren't.

  2. Gather usage statistics. I've heard/read somewhere (can't remember where) that if you were working on a specialized system you could overload the global operator new and compute which sizes of memory blocks are most commonly allocated, and then use that information to optimize something.

  3. Finding memory leaks when you don't want to use something like Valgrind. Then I guess it would make sense to keep a linked list of all allocated blocks and remove them from it in operator delete.

The ONLY other reason I can think of is some kind of security, but macbutch has already mentioned that. All of the above reasons could easily be covered using a class's operator new (see the sketch below), so I really don't think there is any need.
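
For completeness, a tiny sketch of that class-level alternative (Widget is just a made-up example type): only Widget's allocations are intercepted, and the global operators stay untouched.

    #include <cstdio>
    #include <cstdlib>
    #include <new>

    class Widget {
    public:
        static void* operator new(std::size_t size) {
            void* p = std::malloc(size);
            if (!p) throw std::bad_alloc();
            std::printf("Widget::new  %zu bytes at %p\n", size, p);
            return p;
        }
        static void operator delete(void* ptr) noexcept {
            std::printf("Widget::delete %p\n", ptr);
            std::free(ptr);
        }

        int data[4];   // some payload so the object has a size
    };

    int main() {
        Widget* w = new Widget;   // goes through Widget::operator new
        int*    i = new int(42);  // untouched: uses the global operator new
        delete w;
        delete i;
    }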

vhanda
+1 cause this hits a few points that GMan and others missed.
Mike
+1  A: 

Photoshop plugins written in C++ should override operator new so that they obtain memory via Photoshop.

Ben Lings
+2  A: 

I've done it with memory-mapped files so that data written to the memory is automatically also saved to disk.
It's also used to return memory at a specific physical address if you have memory-mapped IO devices, or sometimes if you need to allocate a certain block of contiguous memory.
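
Roughly what the memory-mapped-file variant can look like as a class-level overload on a POSIX system (the file name, arena size, and the missing compaction/reclaim logic are all illustrative assumptions):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstddef>
    #include <new>

    namespace {
        const std::size_t kArenaSize = 1 << 20;   // 1 MiB file-backed arena (illustrative)
        unsigned char*    gBase   = 0;
        std::size_t       gOffset = 0;
    }

    class Record {
    public:
        static void* operator new(std::size_t size) {
            if (!gBase) {                               // lazily map the backing file
                int fd = ::open("records.bin", O_RDWR | O_CREAT, 0644);
                if (fd < 0 || ::ftruncate(fd, kArenaSize) != 0) throw std::bad_alloc();
                void* m = ::mmap(0, kArenaSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
                ::close(fd);                            // the mapping outlives the descriptor
                if (m == MAP_FAILED) throw std::bad_alloc();
                gBase = static_cast<unsigned char*>(m);
            }
            size = (size + 15) & ~std::size_t(15);      // keep allocations aligned
            if (gOffset + size > kArenaSize) throw std::bad_alloc();
            void* p = gBase + gOffset;
            gOffset += size;
            return p;                                   // whatever is written here lands in records.bin
        }
        static void operator delete(void*) noexcept {}  // the arena is never compacted in this sketch

        int    id;
        double value;
    };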

But 99% of the time it's done as a debugging feature to log how often, where, and when memory is being allocated and released.

Martin Beckett
Thanks. Writing to the file might indeed be useful during debugging. Allocating memory at a specific physical address again applies only to embedded systems and such, not to general-purpose software.
MadH
+2  A: 

Overloading new & delete makes it possible to add a tag to your memory allocations. I tag allocations per system, per control, or per piece of middleware. I can then view, at runtime, how much each uses. Maybe I want to see the usage of the parser separated from the UI, or how much a piece of middleware is really using!

You can also use it to put guard bands around the allocated memory. If/when your app crashes, you can take a look at the address. If you see the contents as "0xABCDABCD" (or whatever you choose as a guard), you are accessing memory you don't own.

Perhaps after calling delete you can fill this space with a similarly recognizable pattern. I believe Visual Studio does something similar in debug builds. Doesn't it fill uninitialized memory with 0xCDCDCDCD?
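
A hedged sketch of the guard-band idea (the pattern, prefix size, and use of assert are arbitrary choices; a shipping version would also cover new[]/delete[] and report rather than assert):

    #include <cassert>
    #include <cstdlib>
    #include <cstring>
    #include <new>

    namespace {
        const unsigned int kGuard  = 0xABCDABCDu;   // arbitrary pattern, as in the answer
        const std::size_t  kPrefix = 16;            // stored size + front guard; keeps malloc's alignment
    }

    void* operator new(std::size_t size) {
        unsigned char* raw = static_cast<unsigned char*>(
            std::malloc(kPrefix + size + sizeof kGuard));
        if (!raw) throw std::bad_alloc();
        std::memcpy(raw, &size, sizeof size);                               // remember the size
        std::memcpy(raw + kPrefix - sizeof kGuard, &kGuard, sizeof kGuard); // guard just before the data
        std::memcpy(raw + kPrefix + size, &kGuard, sizeof kGuard);          // guard just after the data
        return raw + kPrefix;
    }

    void operator delete(void* ptr) noexcept {
        if (!ptr) return;
        unsigned char* raw = static_cast<unsigned char*>(ptr) - kPrefix;
        std::size_t size = 0;
        std::memcpy(&size, raw, sizeof size);
        assert(std::memcmp(raw + kPrefix - sizeof kGuard, &kGuard, sizeof kGuard) == 0
               && "front guard clobbered (underrun or wild pointer)");
        assert(std::memcmp(raw + kPrefix + size, &kGuard, sizeof kGuard) == 0
               && "rear guard clobbered (buffer overrun)");
        std::free(raw);
    }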

Finally, if you have fragmentation issues you could use it to redirect allocations to a block allocator. I am not sure how often this is really a problem, though.

Chris Masterton
+2  A: 

It's actually pretty common for games to allocate one huge chunk of memory from the system and then provide custom allocators via overloaded new and delete. One big reason is that consoles have a fixed memory size, making both leaks and fragmentation large problems.

Usually (at least on a closed platform) the default heap operations come with a lack of control and a lack of introspection. For many applications this doesn't matter, but for games to run stably in fixed-memory situations the added control and introspection are both extremely important.

Dan Olson
+9  A: 

In addition to the other important uses mentioned here, like memory tagging, it's also the only way to force all allocations in your app to go through fixed-block allocation, which has enormous implications for performance and fragmentation.

For example, you may have a series of memory pools with fixed block sizes. Overriding global new lets you direct all 61-byte allocations to, say, the pool with 64-byte blocks, all 768-1024-byte allocs to the 1024-byte-block pool, all those above that to the 2048-byte-block pool, and anything larger than 8 kB to the general ragged heap.

Because fixed-block allocators are much faster and less prone to fragmentation than allocating willy-nilly from the heap, this lets you force even crappy 3rd-party code to allocate from your pools and not poop all over the address space.
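
In outline, the routing looks something like the sketch below. FixedBlockPool here is only a stand-in that defers to malloc; a real one would carve blocks out of pre-reserved slabs and would know how to hand a block back to the right pool on delete.

    #include <cstdlib>
    #include <new>

    struct FixedBlockPool {
        std::size_t block_size;
        void* Alloc()       { return std::malloc(block_size); }   // placeholder body
        void  Free(void* p) { std::free(p); }
    };

    static FixedBlockPool gPool64   = { 64 };
    static FixedBlockPool gPool1024 = { 1024 };
    static FixedBlockPool gPool2048 = { 2048 };

    void* operator new(std::size_t size) {
        void* p = 0;
        if      (size <= 64)   p = gPool64.Alloc();     // e.g. a 61-byte request lands here
        else if (size <= 1024) p = gPool1024.Alloc();
        else if (size <= 2048) p = gPool2048.Alloc();
        else                   p = std::malloc(size);   // large requests: general heap
        if (!p) throw std::bad_alloc();
        return p;
    }

    void operator delete(void* ptr) noexcept {
        // A real implementation must know which pool a pointer came from,
        // e.g. via a header or an address range check; omitted in this sketch.
        std::free(ptr);
    }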

This is often done in systems that are time- and space-critical, such as games. 280Z28, Meeh, and Dan Olson have described why.

Crashworks
Thanks for detailed answer!
MadH
NB: Leander explores this in much greater depth below.
Crashworks
+2  A: 

It can be a nice trick for your application to be able to respond to low-memory conditions with something other than a random crash. To do this, your new can be a simple proxy for the default new that catches its failures, frees up some stuff, and tries again.

The simplest technique is to reserve a blank block of memory at start-up time for that very purpose. You may also have some cache you can tap into - the idea is the same.

When the first allocation failure kicks in, you still have time to warn your user about the low memory conditions ("I'll be able to survive a little longer, but you may want to save your work and close some other applications"), save your state to disk, switch to survival mode, or whatever else makes sense in your context.
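
One way to get that behaviour without even writing a proxy operator new is the standard new-handler hook; the sketch below (the reserve size and message are arbitrary) releases a rainy-day block on the first failure so the pending allocation can be retried.

    #include <cstdio>
    #include <cstdlib>
    #include <new>

    namespace {
        void* gReserve = 0;                          // rainy-day block, grabbed at start-up

        void OutOfMemoryHandler() {
            if (gReserve) {
                std::free(gReserve);                 // give the heap some breathing room
                gReserve = 0;
                std::fprintf(stderr, "Low on memory: save your work and close other apps.\n");
                return;                              // operator new will now retry the allocation
            }
            std::set_new_handler(0);                 // reserve already spent: let new fail for real
        }
    }

    int main() {
        gReserve = std::malloc(4 * 1024 * 1024);     // 4 MiB survival buffer (size is arbitrary)
        std::set_new_handler(OutOfMemoryHandler);
        // ... application code ...
        return 0;
    }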

Nicolas Simonet
+28  A: 

We overload the global new and delete operators where I work for many reasons:

  • pooling all small allocations -- decreases overhead, decreases fragmentation, can increase performance for small-alloc-heavy apps
  • framing allocations with a known lifetime -- ignore all the frees until the very end of this period, then free all of them together (admittedly we do this more with local operator overloads than global)
  • alignment adjustment -- to cacheline boundaries, etc
  • alloc fill -- helping to expose usage of uninitialized variables
  • free fill -- helping to expose usage of previously deleted memory
  • delayed free -- increasing the effectiveness of free fill, occasionally increasing performance
  • sentinels or fenceposts -- helping to expose buffer overruns, underruns, and the occasional wild pointer
  • redirecting allocations -- to account for NUMA, special memory areas, or even to keep separate systems separate in memory (for e.g. embedded scripting languages or DSLs)
  • garbage collection or cleanup -- again useful for those embedded scripting languages
  • heap verification -- you can walk through the heap data structure every N allocs/frees to make sure everything looks ok
  • accounting, including leak tracking and usage snapshots/statistics (stacks, allocation ages, etc)

The idea of new/delete accounting is really flexible and powerful: you can, for example, record the entire callstack for the active thread whenever an alloc occurs, and aggregate statistics about that. You could ship the stack info over the network if you don't have space to keep it locally for whatever reason. The types of info you can gather here are only limited by your imagination (and performance, of course).
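
As a flavour of the accounting idea, here is a hand-wavy glibc-only sketch that snapshots the callstack of every allocation into a fixed ring buffer (no aggregation, no thread safety, no matching of frees -- all things a real system would need):

    #include <execinfo.h>   // backtrace(): glibc-specific
    #include <cstdlib>
    #include <new>

    namespace {
        const int kMaxFrames  = 16;
        const int kMaxRecords = 4096;

        struct AllocRecord {
            void*       ptr;
            std::size_t size;
            void*       stack[kMaxFrames];
            int         depth;
        };

        AllocRecord gRecords[kMaxRecords];   // fixed storage: the hook itself never heap-allocates
        unsigned    gNext = 0;               // not thread-safe; a real system would fix that
    }

    void* operator new(std::size_t size) {
        void* p = std::malloc(size ? size : 1);
        if (!p) throw std::bad_alloc();

        AllocRecord& r = gRecords[gNext++ % kMaxRecords];
        r.ptr   = p;
        r.size  = size;
        r.depth = backtrace(r.stack, kMaxFrames);   // snapshot the caller's stack
        return p;
    }

    void operator delete(void* ptr) noexcept {
        std::free(ptr);   // a fuller version would mark the matching record as freed
    }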

We use global overloads because it's convenient to hang lots of common debugging functionality there, as well as make sweeping improvements across the entire app, based on the statistics we gather from those same overloads.

We still do use custom allocators for individual types too; in many cases the speedup or capabilities you can get by providing custom allocators for e.g. a single point-of-use of an STL data structure far exceeds the general speedup you can get from the global overloads.

Take a look at some of the allocators and debugging systems that are out there for C/C++ and you'll rapidly come up with these and other ideas.

(One old but seminal book is Writing Solid Code, which discusses many of the reasons you might want to provide custom allocators in C, most of which are still very relevant.)

Obviously if you can use any of these fine tools you will want to do so rather than rolling your own.

But there are situations in which it is faster, easier, less of a business/legal hassle, or just more instructive to dig in and write a global overload yourself -- or in which nothing is available for your platform yet.

leander
Nice specific examples.
quark
A: 

The most common use case is probably leak checking.

Another use case is when you have specific requirements for memory allocation in your environment that are not satisfied by the standard library you are using -- for instance, when you need to guarantee that memory allocation is lock-free in a multi-threaded environment.
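
Purely as a sketch of one lock-free approach (not something the standard library gives you): each thread bump-allocates from its own thread_local arena, so the common path never touches shared state. Freed memory is simply not reused here, which only makes sense for short-lived or phase-based allocation patterns, and the arena size is arbitrary.

    #include <cstddef>
    #include <new>

    void* operator new(std::size_t size) {
        // One arena per thread, so no locks and no contention on the hot path.
        static thread_local unsigned char arena[1 << 16];   // 64 KiB per thread, purely illustrative
        static thread_local std::size_t   offset = 0;

        size = (size + 15) & ~std::size_t(15);               // keep 16-byte alignment
        if (offset + size > sizeof arena)
            throw std::bad_alloc();                          // arena exhausted
        void* p = arena + offset;
        offset += size;
        return p;
    }

    void operator delete(void*) noexcept {}   // nothing is reused; acceptable only for phase-based allocation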

Tobias
locking is a good reason, thanks!
MadH
A: 

As many have already stated, this is usually done in performance-critical applications, or to control memory alignment or track memory usage. Games frequently use custom memory managers, especially when targeting specific platforms/consoles.

Here is a pretty good blog post about one way of doing this and some reasoning.

Runeborg