In the past, when I've worked on long-running C++ daemons, I've had to deal with heap fragmentation. Tricks like keeping a pool for my large allocations were necessary to keep from running out of contiguous heap space.
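
For concreteness, here's roughly the kind of thing I mean - a minimal, illustrative sketch of a fixed-size block pool (the class name and sizes are made up), not the exact code from my daemons:

    // Minimal sketch of a fixed-size block pool (names illustrative).
    // One contiguous allocation up front; blocks are recycled through
    // a free list, so the general-purpose heap never sees the churn.
    #include <cstddef>
    #include <vector>

    class BlockPool {
    public:
        // block_size should be a multiple of alignof(std::max_align_t)
        // if the blocks will hold arbitrary types.
        BlockPool(std::size_t block_size, std::size_t block_count)
            : block_size_(block_size), storage_(block_size * block_count) {
            free_list_.reserve(block_count);
            for (std::size_t i = 0; i < block_count; ++i)
                free_list_.push_back(storage_.data() + i * block_size_);
        }

        void* acquire() {
            if (free_list_.empty()) return nullptr;  // pool exhausted
            void* p = free_list_.back();
            free_list_.pop_back();
            return p;
        }

        void release(void* p) { free_list_.push_back(static_cast<char*>(p)); }

    private:
        std::size_t block_size_;
        std::vector<char> storage_;     // the single contiguous region
        std::vector<char*> free_list_;  // blocks available for reuse
    };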

Is this still an issue with a 64-bit address space? Performance is not a concern for me, so I would prefer to simplify my code and not deal with things like buffer pools any more. Does anyone have any experience or stories about this issue? I'm using Linux, but I imagine many of the same issues apply to Windows.

Thanks,

A: 

If your process genuinely needs gigabytes of virtual address space, then upgrading to 64-bit really does remove the need for those workarounds: the address space becomes so much larger than any realistic heap that scattered free blocks no longer starve large allocations.

But it's worth working out how much memory you expect your process to be using. If it's only in the region of a gigabyte or less, there's no way even severe fragmentation would exhaust a 32-bit address space - if you're running out, a memory leak is the more likely culprit.
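
On Linux (which the question mentions), a quick way to see what the process is actually using is to read /proc/self/status. A minimal sketch, using the kernel's VmSize and VmRSS fields:

    // Minimal sketch: print this process's virtual size and resident
    // set on Linux by reading /proc/self/status.
    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
        std::ifstream status("/proc/self/status");
        std::string line;
        while (std::getline(status, line)) {
            // VmSize = total virtual address space in use,
            // VmRSS  = pages actually resident in RAM.
            if (line.compare(0, 7, "VmSize:") == 0 ||
                line.compare(0, 6, "VmRSS:") == 0)
                std::cout << line << '\n';
        }
    }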

(Windows is more restrictive, by the way: it reserves the upper 2 GB of each 32-bit process's 4 GB address space for the kernel by default, which is an impolite amount.)

James Hopkin
+2  A: 

Heap fragmentation is just as much of an issue under 64-bit as under 32-bit. If you make lots of allocation requests with varying lifetimes, you are going to get a fragmented heap. Unfortunately, 64-bit operating systems don't really help with this: they still can't shuffle the small bits of free memory around to make larger contiguous blocks, because moving allocations would invalidate every pointer your program holds.
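
Here's a toy illustration of the mechanism (illustrative only - whether the final request succeeds depends on your allocator; glibc, for instance, hands very large requests to mmap):

    // Interleave long- and short-lived allocations, then free the
    // short-lived ones: the freed space is plentiful but scattered,
    // so no single hole is large. On a fragmented 32-bit heap the
    // big request below can fail even though memory is "free".
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    int main() {
        std::vector<void*> small, large;
        for (int i = 0; i < 1000; ++i) {
            small.push_back(std::malloc(64));         // long-lived
            large.push_back(std::malloc(64 * 1024));  // short-lived
        }
        for (void* p : large) std::free(p);  // holes now interleaved
        void* big = std::malloc(512u * 1024 * 1024);
        std::printf("512 MB block: %s\n", big ? "ok" : "failed");
        std::free(big);
        for (void* p : small) std::free(p);
    }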

If you want to deal with heap fragmentation, you still have to use the same old tricks.

The only way a 64-bit OS could help here is if the address space is 'large enough' relative to your allocation pattern that you would never fragment it badly enough to matter.

Michael Kohne
Well, if it takes a week to fragment my 32-bit space, I'd say a 64-bit space is "large enough" that I'll never fragment it - assuming the OS actually gives processes the full virtual space. So I guess, as usual, the answer is "it depends on your OS and your app"...
Steve Jessop