I'm developing a VC++ NT service that is meant to operate continuously for many months. It makes intensive use of the VC++ runtime heap, so heap fragmentation could eventually cause it to malfunction (believing it is out of memory).

What tests can I run on my service to estimate how prone it is to heap fragmentation?

+2  A: 

I guess the best way would be to write your own memory manager (or buy one) that offers this data. Any other way would change the heap itself and thus invalidate the result.

A strategy that is easier to implement is to allocate memory blocks of different sizes and wait for a failure. I don't think that's a good way, but as a rough rule: the larger the block size that did not fail, the less the fragmentation. Depending on the memory manager, though, allocating the probe block can itself change the result. A sketch of such a probe follows below.
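For what it's worth, a minimal sketch of such a probe, assuming the VC++ CRT's malloc (the upper bound and the binary search are my own illustrative choices, and since the probe allocations themselves disturb the heap, treat the result as a rough indicator only):

#include <cstdlib>

// Binary-search for the largest single block malloc can currently
// satisfy, up to upperBound bytes. A shrinking result over time,
// with total usage roughly constant, hints at growing fragmentation.
// (Heap success isn't strictly monotonic in size, so this is an
// approximation.)
size_t LargestAllocatableBlock(size_t upperBound)
{
    size_t lo = 0, hi = upperBound;
    while (lo < hi) {
        size_t mid = lo + (hi - lo + 1) / 2;
        if (void* p = std::malloc(mid)) {
            std::free(p);
            lo = mid;      // a block of mid bytes still fits
        } else {
            hi = mid - 1;  // no contiguous run of mid bytes available
        }
    }
    return lo;
}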


Edit: I found a link about the slab allocator (thanks for the comment) that shows such statistics. It's in German, though, and the English version of the article does not contain as much information; use Babelfish for translation.

http://de.wikipedia.org/wiki/Slab_allocator (babelfish version)

http://www.usenix.org/event/usenix01/full_papers/bonwick/bonwick.pdf

Tobias Langner
+1 slab allocator.
sixlettervariables
A: 

I agree with Tobias - making your own memory manager is an excellent way to do this. I know only a few developers I would trust to write this kind of code though...

Another possibility is to do your own sort of garbage collection/consolidation of your objects every now and then, at low load; i.e. your service can be inactive for a while while it "defragments" the memory it uses. But I am not sure you can guarantee the behavior you want without your own memory management.

Tim
+1  A: 

You've gotten a couple of answers that talk about how to prevent problems from heap fragmentation, but neither really addressed your question directly. Nearly the only way to estimate how likely the service is to suffer from fragmentation is to simulate lots of use and measure the fragmentation you get.

Since it's an NT service, simulating months of use mostly consists of making a lot of requests in a hurry. Chances are you can make requests faster than it normally receives them, so you can probably simulate several months' worth of requests in only a few hours, and quite possibly even less (depending on the rate at which you normally expect to receive requests).

Once you've simulated months' worth of work (or even as you're doing so), you then need to look at the heap to see how much fragmentation you're getting. This isn't easy, but it's normally possible. Start by injecting a thread into the service process (Googling "thread injection" or something of that order should turn up a fair amount of information). Then walk the heap, looking in particular for blocks that are free but too small to satisfy most requests. Assuming you're using MS VC++, you can walk the heap with _heapwalk, which reports the address, size, and status (free or in-use) of each block in the heap; a sketch follows below.
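A minimal sketch of such a walk (the small-block threshold is an illustrative choice, and in a live service you would want the other threads quiesced while walking):

#include <malloc.h>
#include <cstdio>

// Walk the CRT heap and tally free blocks, flagging small free
// fragments that are unlikely to satisfy typical requests.
void ReportFragmentation(size_t smallThreshold)
{
    _HEAPINFO info = { 0 };  // null _pentry starts at the first block
    size_t freeBlocks = 0, freeBytes = 0, smallFree = 0;

    int status;
    while ((status = _heapwalk(&info)) == _HEAPOK) {
        if (info._useflag == _FREEENTRY) {
            ++freeBlocks;
            freeBytes += info._size;
            if (info._size < smallThreshold)
                ++smallFree;
        }
    }
    if (status != _HEAPEND && status != _HEAPEMPTY)
        std::printf("heap walk aborted: heap appears damaged\n");

    std::printf("%Iu free blocks, %Iu free bytes, %Iu below %Iu bytes\n",
                freeBlocks, freeBytes, smallFree, smallThreshold);
}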

One last detail: for this to produce meaningful results, both the executable AND the DLL containing your injected thread have to link against the DLL version of the run-time library (with cl.exe, that means /MD or /MDd rather than /MT). That way there is one CRT heap for the whole process, so your injected thread will walk the heap being used by your service. If you link the standard library statically, the DLL and the service will each have their own heap: the DLL will walk its own heap, which tells you nothing about the heap being used by the service process.

Jerry Coffin
A: 

I'm sure there are tools for Windows that can show you the status of the heap, but you should nevertheless develop your service with this problem in mind.

First you should understand what allocations you perform. I think the simplest way to do this is to override the new and delete operators: in your overrides, collect statistics about the allocations, then call your compiler's default allocation routines.

The minimum statistics to collect, in my opinion, are the number of allocations in each common block-size range,

e.g. blocks of 0 to 15 bytes, blocks of 16 to 31 bytes, blocks of 32 to 47 bytes, and so on.

You can also count the number of sequential allocations in each block-size range; a sketch of such counting overrides follows below.
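A minimal sketch of such overrides, assuming C++11's std::atomic for thread-safe counters (on the VC++ of that era you could use InterlockedIncrement instead; the bucket width and count are illustrative choices, and operator new[]/delete[] would need the same treatment):

#include <atomic>
#include <cstdlib>
#include <new>

static const size_t kBucketWidth = 16;
static const size_t kBuckets = 64;  // sizes >= 1 KB share the last bucket
static std::atomic<size_t> g_counts[kBuckets];

void* operator new(size_t size)
{
    size_t bucket = size / kBucketWidth;
    if (bucket >= kBuckets)
        bucket = kBuckets - 1;
    ++g_counts[bucket];  // tally this allocation in its size range

    if (void* p = std::malloc(size))
        return p;
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept
{
    std::free(p);
}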

After you collect this data, you can reduce the fragmentation problem by aligning your blocks to common sizes.

The simplest and best technique for alignment is to use block sizes that are powers of 2.

For example, to round a size up to the closest multiple of 16, you can use the following function:

// Round a size up to the next multiple of 16; the same mask trick
// works for any power-of-2 alignment.
size_t align(size_t size)
{
    return (size + 15) & ~static_cast<size_t>(15);
}

Of course, you should use your statistics to select the best power of 2 to align to. The goal is to pick a size such that most of your allocations fall into a few block-size ranges, while at the same time keeping the overhead of the alignment reasonable.

Good luck...

Barak C
A: 

Switching on the low-fragmentation heap (LFH) can help do the job on older Windows systems; on newer systems (Vista, Server 2008) it is switched on by default:

  #include <windows.h>

  // Enable the low-fragmentation heap (value 2) on every heap in the process.
  HANDLE heaps[1025];
  DWORD nheaps = GetProcessHeaps((sizeof(heaps) / sizeof(HANDLE)) - 1, heaps);
  for (DWORD i = 0; i < nheaps; ++i) {
    ULONG enableLFH = 2;
    HeapSetInformation(heaps[i], HeapCompatibilityInformation,
                       &enableLFH, sizeof(enableLFH));
  }

There is a tool, VMMap, from Sysinternals (now Microsoft) that gives a good overview of memory fragmentation.

Totonga