I'm trying to take a snapshot of the memory used by a large application that runs on both Unix and Windows. My ultimate aim is to produce a kind of chart breaking down how much memory each area of the code uses.

The program is split into about 30 different projects, most of which are either static libraries or dynamic DLLs. Some of these are written in C, some in C++, and others in a mixture of the two. In total, the code across all the projects is about 600,000 lines.

For the heap I could try to override every 'malloc/free' and 'new/delete' across all the projects and track allocations that way, but that is quite daunting in an application this size.

Also, that wouldn't pick up all the static global data littered around the projects.

Thanks for any help.

A: 

If you are working with ELF binaries, you could check the object files ("*.o") with an ELF analyzer before linking, to see how big the static memory sections are and how much space the bss (uninitialized/zero-initialized static data) will take up once loaded. The binutils size tool prints the text/data/bss figures for each object file, and readelf -S lists every section with its size.

Arkaitz Jimenez
+5  A: 

You could give Valgrind a try. Here is a quote about one of its tools:

Massif

Massif is a heap profiler. It performs detailed heap profiling by taking regular snapshots of a program's heap. It produces a graph showing heap usage over time, including information about which parts of the program are responsible for the most memory allocations. The graph is supplemented by a text or HTML file that includes more information for determining where the most memory is being allocated. Massif runs programs about 20x slower than normal.

It is only supported on Linux at the moment, but if doing the analysis on Linux and applying the results to the Windows version works for you, this might help. A typical run is "valgrind --tool=massif ./yourapp", followed by "ms_print massif.out.<pid>" to view the results.

KIV