views: 65
answers: 2

When I run Valgrind on my code, I get several thousand instances of

12 bytes in 1 blocks are possibly lost in loss record 545 of 29,459
   at 0x7FCC050: operator new(unsigned int) (vg_replace_malloc.c:214)
   by 0x87E39B1: __gnu_cxx::new_allocator<T>::allocate(unsigned int, void const*) (new_allocator.h:89)
   ...
   ...

From various posts I was able to determine that this is "not a bug, but a feature", since it is the way the GNU libraries provide highly efficient allocation for the STL. That said, seeing several thousand of these makes it hard to find true bugs.

How can I set up Valgrind to not show these errors?

Note: I have tried setting the environment variables GLIBCXX_FORCE_NEW, G_SLICE=always-malloc, and G_DEBUG=gc-friendly,resident-modules, and nothing changed.

+1  A: 

Use valgrind --gen-suppressions=yes to generate suppression statements for the errors it displays. You can then re-run valgrind with these error messages suppressed using --suppressions=<filename>.
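With --gen-suppressions=yes valgrind prompts after each error and prints a ready-made suppression block; --gen-suppressions=all prints them without prompting. A minimal sketch of what such an entry and the re-run might look like (the mangled frame names, the file name stl.supp, and my_program are illustrative placeholders; yours will differ):

    {
       gnu_cxx_pool_alloc_possible_leak
       Memcheck:Leak
       fun:_Znwj
       fun:_ZN9__gnu_cxx13new_allocator*
    }

    valgrind --leak-check=full --suppressions=stl.supp ./my_program

Wildcards (*) are allowed in the fun: frames, so one entry can cover many call stacks that end in the same allocator.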

John Kugelman
That was my now-deleted answer too :) But what remains of valgrind if you suppress operator new?
Laurynas Biveinis
A: 

Have you tried the suggestions in the Valgrind FAQ for disabling the alloc pool? Note that the environment variable changes depending on the version of gcc used.
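For what it's worth, the FAQ's note is that GCC 3.2.2 through 3.3 read GLIBCPP_FORCE_NEW, while GCC 3.4 and later read GLIBCXX_FORCE_NEW, and the variable has to be in the environment of the process that valgrind launches. Something along these lines (my_program is a placeholder):

    GLIBCXX_FORCE_NEW=1 valgrind --leak-check=full ./my_program   # gcc 3.4 and later
    GLIBCPP_FORCE_NEW=1 valgrind --leak-check=full ./my_program   # gcc 3.2.2 - 3.3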

Devon_C_Miller
If you are referring to GLIBCXX_FORCE_NEW, it doesn't help. Are you referring to another way of disabling the alloc pool?
Yes. However, upon further investigation, it looks like there's a gcc bug (http://gcc.gnu.org/bugzilla/show_bug.cgi?id=31777): there is a potential race condition where the variable used to force the use of new can exceed 1, while the test in pool_allocator.h checks for == 1. Unfortunately, it does not appear that the fix described in that bug has been included in any gcc release.
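The gist of the race, paraphrased as a standalone C++ sketch (this is not the actual pool_allocator.h source, just an illustration of the pattern described in the bug report):

    #include <cstddef>
    #include <cstdlib>
    #include <new>

    static int force_new_flag = 0;   // shared flag, updated without synchronization

    void* allocate_sketch(std::size_t n)
    {
        if (force_new_flag == 0)
        {
            // Two threads can both observe 0 here before either update is
            // visible, so the flag can end up at 2 instead of the expected 1.
            if (std::getenv("GLIBCXX_FORCE_NEW"))
                ++force_new_flag;
            else
                --force_new_flag;
        }

        if (force_new_flag == 1)       // the "== 1" test; skipped when the flag hit 2
            return ::operator new(n);  // the plain new that GLIBCXX_FORCE_NEW is meant to force

        return ::operator new(n);      // stand-in for the pooled path the bug falls back into
    }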
Devon_C_Miller