views: 676
answers: 9

I am running my C++ application on an Intel XScale device. The problem is that when I run my application off-target (Ubuntu) with Valgrind, it does not show any memory leaks.

But when I run it on the target system, it starts with 50K of free memory, which drops to 2K overnight. How can I catch this kind of leak, which Valgrind does not show?

+3  A: 

This may not be a leak, but just the runtime heap not releasing memory to the operating system. It can also be fragmentation.

Possible ways to overcome this:

  1. Split the program into two applications. The master application will contain the simple logic, with little or no dynamic memory usage. It will start the worker application to do the actual work, in chunks small enough that the worker will not run out of memory, and will restart that application periodically. This way memory is periodically returned to the operating system.

  2. Write your own memory allocator. For example, you can allocate a dedicated heap and only allocate memory from there, then free the dedicated heap entirely. This requires the operating system to support multiple heaps. (A rough sketch of this idea follows the list.)
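
For the second approach, here is a rough sketch of what a dedicated-heap (arena) allocator could look like. The class and its names are illustrative, not taken from the answer; it simply grabs one block up front, hands out pieces of it, and gives the whole block back at once.

#include <cstddef>
#include <cstdlib>
#include <new>

class Arena
{
public:
    explicit Arena(std::size_t size)
        : base_(static_cast<char*>(std::malloc(size))), size_(size), used_(0)
    {
        if (!base_) throw std::bad_alloc();
    }
    ~Arena() { std::free(base_); }            // the entire "heap" goes back at once

    void* allocate(std::size_t n)
    {
        n = (n + 7) & ~static_cast<std::size_t>(7);   // keep allocations 8-byte aligned
        if (used_ + n > size_) throw std::bad_alloc();
        void* p = base_ + used_;
        used_ += n;
        return p;
    }

    void reset() { used_ = 0; }               // reuse the block without touching the OS

private:
    char*       base_;
    std::size_t size_;
    std::size_t used_;
};

Freeing individual allocations is deliberately not supported in this sketch; that trade-off is what keeps the arena immune to fragmentation.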

Also note that it's possible your program runs differently on Ubuntu than on the target system, so different execution paths are taken and the code that leaks memory runs on the target system but not on Ubuntu.

sharptooth
But what happens is that the XScale device, at some point, runs out of memory and then the application crashes. So it is imperative to stop the continuous consumption of memory. Can you please suggest some way?
Ajay
+3  A: 

It might not be an actual memory leak, but rather a case of steadily increasing memory usage. For example, it could be building a continually growing string:

std::string s;
for (int i = 0; i < n; i++)
  s += "a";   // keeps growing, but is never lost, so Valgrind reports no leak

50K isn't that much; maybe you should go over your source by hand and see what might be causing the issue.

1800 INFORMATION
Well, actually it starts with 50K of memory free and ends up with 2K free. The code is roughly 100,000 lines.
Ajay
Well, that's only 48 KB leaked. Sorry to say, if it is not a real leak you may have to search the code for likely culprits... how about logging memory allocations?
1800 INFORMATION
If Valgrind doesn't show leaks, this is the most likely scenario. That's not to say it's the only one, but it's a situation Valgrind can't detect. The best thing you can do is write core dumps to disk at intervals and compare them. If you find some buffer that's larger in each successive dump, examine the code around it. It's tedious, granted...
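
If you want to try the allocation-logging idea mentioned above, a hypothetical sketch of global operator new/delete overrides is below (the output format is made up; on a small device you would typically log to a serial port or just keep counters rather than printing):

#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <new>

// Log every allocation and free; the array forms (new[]/delete[]) can be
// overridden the same way.
void* operator new(std::size_t size)
{
    if (size == 0) size = 1;
    void* p = std::malloc(size);
    if (!p) throw std::bad_alloc();
    std::fprintf(stderr, "new %lu bytes -> %p\n",
                 static_cast<unsigned long>(size), p);
    return p;
}

void operator delete(void* p) noexcept
{
    std::fprintf(stderr, "delete %p\n", p);
    std::free(p);
}
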
A: 

If your memory usage goes down, I don't think it can be defined as a memory leak. Where are you getting your reports of memory usage? The system might just have moved most of your program's memory into virtual memory.

All I can add is that Valgrind is known to be pretty good at finding memory leaks!

Benoît
A: 

Also, are you sure that when you profiled your code, the code coverage was enough to exercise all the code paths that might be executed on the target platform?

Valgrind certainly does not lie. As has been pointed out, this might indeed be the runtime heap not releasing the memory, but I would think otherwise.

Abhay
A: 

Are you using any sophisticated technique to track the scope of objects? If yes, then Valgrind may not be smart enough, though you could try setting any XScale-related options with Valgrind.

I am unaware of any XScale-related options in Valgrind. How do I do that?
Ajay
A: 

Most applications show a pattern of memory use like this:

  • they use very little when they start
  • as they create data structures they use more and more
  • as they start deleting old data structures or reusing existing ones, they reach a steady state where memory use stays roughly constant

If your app is continuously increasing in size, you may have a leak. If it increases in size over a period and then reaches a relatively steady state, you probably don't.

anon
+10  A: 

A common culprit with these small embedded devices is memory fragmentation: you might have free memory in your application, but only in small gaps between objects. A common solution to this is the use of a dedicated allocator (operator new in C++) for the most common classes. Memory pools used purely for objects of size N don't fragment - the space between two objects will always be a multiple of N.
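
A hypothetical sketch of such a fixed-size pool is below ("PacketPool" and the sizes are made-up names and numbers, not from the answer). Because every block is the same size, freeing and reallocating blocks can never fragment the pool:

#include <cstddef>
#include <new>

class PacketPool
{
public:
    enum { BlockSize = 64, BlockCount = 128 };   // tune both to your class and workload

    PacketPool()
    {
        // chain all blocks into a free list up front
        for (int i = 0; i < BlockCount; ++i)
            blocks_[i].next = (i + 1 < BlockCount) ? &blocks_[i + 1] : 0;
        free_ = &blocks_[0];
    }

    void* allocate()
    {
        if (!free_) throw std::bad_alloc();      // pool exhausted
        Block* b = free_;
        free_ = free_->next;
        return b;
    }

    void release(void* p)
    {
        Block* b = static_cast<Block*>(p);
        b->next = free_;
        free_ = b;
    }

private:
    union Block { Block* next; char data[BlockSize]; };
    Block  blocks_[BlockCount];
    Block* free_;
};

A frequently allocated class would then override its operator new and operator delete to call allocate() and release() on one static PacketPool instance.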

MSalters
+1  A: 

This does sound like fragmentation. Fragmentation is caused by allocating objects on the heap, say:

object1
object2
object3
object4

And then deleting some objects

object1

object3
object4

You now have a hole in the memory that is unused. If you allocate another object that's too big for the hole, the hole will remain wasted. Eventually, with enough memory churn, you can end up with so many holes that they waste your memory.

The way around this is to try and decide your memory requirements up front. If you've got particular objects that you know you are creating many of, try and ensure they're the same size.

You can use a pool to make the allocations more efficient for a particular class... or at least let you track it better so you can understand what's going on and come up with a good solution.

One way of doing this is to create a single static pool:

typedef unsigned char BYTE;   // or whatever byte type your platform provides

struct Slot
{
    Slot() : free(true) {}
    bool free;
    BYTE data[20];  // you'll need to tune the value 20 to what your program needs
};
Slot pool[500]; // you'll need to pick a good pool size too.

Create the pool up front when your program starts, and pre-allocate it so that it is as big as the maximum requirement of your program. You may want to HeapAlloc it (or use the equivalent in your OS) so that you can control when it is created, somewhere in your application startup.

Then override the new and delete operators for a suspect class so that they return slots from this pool. Your objects will then be stored in this pool.

You can override new and delete for other classes of the same size so that they are put in this pool too.
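
A hypothetical sketch of what such an override could look like for a made-up class SuspectObject, building on the Slot pool above (a real version would also need to handle alignment and thread safety):

#include <cstddef>
#include <new>

class SuspectObject
{
public:
    void* operator new(std::size_t)    // assumes sizeof(SuspectObject) <= sizeof(Slot::data)
    {
        for (std::size_t i = 0; i < sizeof(pool) / sizeof(pool[0]); ++i)
        {
            if (pool[i].free)
            {
                pool[i].free = false;
                return pool[i].data;
            }
        }
        throw std::bad_alloc();        // pool exhausted
    }

    void operator delete(void* p) noexcept
    {
        for (std::size_t i = 0; i < sizeof(pool) / sizeof(pool[0]); ++i)
        {
            if (pool[i].data == p)
            {
                pool[i].free = true;
                return;
            }
        }
    }

    // ... the actual members of the class, no bigger than Slot::data ...
};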

Create pools of different sizes for different objects.

Just go for the worst offenders at first.

I've done something like this before and it solved my problem on an embedded device. I was also using a lot of STL, so I created a custom allocator (google for "stl custom allocator" - there are loads of links). This was useful for records stored in a mini-database my program used.
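
For reference, a minimal sketch of the shape such an STL allocator can take (C++11-style, with made-up names; a pool-backed version would replace the two global operator calls with pool operations):

#include <cstddef>
#include <new>

template <typename T>
struct PoolAllocator
{
    typedef T value_type;

    PoolAllocator() {}
    template <typename U> PoolAllocator(const PoolAllocator<U>&) {}

    T* allocate(std::size_t n)
    {
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }
};

template <typename T, typename U>
bool operator==(const PoolAllocator<T>&, const PoolAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const PoolAllocator<T>&, const PoolAllocator<U>&) { return false; }

It can then be plugged into a container, for example std::vector<Record, PoolAllocator<Record> >.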

Scott Langham
A: 

You can use the massif tool from Valgrind, which will show you where the most memory is allocated and how it evolves over time.
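
For example, on the Ubuntu build (the program name is a placeholder):

valgrind --tool=massif ./myapp
ms_print massif.out.<pid>

The second command prints a series of heap snapshots over time, broken down by allocation site, which makes steadily growing data structures easy to spot.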

jpalecek