views: 969

answers: 5

I need to allocate large blocks of memory with new.

I am stuck with using new because I am writing a mock for the producer side of a two-part application. The actual producer code allocates these large blocks, and my code is responsible for deleting them after processing them.

Is there a way I can ensure my application is capable of allocating such a large amount of memory from the heap? Can I set the heap to a larger size?

In my case it is 64 blocks of 288000 bytes each. Sometimes 12 of them allocate successfully, other times 27, before I get a std::bad_alloc exception.

This is C++ with GCC on Linux (32-bit).

+4  A: 

It's possible that you are being limited by the process's ulimit; run ulimit -a and check the virtual memory and data seg size limits. Other than that, can you post your allocation code so we can see what's actually going on?
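For reference, here is a minimal sketch (not from the original answer) of querying those same limits from inside the process with getrlimit(2):

    // Minimal sketch: query the limits ulimit reports, from inside
    // the process, via getrlimit(2).
    #include <sys/resource.h>
    #include <cstdio>

    static void show(const char* name, int which)
    {
        struct rlimit rl;
        if (getrlimit(which, &rl) == 0) {
            if (rl.rlim_cur == RLIM_INFINITY)
                std::printf("%s: unlimited\n", name);
            else
                std::printf("%s: %lu bytes\n", name,
                            (unsigned long) rl.rlim_cur);
        }
    }

    int main()
    {
        show("virtual memory (RLIMIT_AS)", RLIMIT_AS);
        show("data seg size (RLIMIT_DATA)", RLIMIT_DATA);
        return 0;
    }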

Kieron
JeffV
@Jeff: You're only trying to allocate about 18 MB! That's a pittance on any modern computer. There *must* be something else allocating tons of memory inside your program.
j_random_hacker
A: 

I would suggest allocating all your memory at program startup and using placement new to position your buffers. Why this approach? Well, you can manually keep track of fragmentation and such. There is no portable way of determining how much memory can be allocated to your process. I'm positive there's a Linux-specific system call that will get you that info (I can't think of what it is). Good luck.
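A rough sketch of that approach (the block size comes from the question; the Block type itself is an assumption, since the real type isn't shown):

    #include <new>
    #include <cstddef>
    #include <vector>

    // Illustrative block type -- the real producer's type isn't shown in
    // the question.
    struct Block { char payload[288000]; };

    int main()
    {
        const std::size_t kBlocks = 64;

        // One big allocation at startup; if it throws, we find out
        // immediately rather than partway through processing.
        std::vector<char> arena(kBlocks * sizeof(Block));

        // Construct each block in place with placement new.  Alignment is
        // not a concern here because Block only holds chars; a type with
        // stricter alignment would need an aligned arena.
        std::vector<Block*> blocks(kBlocks);
        for (std::size_t i = 0; i < kBlocks; ++i)
            blocks[i] = new (&arena[i * sizeof(Block)]) Block;

        // ... hand the blocks to the consumer ...

        // Destroy in place; the memory itself is released with 'arena'.
        for (std::size_t i = 0; i < kBlocks; ++i)
            blocks[i]->~Block();
        return 0;
    }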

+5  A: 

With respect to new in C++/GCC/Linux(32bit)...

It's been a while, and it's implementation-dependent, but I believe new will, behind the scenes, invoke malloc(). And malloc(), unless you ask for something exceeding the process's address space or the specified (ulimit/getrusage) limits, won't fail, even when your system doesn't have enough RAM + swap. For example, a malloc of 1 GB on a system with 256 MB of RAM and no swap will, I believe, succeed.

However, when you go use that memory, the kernel supplies the pages through a lazy-allocation mechanism. At that point, when you first read or write to that memory, if the kernel cannot allocate memory pages to your process, it kills your process.
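A small sketch of that lazy-allocation behaviour (assuming a Linux box with the default overcommit policy; the 1 GB figure is just an example):

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>
    #include <cstddef>

    int main()
    {
        const std::size_t big = 1UL << 30;   // ask for 1 GB
        char* p = static_cast<char*>(std::malloc(big));
        std::printf("malloc(1 GB): %s\n", p ? "succeeded" : "failed");

        if (p) {
            // Touching the pages is what forces the kernel to supply them;
            // on a memory-starved machine this is where the OOM killer can
            // strike, not at the malloc() call itself.
            // std::memset(p, 0, big);
            std::free(p);
        }
        return 0;
    }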

This can be a problem on a shared computer, when your colleague has a slow core leak. Especially when he starts knocking out system processes.

So the fact that you are seeing std::bad_alloc exceptions is "interesting".

Now, new will run the constructor on the allocated memory, touching all those memory pages before it returns. Depending on the implementation, it might be trapping the out-of-memory signal.

Have you tried this with plain ol' malloc?

Have you tried running the "free" program? Do you have enough memory available?

As others have suggested, have you checked limit/ulimit/getrusage() for hard & soft constraints?

What does your code look like, exactly? I'm guessing new ClassFoo [ N ]. Or perhaps new char [ N ].

What is sizeof(ClassFoo)? What is N?

Allocating 64*288000 (17.58Meg) should be trivial for most modern machines... Are you running on an embedded system or something otherwise special?

Alternatively, are you linking with a custom new allocator? Does your class have its own new allocator?

Does your data structure (class) allocate other objects as part of its constructor?

Has someone tampered with your libraries? Do you have multiple compilers installed? Are you using the wrong include or library paths?

Are you linking against stale object files? Do you simply need to recompile all your source files?

Can you create a trivial test program? Just a couple lines of code that reproduces the bug? Or is your problem elsewhere, and only showing up here?
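A trivial test along those lines, using the sizes from the question, might look like this (a sketch, not the poster's actual code):

    #include <cstdio>
    #include <cstddef>
    #include <new>

    int main()
    {
        const int kBlocks = 64;
        const std::size_t kBlockSize = 288000;
        char* blocks[kBlocks] = {};

        int allocated = 0;
        try {
            for (; allocated < kBlocks; ++allocated)
                blocks[allocated] = new char[kBlockSize];
            std::printf("all %d blocks allocated\n", kBlocks);
        } catch (const std::bad_alloc&) {
            std::printf("std::bad_alloc after %d blocks\n", allocated);
        }

        // delete[] on a null pointer is a no-op, so this is safe even if
        // the loop stopped early.
        for (int i = 0; i < kBlocks; ++i)
            delete[] blocks[i];
        return 0;
    }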

--

For what it's worth, I've allocated over 2 GB data blocks with new on 32-bit Linux under g++. Your problem lies elsewhere.

Mr.Ree
+1 for the thorough breakdown!
JeffV
A: 

The fact that you're getting different behavior when you run the program at different times makes me think that the allocation code isn't the real problem. Instead, somebody else is using the memory and you're the canary finding out it's missing.

If that "somebody else" is in your program, you should be able to find it by using Valgrind.

If that somebody else is another program, you should be able to determine that by going to a different runlevel (although you won't necessarily know the culprit).

Max Lybbert
A: 

Update:

I have since fixed an array indexing bug and it is allocating properly now.

If I had to guess... I was walking all over my heap and corrupting malloc's internal data structures. (??)

JeffV
That would certainly do it! Sometimes I use INLINED accessor methods with assert statements to pick up on such problems during the debugging phase. (They get compiled out for production code.)
Mr.Ree
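A sketch of the assert-guarded accessor idea Mr.Ree describes (the names and sizes here are made up for illustration):

    #include <cassert>
    #include <cstddef>

    class BlockArray {
    public:
        BlockArray(char* data, std::size_t count, std::size_t blockSize)
            : data_(data), count_(count), blockSize_(blockSize) {}

        // Inline accessor: the assert trips on an out-of-range index before
        // it can scribble over malloc's bookkeeping.  Compiling with
        // -DNDEBUG removes the check for production builds.
        char* block(std::size_t i)
        {
            assert(i < count_ && "block index out of range");
            return data_ + i * blockSize_;
        }

    private:
        char*       data_;
        std::size_t count_;
        std::size_t blockSize_;
    };

    int main()
    {
        char storage[10 * 16];
        BlockArray arr(storage, 10, 16);
        arr.block(3)[0] = 'x';   // fine
        // arr.block(12);        // would trip the assert in a debug build
        return 0;
    }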