I'm experiencing what appears to be a stack/heap collision in an embedded environment (see this question for some background).

I'd like to try rewriting the code so that it doesn't allocate memory on the heap.

Can I write an application without using the heap in C? For example, how would I use the stack only if I have a need for dynamic memory allocation?

+1  A: 

You can't do dynamic memory allocation in C without using heap memory. It would be pretty hard to write a real-world application without using the heap. At least, I can't think of a way to do this.

BTW, Why do you want to avoid heap? What's so wrong with it?

Aamir
I've added a link which gives some background
Matthew Murdoch
Actually, you can use the underlying OS's APIs to allocate system memory without using the runtime heap. You could, in Windows CE for example, allocate memory using memory mapped files, and you wouldn't need to call malloc() a single time.
Dave Van den Eynde
But that is platform-dependent, right?
Aamir
@Dave The environment I'm working in has no OS (not even an RTOS)...
Matthew Murdoch
Ah, I know the feeling. Look into how you can configure the compiler's decisions about memory layout. Also, explain what exactly you are using malloc() for, perhaps we will be able to suggest some improvements to your memory usage strategy.
Artelius
@Artelius - see http://stackoverflow.com/questions/1027416/how-can-i-prevent-the-need-to-copy-strings-passed-to-a-c-constructor. Thanks.
Matthew Murdoch
+2  A: 

1: Yes, you can - if you don't need dynamic memory allocation. But it could perform horribly, depending on your app. (I.e. not using the heap won't automatically give you a better app.)

2: No, I don't think you can allocate memory dynamically on the stack, since that part is managed by the compiler.

soulmerge
That depends; some platforms have library calls like alloca(3) which allocate dynamic memory on the stack. Most compilers will also let you allocate variable-sized arrays now, which you can coerce into acting like dynamic void* memory pointers, although this certainly can't be recommended.
Jason Coco
But importantly, this memory is deallocated when the function ends!
Artelius
@Jason Coco/Artelius: Thx, didn't know that!
soulmerge
+7  A: 

Funnily enough, I once saw a database application which relied completely on statically allocated memory. This application had strong restrictions on field and record lengths. Even the embedded text editor (I still shiver calling it that) was unable to create texts with more than 250 lines. That answered a question I'd had at the time: why are only 40 records allowed per client?

In serious applications you cannot calculate the memory requirements of your running system in advance. Therefore it is a good idea to allocate memory dynamically as you need it. Nevertheless, it is common in embedded systems to preallocate the memory you will really need, to prevent unexpected failures due to memory shortage.

You might allocate dynamic memory on the stack using the alloca() library call. But this memory is tied to the execution context of the current function, and it is a bad idea to return memory of this type to the caller, because it would be overwritten by later subroutine calls.

So I might answer your question with a crisp and clear "it depends"...

Ralf Edmund
+3  A: 

It's possible to allocate a large amount of memory from the stack in main() and have your code sub-allocate it later on. It's a silly thing to do since it means your program is taking up memory that it doesn't actually need.

I can think of no reason (save some kind of silly programming challenge or learning exercise) for wanting to avoid the heap. If you've "heard" that heap allocation is slow and stack allocation is fast, it's simply because the heap involves dynamic allocation. If you were to dynamically allocate memory from a reserved block within the stack, it would be just as slow.

Stack allocation is easy and fast because you may only deallocate the "youngest" item on the stack. It works for local variables. It doesn't work for dynamic data structures.

Edit: Having seen the motivation for the question...

Firstly, the heap and the stack have to compete for the same amount of available space. Generally, they grow towards each other. This means that if you move all your heap usage into the stack somehow, then rather than stack colliding with heap, the stack size will just exceed the amount of RAM you have available.

I think you just need to watch your heap and stack usage (you can grab pointers to local variables to get an idea of where the stack is at the moment) and if it's too high, reduce it. If you have lots of small dynamically-allocated objects, remember that each allocation has some memory overhead, so sub-allocating them from a pool can help cut down on memory requirements. If you use recursion anywhere think about replacing it with an array-based solution.

Artelius
I'm working in an embedded environment (see the linked question) and experiencing what appears to be a heap/stack collision. The issue is non-performance related.
Matthew Murdoch
Ok, sorry for jumping. You really should put more detail in your questions.
Artelius
No problem. +1 for your stack/heap space competition point.
Matthew Murdoch
@Artelius: it's very common for smaller embedded processors not to use the heap at all, and hence no dynamic memory allocation.
Steve Melnikoff
Well if you're in an OS-free or RTOS environment, you can use the free memory pretty well however you please - you can implement some sort of malloc() yourself if you have to.
Artelius
+2  A: 

Yes, it's doable. Shift your dynamic needs out of memory and onto disk (or whatever mass storage you have available) -- and suffer the consequent performance penalty.

E.g., you need to build and reference a binary tree of unknown size. Specify a record layout describing a node of the tree, where pointers to other nodes are actually record numbers in your tree file. Write routines that let you add to the tree by writing an additional record to the file, and walk the tree by reading a record, finding its child as another record number, reading that record, and so on.

This technique allocates space dynamically, but it's disk space, not RAM space. All the routines involved can be written using statically allocated space -- on the stack.

John Pirie
I'm working in an embedded environment so there's no disk or any other storage device for that matter... I could add one though so +1.
Matthew Murdoch
+20  A: 

I did it once in an embedded environment where we were writing "super safe" code for biomedical machines. Malloc()s were explicitly forbidden, partly because of the resource limits and partly because of the unexpected behavior you can get from dynamic memory (look up malloc(), VxWorks/Tornado and fragmentation and you'll find a good example).

Anyway, the solution was to plan the needed resources in advance and statically allocate the "dynamic" ones in a vector contained in a separate module, with some kind of special-purpose allocator handing out and taking back pointers. This approach avoided fragmentation issues altogether and helped get finer-grained error info if a resource was exhausted.
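A minimal sketch of such a special-purpose allocator over a static vector of fixed-size blocks (all names and sizes here are illustrative, not from the original system):

```c
#include <stddef.h>

/* All "dynamic" storage lives in one statically allocated pool; a free
 * list is threaded through the unused blocks. */

#define BLOCK_SIZE  32
#define BLOCK_COUNT 16

typedef union block {
    union block *next;                 /* valid only while the block is free */
    unsigned char data[BLOCK_SIZE];
} block;

static block  pool[BLOCK_COUNT];
static block *free_list;

void pool_init(void) {
    for (size_t i = 0; i + 1 < BLOCK_COUNT; i++)
        pool[i].next = &pool[i + 1];
    pool[BLOCK_COUNT - 1].next = NULL;
    free_list = &pool[0];
}

void *pool_alloc(void) {
    if (!free_list) return NULL;       /* exhaustion is an explicit error */
    block *b = free_list;
    free_list = b->next;
    return b->data;
}

void pool_free(void *p) {
    block *b = (block *)p;
    b->next = free_list;
    free_list = b;
}
```

Because every allocation is the same size, freeing and reallocating can never fragment the pool, and running out of blocks is reported explicitly instead of failing in surprising ways.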

This may sound silly on big iron, but on embedded systems, and particularly on safety-critical ones, it's better to have a very good understanding of which resources - time and space - are needed beforehand, if only for the purpose of sizing the hardware.

Metiu
+1 Great answer. Thank you.
Matthew Murdoch
This is exactly what I've done on the safety critical / high availability (embedded) systems I've worked on. Buffer pools, static allocation, etc.
Dan
+1 Same here, I haven't used malloc in a while on an embedded system.
JeffV
+4  A: 

You can use the alloca() function, which allocates memory on the stack - this memory is freed automatically when you exit the function. alloca() is not standard C, but it is supported by GCC, which you are using, so it should be available.

See man alloca.

Another option is to use variable-length arrays, but you need to use C99 mode.

qrdl
+2  A: 

Embedded applications need to be careful with memory allocations but I don't think using the stack or your own pre-allocated heap is the answer. If possible, allocate all required memory (usually buffers and large data structures) at initialization time from a heap. This requires a different style of program than most of us are used to now but it's the best way to get close to deterministic behavior.

A large heap that is sub-allocated later would still be subject to running out of memory and the only thing to do then is have a watchdog kick in (or similar action). Using the stack sounds appealing but if you're going to allocate large buffers/data structures on the stack you have to be sure that the stack is large enough to handle all possible code paths that your program could execute. This is not easy and in the end is similar to a sub-allocated heap.

dpp
Presumably when you say 'allocate all required memory ... at initialization time' you mean allocate on the heap?
Matthew Murdoch
No - not necessarily, or even usually. You pre-allocate arrays, elements of which are then used (allocated) as required. This can even lead to space saving because instead of storing 8-byte pointers, you can store 2-byte (or 4-byte) array indexes.
Jonathan Leffler
@Matthew Murdoch: yes from a heap -- edited answer.
dpp
@Jonathan Leffler: I would still worry about not knowing how much memory I'd require at runtime if I used that scheme. What's the difference between this and a heap?
dpp
@dp: you define the systems within the constraints imposed (e.g. 64 KB RAM - really!), and you recognize that you do not have virtual memory, and you decide how you will get rid of unwanted data when there is no space for the most recent new bit of data, and it is a very demanding discipline indeed. The difference between this and a heap is that parts of a heap can be reallocated to different purposes, but also a heap generally has house-keeping overhead that uses precious space.
Jonathan Leffler
@Jonathan Leffler: perhaps it's situational. In embedded systems with very little memory I personally prefer to have everything allocated at startup so I can be sure that I won't run out of memory at runtime. This way, a memory budget for the application is established and I can feel (relatively) safe that I won't execute some path of code that allocates more memory than I thought was possible. I've done it the way you suggest, and in those cases I've been sure to test the runtime memory usage, as much as possible, in as many scenarios as I could.
dpp
A: 

My foremost concern is: does abolishing the heap really help?

Since your wish to avoid the heap stems from a stack/heap collision, and assuming the start of the stack and the start of the heap are set properly (e.g. small sample programs in the same setting have no such collision), the collision means the hardware does not have enough memory for your program.

By not using the heap you may indeed save some space wasted by heap fragmentation; but if your program does not use the heap for lots of irregular, large allocations, the waste there is probably not much. I see your collision problem as more of an out-of-memory problem - something not fixable by merely avoiding the heap.

My advice on tackling this case:

  1. Calculate the total potential memory usage of your program. If it is too close to but not yet exceeding the amount of memory you prepared for the hardware, then you may
  2. Try using less memory (improve the algorithms) or using the memory more efficiently (e.g. smaller and more-regular-sized malloc() to reduce heap fragmentation); or
  3. Simply buy more memory for the hardware

Of course you may try pushing everything into a pre-defined static memory space, but then it is very probable that the stack will overwrite the static memory instead. So improve the algorithms to use less memory first, and buy more memory second.

billyswong
A: 

I'd attack this problem in a different way - if you think the stack and heap are colliding, then test this by guarding against it.

For example (assuming a *ix system), try mprotect()ing the last stack page (assuming a fixed-size stack) so it is not accessible. Or - if your stack grows - mmap a page between the stack and the heap. If you get a SIGSEGV on your guard page you know you've run off the end of the stack or heap, and by looking at the address of the fault you can see which of the two collided.

Dave Rigby
A: 

It is often possible to write your embedded application without using dynamic memory allocation. In many embedded applications the use of dynamic allocation is deprecated because of the problems that can arise due to heap fragmentation. Over time it becomes highly likely that there will not be a suitably sized region of free heap space to allow the memory to be allocated, and unless there is a scheme in place to handle this error the application will crash. There are various schemes to get around this; one is to always allocate fixed-size objects on the heap so that a new allocation will always fit into a freed memory area. Another is to detect the allocation failure and perform a defragmentation process on all of the objects on the heap (left as an exercise for the reader!).

You do not say which processor or toolset you are using, but in many, the static data, heap and stack are placed in separate segments defined by the linker. If this is the case then it must be that your stack is growing outside the memory space that you have defined for it. The solution is to reduce the heap and/or static variable size (assuming that these two are contiguous) so that more is available for the stack. It may be possible to reduce the heap unilaterally, although this can increase the probability of fragmentation problems. Ensuring that there are no unnecessary static variables will free some space, at the cost of possibly increasing stack usage if the variables are made automatic.

Ian