How should I manage memory in my mission critical embedded application?

I found some articles with google, but couldn't pinpoint a really useful practical guide.

DO-178B forbids dynamic memory allocation, but how do you manage memory then? Preallocate everything in advance and pass a pointer to each function that needs memory? Allocate it on the stack? Use a global static allocator (but then it's very similar to dynamic allocation)?

Answers can take the form of a regular answer, a reference to a resource, or a reference to a good open-source embedded system as an example.

Clarification: the issue here is not whether memory management is available for the embedded system, but what a good design for an embedded system is, in order to maximize reliability.

I don't understand why statically preallocating a buffer pool, and dynamically getting and dropping it, is different from dynamically allocating memory.

A: 

Allocating everything from the stack is commonly done in embedded systems, or anywhere else where the possibility of an allocation failing is unacceptable. I don't know what DO-178B is, but if the problem is that malloc is not available on your platform, you can also implement it yourself (implementing your own heap), though this can still lead to an allocation failing when you run out of space, of course.
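
For illustration, a routine that would otherwise malloc() a working buffer can declare one on the stack with a compile-time upper bound, so there is no run-time allocation that can fail. A minimal C sketch (the MAX_SAMPLES bound and function name are invented for this example):

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_SAMPLES 64u  /* worst-case input size fixed at design time */

/* Hypothetical filter: works entirely out of a stack buffer,
 * so no heap allocation can fail at run time. */
int32_t filter_average(const int32_t *samples, size_t count)
{
    int32_t work[MAX_SAMPLES];   /* fixed-size stack buffer */
    int64_t sum = 0;

    if (count > MAX_SAMPLES) {
        count = MAX_SAMPLES;     /* clamp to the design limit */
    }
    for (size_t i = 0; i < count; ++i) {
        work[i] = samples[i];    /* e.g. copy/preprocess in place */
        sum += work[i];
    }
    return (count > 0) ? (int32_t)(sum / (int64_t)count) : 0;
}
```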

Tronic
DO-178B is a standard for avionics software. The problem is not the availability of malloc, but a good mission-critical software design.
Elazar Leibovich
+1  A: 

Real-time, long-running, mission-critical systems should not dynamically allocate and free memory from the heap. If you need it and cannot design around it, then write your own allocator and fixed-pool management scheme. Yes, allocate a fixed amount ahead of time whenever possible. Anything else is asking for eventual trouble.

kenny
A: 

There's no way to be 100% sure.

You may look at the FreeRTOS memory allocator examples. Those use a static pool, if I'm not mistaken.
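
If memory serves, the simplest FreeRTOS scheme (heap_1) just hands out blocks from a statically declared array and never frees them. Roughly in that spirit, a minimal allocate-only sketch in C (not the actual FreeRTOS code; the pool size, alignment, and names are made up for illustration):

```c
#include <stddef.h>
#include <stdint.h>

#define POOL_SIZE 4096u
#define ALIGNMENT 8u

static uint8_t pool[POOL_SIZE];  /* assumed suitably aligned for the target */
static size_t  next_free = 0;

/* Allocate-only "heap": blocks are handed out from a static array,
 * typically during the init phase, and are never freed. */
void *static_alloc(size_t size)
{
    /* round up so every subsequent block stays aligned */
    size = (size + (ALIGNMENT - 1u)) & ~(size_t)(ALIGNMENT - 1u);

    if (size == 0 || size > POOL_SIZE - next_free) {
        return NULL;             /* out of pool space */
    }
    void *p = &pool[next_free];
    next_free += size;
    return p;
}
```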

Roman D
But is dynamically allocating memory from static pools acceptable in mission-critical applications?
Elazar Leibovich
Yes and no. Just as with any kind of dynamic allocation, there's a chance of running out of pool space. You need to ask yourself whether you can tolerate that. Apart from that, there is a bunch of problems related to the custom allocator implementation (fragmentation, optimization, yada yada).
Roman D
A: 

You might find this question interesting as well; dynamic allocation is often prohibited in space-hardened settings (actually, core memory is still useful there).

Typically, when malloc() is not available, I just use the stack. As Tronic said, the whole reason behind not using malloc() is that it can fail. If you are using a global static pool, it is conceivable that your internal malloc() implementation could be made fail proof.

It really, really, really depends on the task at hand and what the board is going to be exposed to.

Tim Post
I really, really, really don't understand how your internally implemented malloc can be made fail-proof. If you're really trying to compute something (say, the path your robot should take to reach its destination) and you happen to have too long an input, you might run out of space to store all the steps the robot needs to take. You have to take care of that, whether you're using dynamic or static memory allocation.
Elazar Leibovich
@Elazar Leibovich: If you have a statically allocated pool, you _know_ you have the memory to accomplish the task, given the design limitations of whatever you are working on. A robot that had to cross a continent would suggest an entirely different hardware configuration than one that had to walk from one room to the next. Additionally, I probably would not implement an internal malloc() on a rad hard board. Your question is good, but rather general as these problems tend to be extremely task specific.
Tim Post
@Tim Post, Fair enough, you must have enough memory to perform the specific task your embedded hardware is required to do. But sometimes you find yourself writing general pieces of code, which might be relevant for other embedded projects. For example, say you're implementing Dijkstra's algorithm. Your general-purpose Dijkstra might need to allocate memory, and should fail gracefully if it doesn't have enough. Of course the system would never fail, as it wouldn't use Dijkstra if it didn't have enough memory for it, but Dijkstra itself might fail.
Elazar Leibovich
+1  A: 

Disclaimer: I've not worked specifically with DO-178b, but I have written software for certified systems.

On the certified systems for which I have been a developer, ...

  1. Dynamic memory allocation was acceptable ONLY during the initialization phase.
  2. Dynamic memory de-allocation was NEVER acceptable.

This left us with the following options ...

  • Use statically allocated structures.
  • Create a pool of structures and then get/release them from/back to the pool.
  • For flexibility, we could dynamically allocate the size of the pools or number of structures during the initialization phase. However, once past that init phase, we were stuck with what we had.

Our company found that pools of structures, with get/release from/back into the pool, were the most useful approach. We were able to keep to the model and keep things deterministic with minimal problems.
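
A minimal sketch of that kind of pool in C (the message_t type, payload size, and pool size are invented for illustration; a real certified implementation would also handle concurrency, instrumentation, and stricter error checking):

```c
#include <stddef.h>

#define MSG_POOL_SIZE 32u

typedef struct message {
    struct message *next;        /* free-list link, unused while item is out */
    unsigned char   payload[64];
} message_t;

static message_t  msg_pool[MSG_POOL_SIZE];  /* statically allocated storage */
static message_t *free_list = NULL;

/* Called once during the init phase: chain every item onto the free list. */
void msg_pool_init(void)
{
    free_list = NULL;
    for (size_t i = 0; i < MSG_POOL_SIZE; ++i) {
        msg_pool[i].next = free_list;
        free_list = &msg_pool[i];
    }
}

/* Get: succeeds whenever an unused item exists, no matter how scattered
 * the items are; cost is constant. */
message_t *msg_get(void)
{
    message_t *item = free_list;
    if (item != NULL) {
        free_list = item->next;
    }
    return item;                 /* NULL means the pool is exhausted */
}

/* Release: returns the item to the pool in constant time. */
void msg_release(message_t *item)
{
    item->next = free_list;
    free_list  = item;
}
```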

Hope that helps.

Sparky
But aren't get/release memory pools equivalent to implementing dynamic allocation yourself?
Elazar Leibovich
No. They are similar but not equivalent. Dynamic allocation using malloc() and free() leads to memory fragmentation, and the possibility of malloc() failing due to fragmentation. Certified systems thus avoid it because it is a royal PITA to certify. Items in the pool may be scattered, but the item get routine is guaranteed to succeed provided there are unused items. It does not matter how scattered the items are in the pool.
Sparky
I'm by no means an OS expert, so correct me if I'm wrong, but, assuming there's only one thread using malloc() and free(), won't there be no fragmentation if you're indeed freeing everything you malloc'd? And in case you don't, you'll of course run into trouble anyhow... Anyhow, the fact that two threads use the same pool is indeed a big problem with traditional memory management.
Elazar Leibovich
The order in which memory is freed, and which blocks of allocated memory have been freed, will affect the degree of fragmentation. If you malloc() 1 kB, 2 kB, and 4 kB, these blocks are not guaranteed to be contiguous. If they happened to be, freeing the 2 kB block, but not the others, would introduce some degree of memory fragmentation. Remember, not all allocated structures persist for the same length of time -- some are transient.
Sparky
Here's my point. In a reactive embedded system, memory fragmentation is possible only if you haven't freed all your malloc'd data by the end of the main system loop. If you make sure you free every malloc before the end of the main loop, you're OK, whatever memory management method you choose. Agreed?
Elazar Leibovich
+2  A: 

As someone who has dealt with embedded systems, though not to such rigor so far (I have read DO-178B, though):

  • If you look at the u-boot bootloader, a lot is done with a globally placed structure. Depending on your exact application, you may be able to get away with a global structure and stack. Of course, there are re-entrancy and related issues there that don't really apply to a bootloader but might for you.
  • Preallocate, preallocate, preallocate. If you can bind the size of an array/list structure/etc. at design time, declare it as a global (or static global -- look Ma, encapsulation); a sketch follows this list.
  • The stack is very useful, use it where needed -- but be careful, as it can be easy to keep allocating off of it until you have no stack space left. Some code I once found myself debugging would allocate 1k buffers for string management in multiple functions... occasionally, the usage of the buffers would hit another program's stack space, as the default stack size was 4k.
  • The buffer pool case may depend on exactly how it's implemented. If you know you need to pass around fixed-size buffers of a size known at compile time, dealing with a buffer pool is likely easier to demonstrate correct than a complete dynamic allocator. You just need to verify that buffers cannot be lost, and validate that your handling won't fail. There seem to be some good tips here: http://www.cotsjournalonline.com/articles/view/101217
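
A tiny sketch of the "declare it as a (static) global" point above, loosely in the spirit of u-boot's globally placed structure (the structure, its fields, and the MAX_ROUTES bound are invented for illustration):

```c
#include <stdint.h>

#define MAX_ROUTES 16u           /* bound fixed at design time */

/* All working state lives in one statically allocated structure;
 * no heap is involved at any point. */
struct nav_state {
    uint32_t route_count;
    struct {
        int32_t lat;
        int32_t lon;
    } routes[MAX_ROUTES];
};

static struct nav_state g_nav;   /* "static global": file-scope encapsulation */

int nav_add_route(int32_t lat, int32_t lon)
{
    if (g_nav.route_count >= MAX_ROUTES) {
        return -1;               /* design limit reached: reject, don't grow */
    }
    g_nav.routes[g_nav.route_count].lat = lat;
    g_nav.routes[g_nav.route_count].lon = lon;
    g_nav.route_count++;
    return 0;
}
```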

Really, though, I think your answers might be found in joining http://www.do178site.com/

Arthur Shipkowski
+2  A: 

I've worked in a DO-178B environment (systems for airplanes). From what I understand, the main reason for not allowing dynamic allocation is certification. Certification is done through tests (unit, coverage, integration, ...). With those tests you have to prove that the behavior of your program is 100% predictable, nearly to the point that the memory footprint of your process is the same from one execution to the next. As dynamic allocation is done on the heap (and can fail), you cannot easily prove that (I imagine it would be possible if you mastered all the tools, from the hardware to every piece of code written, but ...). You do not have this problem with static allocation. That is also why C++ was not used at the time in such environments (this was about 15 years ago; that might have changed ...).

Practically, you have to write a lot of struct pools and allocation functions that guarantee deterministic behavior. You can imagine a lot of solutions. The key is that you have to prove (with TONS of tests) a high level of determinism. It's easier to prove that your hand-crafted development works deterministically than to prove that Linux + gcc is deterministic in allocating memory.

Just my 2 cents. It was a long time ago and things might have changed, but with a certification like DO-178B, the point is to prove your app will work the same way every time, in any context.

neuro