views: 157
answers: 8

So in an embedded systems unit that I'm taking at uni next year, we will learn that dynamic data structures are a bad thing to have in an embedded system program, but the lecture notes don't go into why.

Now I'm working on a moderate-scale embedded systems 'LURC' controller, which mostly just takes advantage of the peripherals of the "Butterfly" demo board for the ATmega169. It produces 4 PWM signals to control servos and an ESC, and also drives the 9-segment LCD screen.

Now I can't think of any better way to store instructions as they are received via USART serial than a queue, especially for things where I'll need to wait until an unknown amount of data has been received: e.g. a string to display on the LCD screen.

So why don't you use dynamic data structures on an embedded systems microcontroller? Is it just that you're in a heavily memory-restricted environment and have to be sure your mallocs are succeeding?

A: 

I would say both the lack of memory and the problem of a failed malloc. The latter is more of an issue, since you do not have an OS/interface to rescue the system from such a failure. It is very dangerous to use a function that can bring your complete system, which may be running headless, to a screeching halt (or perhaps cause a reset, which is still bad).
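
For instance, if you do call malloc() on such a system, every call has to come with a local decision about what to do when it fails, because there is no OS to catch it for you. A minimal sketch of that idea (the error hook and its behaviour are hypothetical and board-specific):

    #include <stdlib.h>

    /* Hypothetical board-specific hook: e.g. flash an error LED, log the
       fault, then spin until the watchdog resets the device. */
    static void allocation_failed(void)
    {
        for (;;)
            ;                       /* nothing else can rescue a headless system */
    }

    void *checked_malloc(size_t n)
    {
        void *p = malloc(n);
        if (p == NULL)
            allocation_failed();    /* no OS, no user at a console, no second chance */
        return p;
    }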

CookieOfFortune
+1  A: 

Well, many smaller microcontrollers don't have anything like an MMU, or an OS with a nice heap for you to work with.

For those that do, as long as you keep a sane bound on the amount of memory you are asking for, I don't really see a huge problem with it.

However, many embedded systems are also real-time systems. If your application has hard deadlines for how long it can take to run, you will have trouble with dynamic allocations. Most heap implementations use algorithms that don't have a very well-bounded runtime. In some (perhaps rare) instances, they will take far longer to run than normal. There are some real-time heap implementations, but they aren't in very wide use. The general rule is to avoid any dynamic allocation or deallocation in a hard real-time system after initialization.
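
One common way to follow that rule is to do whatever allocation you need once, before the real-time loop starts, and never allocate or free after that. A rough sketch, with purely illustrative names and sizes:

    #include <stdint.h>
    #include <stdlib.h>

    static int16_t *samples;        /* allocated once at startup, never freed */

    void system_init(void)
    {
        /* All heap use happens here, before any deadlines apply. */
        samples = malloc(256 * sizeof *samples);
        if (samples == NULL)
            for (;;) ;              /* fail at startup, where it is still obvious */
    }

    void control_loop(void)
    {
        for (;;) {
            /* Time-critical work reads and writes samples[], but never calls
               malloc() or free(), so its worst-case timing stays bounded. */
        }
    }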

T.E.D.
A: 

My impression is that on an embedded system, I know exactly how much memory is available, and I'm allowed to use exactly 100% of it; there is no need to leave a bit for other (concurrently running) programs, but there is also no virtual memory available to give me the 101st percent. So for a queue, I can easily calculate that I have space for (say) 981 records; so I create an array for those records, and if I ever need a 982nd record, I'm out of luck and must find a way to fail gracefully.
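
A minimal sketch of that approach (the record layout and the 981-record figure are purely illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {                /* illustrative record layout */
        uint8_t cmd;
        uint8_t param;
    } record_t;

    #define MAX_RECORDS 981         /* whatever the RAM budget works out to */

    static record_t records[MAX_RECORDS];   /* reserved at link time */
    static uint16_t record_count;

    bool enqueue(record_t r)
    {
        if (record_count >= MAX_RECORDS)
            return false;           /* record 982: refuse it and fail gracefully */
        records[record_count++] = r;
        return true;
    }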

ammoQ
+3  A: 

There are a number of reasons not to use malloc (or equivalent) in an embedded system.

  • As you mentioned, it is important that you do not suddenly find yourself out of memory.
  • Fragmentation - embedded systems can run for years, which can cause a severe waste of memory due to fragmentation.
  • It's not really required. Dynamic memory allocation allows you to reuse the same memory to do different things at different times. Embedded systems tend to do the same thing all the time (except at startup).
  • Speed. Dynamic memory allocation is either relatively slow (and gets slower as the memory gets fragmented) or fairly wasteful (e.g. the buddy system).
  • If you are going to use the same dynamic memory for different threads and interrupts, then the allocation/freeing routines need to perform locking, which can cause problems servicing interrupts fast enough (see the sketch below).
  • Dynamic memory allocation makes it very difficult to debug, especially with some of the limited/primitive debug tools available on embedded systems. If you statically allocate stuff, you know where everything is all the time, which makes it much easier to inspect the state of something.

Best of all - if you do not dynamically allocate memory then you can't get memory leaks.
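
As a side note on the locking point above: on a single-core microcontroller that "lock" usually amounts to masking interrupts around the allocator's bookkeeping, which adds directly to interrupt latency. A minimal AVR-flavoured sketch of the pattern (the shared counter and function name are illustrative):

    #include <avr/io.h>             /* SREG */
    #include <avr/interrupt.h>      /* cli() */
    #include <stdint.h>

    static volatile uint16_t free_blocks;   /* allocator bookkeeping shared with ISRs */

    void release_block(void)
    {
        uint8_t sreg = SREG;        /* save the current interrupt state       */
        cli();                      /* interrupts stay masked for the whole   */
        free_blocks++;              /* critical section, which adds directly  */
        SREG = sreg;                /* to the worst-case interrupt latency    */
    }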

Dipstick
While I am accepting this as the answer, it's still incomplete, as it misses one of the big reasons (however, it covers most of the others): dynamically allocated memory can very easily overwrite the CPU stack.
Oxinabox
A: 

Dynamic data structures on embedded systems are a bit like pointers in C++. Pointers (in C++) are evil. But sometimes they're the only option; sometimes they're the lesser evil; and sometimes it's OK to avoid them entirely. In cases where there is a good reason to use them, there can be "good" ways and "bad" ways to do this.

Statically allocated variables and arrays are much faster to allocate and deallocate, and can be faster to access, than dynamically allocated data. See this answer.

Dynamically allocated data (by which I mean malloc()ed or similar) also incurs space overhead to keep track of the allocations - several bytes per allocation at least, and this space can be very valuable on embedded systems!

Memory leaks are a massive problem on embedded systems, which can sometimes be expected to run for years. Avoiding dynamic allocation is prudent from this perspective.

Embedded devices usually have fairly dependable specifications. You know what the transfer rate is, you know how fast you can deal with information, and so on. In your example, the solution is to use a fixed-size buffer as a circular queue. Make the buffer large enough for whatever your device needs to be able to handle (and perhaps a tiny bit more). If too much data arrives, it's probably due to a fault or interference somewhere else anyway, so there's not much point in holding on to and trying to use all that data.
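
A minimal sketch of such a fixed-size circular queue for received serial bytes might look like this (the 64-byte size and the function names are illustrative; in practice rx_put() would typically be called from the USART receive interrupt):

    #include <stdbool.h>
    #include <stdint.h>

    #define RX_BUF_SIZE 64u         /* power of two, sized for the worst-case burst */

    static volatile uint8_t rx_buf[RX_BUF_SIZE];
    static volatile uint8_t rx_head, rx_tail;   /* single-byte indices: atomic on AVR */

    /* Called for each received byte, e.g. from the USART RX interrupt. */
    bool rx_put(uint8_t byte)
    {
        uint8_t next = (uint8_t)((rx_head + 1u) & (RX_BUF_SIZE - 1u));
        if (next == rx_tail)
            return false;           /* full: drop the byte rather than allocate more */
        rx_buf[rx_head] = byte;
        rx_head = next;
        return true;
    }

    /* Called from the main loop; returns true if a byte was available. */
    bool rx_get(uint8_t *byte)
    {
        if (rx_tail == rx_head)
            return false;           /* empty */
        *byte = rx_buf[rx_tail];
        rx_tail = (uint8_t)((rx_tail + 1u) & (RX_BUF_SIZE - 1u));
        return true;
    }

Memory usage is fixed at link time, and if the queue fills up the new byte is simply dropped - the graceful failure mode the other answers mention.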

Artelius
A: 

It depends, as the meaning of "embedded" has, in my view, broadened in the last 4 years.

Traditionally, embedded devices had microcontrollers on them and generally no operating system, no protected memory, and a single thread. You would have to be extremely careful with malloc'd memory because it's so easy to run out of it when you've only got, for example, 32KB of it available. So generally, we'd write our code with fixed-size buffers and never use malloc, or if it was ever used, use it very sparingly.

In the last few years we are seeing what are essentially single-chip PCs or micro boards that are easily as powerful as our old Pentium PCs. RAM is so cheap and so small now that the memory limitations are nothing like they were. These boards also often run embedded Linux or WinCE, so we now have the ability to use dynamic memory more liberally.

With this comes the ability to use a much wider range of languages, including Java, C++, many scripting languages, and other higher-level languages that provide buffer-overrun protection and exception handling. So really, those old problems are not what they used to be.

I suspect that with all this newly available hardware comes a new range of issues.

Matt H
32 KB of RAM! You were lucky. I have 1 KB of RAM (all 26 of us) which I have to share with the CPU stack.
Oxinabox
A: 

I don't know about the Atmel MEGA169, but the MEGA168, which I suppose is related to the 169, has only 1024 bytes of SRAM. It also has only 16k of program ROM, and is relatively slow compared to modern computers. So it is limited in memory, program size and speed.

In my experience with AVR assembler programming, it takes effort to cram as much functionality into the PIC as possible. The amount of overhead needed to use dynamic data structures (extra memory use, extra instructions needed to pull and push the data from SRAM, keeping track of which dynamic variable resides where, moving memory blocks around when variables 'in between' get deleted...) just doesn't justify the merits.

So even if the compiler implements it, I'd stick with static data structures for performance.

littlegreen
PIC? OP is asking about AVRs.
iter
I was also talking about AVRs. My bad.
littlegreen
A: 

There is nothing wrong with dynamic memory in an embedded environment per se, though it generally doesn't buy you much there.

In my opinion it's a very good idea to use ring buffers (this is a very common data structure for I/O drivers etc.). That way, if for some reason you are unable to service your queue, memory usage is still deterministic.

Using some macros, it's possible to allocate variable-size structures at compile time.

For instance -

    // Use a C99 flexible array member so that the element storage follows the
    // header, letting the whole structure be sized at compile time.
    typedef struct ring_buf_t {
        int element_sz,
            buffer_sz,
            head,
            tail;
        char data[];    /* element storage lives directly after the header */
    } ring_buf_t;

    #define RING_BUF_ALLOC_SZ(element_sz, n_elements) (sizeof (ring_buf_t) + \
                                                       (element_sz) * (n_elements))

    // type_to_buffer stands for whatever element type you are queueing.
    char backing_buf[RING_BUF_ALLOC_SZ (sizeof (type_to_buffer), 16)];

    // ring_buf_init() casts backing_buf to ring_buf_t * and initialises the members.
    ring_buf_t *ring_buf_init (int element_sz, int n_elements, char *backing);
    ring_buf_t *ring_buffer = ring_buf_init (sizeof (type_to_buffer), 16, backing_buf);


This pattern gives a dynamically sizeable buffer with guaranteed memory usage. Of course, other kinds of data structures (lists, queues, etc.) can be implemented in the same way.

Hayman