Dynamic memory allocation after initialization. The memory pool should remain static after the system is up and running.
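A minimal sketch of the alternative: a fixed-size block pool carved out at startup, so nothing grows after init (the `pool_*` names are just illustrative, not from any particular RTOS):

```c
#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE   64u
#define BLOCK_COUNT  32u

/* All memory is reserved at link time; nothing grows after init. */
static uint8_t  pool_storage[BLOCK_COUNT][BLOCK_SIZE];
static uint8_t *free_list[BLOCK_COUNT];
static size_t   free_top;

void pool_init(void)
{
    for (size_t i = 0; i < BLOCK_COUNT; i++) {
        free_list[i] = pool_storage[i];
    }
    free_top = BLOCK_COUNT;
}

void *pool_alloc(void)
{
    /* Fails loudly (returns NULL) instead of fragmenting a heap. */
    return (free_top > 0) ? free_list[--free_top] : NULL;
}

void pool_free(void *block)
{
    if (block != NULL && free_top < BLOCK_COUNT) {
        free_list[free_top++] = (uint8_t *)block;
    }
}
```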
Trying to develop without access to the actual hardware you're developing for.
An important thing in embedded systems is to evaluate the technology, both software (compiler, libraries, OS) and hardware (chipsets), independently of your application. Skipping test beds for these is dangerous: either buy evaluation kits or build your own test beds.
Assume endianness will be the same forever.
(Extend that to register sizes and anything else in the hardware specification.)
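One way to avoid baking in either assumption is to (de)serialize byte by byte with fixed-width types, so the code doesn't care about the host's byte order or word size. A small sketch (function names are just illustrative):

```c
#include <stdint.h>

/* Read a 32-bit little-endian value from a byte stream,
 * regardless of the host CPU's endianness or register width. */
static uint32_t read_u32_le(const uint8_t *p)
{
    return (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
}

static void write_u32_le(uint8_t *p, uint32_t v)
{
    p[0] = (uint8_t)(v);
    p[1] = (uint8_t)(v >> 8);
    p[2] = (uint8_t)(v >> 16);
    p[3] = (uint8_t)(v >> 24);
}
```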
- Skimping on the logging facility. Embedded systems are hard to debug and you need lots of logging.
- Not having the ability to allow levels of logging. One system out of many will exhibit strange behaviours and you need to set the debug level of that system's logging to a more verbose one (see the logging sketch after this list).
- Not providing some kind of output port to allow logging to e.g. a console.
- Not having the ability to "step through" the code.
- Not having the ability to profile the code so you can see which bits need to be optimised, e.g. in assembler.
- Not developing some kind of "sanity test" so you can quickly check a device works once loaded and before shipping.
- Basing the design on some "home grown" OS
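A minimal sketch of leveled logging with a swappable output port (the `log_putc` hook and the names below are made up for illustration; on a target you would retarget the hook to a UART, SWO, or a RAM buffer instead of `putchar`):

```c
#include <stdarg.h>
#include <stdio.h>

typedef enum { LOG_ERROR, LOG_WARN, LOG_INFO, LOG_DEBUG } log_level_t;

/* Raised at run time (e.g. via a console command) on the one unit
 * that is misbehaving, without rebuilding the firmware. */
static log_level_t log_threshold = LOG_WARN;

/* Output hook: retarget to a UART, SWO, RAM buffer, etc. */
static void log_putc(char c) { putchar(c); }

void log_set_level(log_level_t level) { log_threshold = level; }

void log_msg(log_level_t level, const char *fmt, ...)
{
    char buf[128];
    va_list ap;

    if (level > log_threshold) {
        return;                      /* filtered out at this verbosity */
    }
    va_start(ap, fmt);
    vsnprintf(buf, sizeof buf, fmt, ap);
    va_end(ap);
    for (const char *p = buf; *p; p++) {
        log_putc(*p);
    }
}
```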
Without defining 'embedded programming' a bit more, it's impossible to say what's good or bad practice.
Many of the techniques you might use to program an 8-bit micro in a dodgy non-standard dialect of 'C' would be completely inappropriate on a CE or XPe platform, for example.
Abstraction is an (over-)expensive luxury in many cases, so 'avoiding it' might be good rather than bad.
Here are a few:
Don't design an easily explainable architecture that your developers, managers and customers can all understand.
An embedded system is almost always a cost sensitive platform. Don't plan on the HW getting slower (cheaper) and don't plan for new features in the critical data path.
Most embedded systems are "headless" (no keyboard, mouse or any other HID). Don't plan in your schedule to write debugging tools, and don't assign at least one developer to maintain them.
Be sure to underestimate how long it will take to get the prompt. That is how long it takes to get the core CPU to a point where it can talk to you and you to it.
Always assume HW subsystems work out-of-the-box, like memory, clocks and power.
- Uninitialized exception vectors (you know, for the ones that "will never be reached")
- Say it with me: Global variables. Especially ones shared between ISRs and tasks (or foreground loops) without protection.
- Failure to use "volatile" where necessary.
- Having routines that DisableInterrupts() and then EnableInterrupts() paired up. Got that? Not RestoreInterrupts(), but ENABLE. Yeah, nesting. (See the sketch after this list.)
- No GPIOs to toggle when testing.
- No testpoints on board.
- No LEDs or serial port for viewing run-time system status.
- No measurement of how busy/idle the CPU is.
- Use of inline assembly for all but the most dire of cases. Write a quick callout.
- Using for (i = 0; i < 1000; i++) { } to "delay a bit". Yeah, that's not gonna bite you in a hundred different ways....
- Not using const everywhere possible to preserve RAM and reduce boot time (no copying / init of variables)
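On the volatile and RestoreInterrupts points above, a rough sketch of sharing a counter between an ISR and the foreground loop, with a save/restore critical section so nesting works. It assumes a Cortex-M part with a CMSIS device header included (for `__get_PRIMASK`, `__set_PRIMASK`, `__disable_irq`); substitute your port's equivalents:

```c
#include <stdint.h>

/* Shared between the ISR and the foreground loop: must be volatile
 * so the compiler re-reads it instead of caching it in a register. */
static volatile uint32_t tick_count;

/* Save and RESTORE the interrupt state rather than blindly re-enabling,
 * so critical sections can nest safely. */
static inline uint32_t critical_enter(void)
{
    uint32_t primask = __get_PRIMASK();   /* CMSIS intrinsic */
    __disable_irq();
    return primask;
}

static inline void critical_exit(uint32_t primask)
{
    __set_PRIMASK(primask);               /* restore, don't force-enable */
}

void SysTick_Handler(void)
{
    tick_count++;                         /* single writer: the ISR */
}

uint32_t ticks_snapshot(void)
{
    uint32_t saved = critical_enter();
    uint32_t t = tick_count;              /* consistent read w.r.t. the ISR */
    critical_exit(saved);
    return t;
}
```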
I've got a ton more but that should get us started....
OK round 2.... just a few more:
Don't use a watchdog timer (esp. the built-in one!)
Use floating point types & routines when scaled integer math would suffice
Use an RTOS when it's not warranted
Don't use an RTOS when it would really make sense
Never look at the generated assembly code to understand what's going on under the hood
Write the firmware so that it can't be updated in the field
Don't document any assumptions you're making
If you see something strange while testing / debugging, just ignore it until it happens again; it probably wasn't anything important like a brownout, a missed interrupt, a sign of stack corruption, or some other fleeting & intermittent problem
When sizing stacks, the best philosophy is to "start small and keep increasing until the program stops crashing, then we're probably OK" (see the stack-watermark sketch after this list)
Don't take advantage of runtime profiling tools like Micrium's uC/Probe (I'm sure there are others)
Don't include Power-On Self Tests of the Hardware before running the main app - hey, the boot code is running, what could possibly not be working?
Definitely don't include a RAM test in the POST (above) that you're not going to implement
If the target processor has an MMU, for all that is holy, don't use that scary MMU!!! Especially don't let it protect you from writes to code space, execution from data space, etc....
If you've been testing, debugging & integrating with a certain set of compiler options (e.g. no/low optimization), BE SURE TO TURN ON FULL OPTIMIZATION before your final release build!!! And whatever you do, don't re-test with optimization on. I mean, you've already tested for months - what could go wrong?!??!
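On the stack-sizing point: rather than guessing, one common approach is to fill each stack with a known pattern at startup and read back the high-water mark during soak testing. A rough sketch, where the stack base pointer and size come from whatever your linker script or RTOS provides:

```c
#include <stdint.h>
#include <stddef.h>

#define STACK_FILL  0xA5A5A5A5u

/* Fill the whole stack with a known pattern before the task starts. */
void stack_paint(uint32_t *stack_base, size_t words)
{
    for (size_t i = 0; i < words; i++) {
        stack_base[i] = STACK_FILL;
    }
}

/* Count untouched words from the base (the stack grows down toward it);
 * the result is how much headroom was never used. */
size_t stack_headroom_words(const uint32_t *stack_base, size_t words)
{
    size_t unused = 0;
    while (unused < words && stack_base[unused] == STACK_FILL) {
        unused++;
    }
    return unused;
}
```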
Somebody stop me before I hurt myself.
BTW, I realize not all of these are strictly specific to embedded development, but I believe each of them is at least as important in the embedded world as in the real world.
When making a schedule, go ahead & assume everything's going to work the first time.
Approach board bring-up without an oscilloscope and/or logic analyzer. Esp. the scope, that's never useful.
Don't consider the power supply during design. Issues like heat, efficiency, effects of ripple on ADC readings & system behavior, EMF radiation, start up time, etc.. aren't important.
Whatever you do, don't use a reset controller (the 5 cent IC type), just use an RC circuit (hopefully one with lots of high frequency AC noise coupled into it)
EMBRACE THE BIG BANG!!! Don't develop little pieces incrementally & integrate often, silly fool!!! Just code away for months, alongside co-workers, and then slap it all together the night before the big tradeshow demo!
Don't instrument code with debugging / trace statements. Visibility is bad.
Do lots of stuff in your ISRs. Bubble sorts, database queries, etc... Hey, chances are no one's gonna interrupt you, you have the floor, enjoy it buddy!!! (See the sketch below.)
Ignore board layout in a design. Let the autorouter go to town on those matched impedance traces and that high-current, high-frequency power supply. Hey, you have more important things to worry about, partner!!!
Use brand new, beta, unreleased, early-adopter silicon, especially if it's safety critical (aviation, medical) or high-volume (it's fun to recall 1 million units). Why go to Vegas when there's new silicon sampling on that 4-core, 300 MHz, 7-stage-pipeline chip?
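On the "do lots of stuff in your ISRs" point, a sketch of the usual alternative: the ISR only captures the event and the foreground loop does the heavy lifting. The `UART0_DATA_REG` register and `handle_byte` parser are hypothetical placeholders for your part and your code:

```c
#include <stdint.h>

#define RX_QUEUE_SIZE 64u   /* power of two so the masking below works */

extern volatile uint8_t UART0_DATA_REG;  /* hypothetical UART data register */
void handle_byte(uint8_t b);             /* your parser, defined elsewhere */

static volatile uint8_t  rx_queue[RX_QUEUE_SIZE];
static volatile uint32_t rx_head, rx_tail;

/* ISR: grab the byte, stash it, get out. No parsing, no printf. */
void UART_RX_IRQHandler(void)
{
    uint8_t byte = UART0_DATA_REG;
    uint32_t next = (rx_head + 1u) & (RX_QUEUE_SIZE - 1u);
    if (next != rx_tail) {               /* drop on overflow rather than block */
        rx_queue[rx_head] = byte;
        rx_head = next;
    }
}

/* Foreground loop: drain the queue and do the real work here. */
void poll_uart(void)
{
    while (rx_tail != rx_head) {
        uint8_t byte = rx_queue[rx_tail];
        rx_tail = (rx_tail + 1u) & (RX_QUEUE_SIZE - 1u);
        handle_byte(byte);
    }
}
```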
Write your FW module to be totally generic, accepting every possible parameter as a variable, even though the layer above you will always call it with the same parameters.
Use memcpy everywhere in the code even though you have a DMA engine in the system (why bother the HW).
Design a complex layered FW architecture and then have a module directly access global variables owned by higher-level modules.
Choose an RTOS but don't bother to test its actual performance (can't we trust the numbers given by the vendor?)
Use multiple processors in your solution and make sure they have opposite endianness. Then make sure that the interface between them is one of them having direct access to the other's memory.
Yes, I've programmed that architecture before.
Printf.
If your tracing facility requires a context switch and/or interrupts, you'll never be able to debug anything even vaguely timing related.
Write to a memory buffer (bonus points for memcpy'ing enums instead of s(n)printf), and read it at another time.
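A sketch of that kind of in-memory trace: fixed-size binary records (an event enum plus a timestamp) dropped into a ring buffer, with no formatting in the hot path; you dump and decode the buffer later over JTAG or a debug command. The names and the `timer_now` counter are illustrative:

```c
#include <stdint.h>

typedef enum { EV_BOOT, EV_ISR_ENTER, EV_ISR_EXIT, EV_STATE_CHANGE } trace_event_t;

typedef struct {
    uint32_t timestamp;     /* e.g. a free-running timer count */
    uint16_t event;         /* trace_event_t */
    uint16_t arg;           /* small payload: state number, vector, ... */
} trace_rec_t;

#define TRACE_DEPTH 256u    /* power of two */

static trace_rec_t trace_buf[TRACE_DEPTH];
static uint32_t    trace_idx;

extern uint32_t timer_now(void);   /* hypothetical free-running counter */

/* Cheap enough to call from ISRs; no printf, no heap, no blocking.
 * Wrap the index update in a critical section if several contexts log. */
void trace(trace_event_t ev, uint16_t arg)
{
    trace_rec_t *r = &trace_buf[trace_idx & (TRACE_DEPTH - 1u)];
    r->timestamp = timer_now();
    r->event     = (uint16_t)ev;
    r->arg       = arg;
    trace_idx++;
}
```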
This is perhaps more of a hardware answer -- but for starting new projects from scratch, underestimating the resource requirement is a big problem, especially when working on small self-contained microcontrollers with no easy way to expand code/storage size.
Don't:
Leave unused interrupt vectors which point nowhere (after all, they're never going to be triggered, so where's the harm in that...), rather than having them jump to a default unused-interrupt handler which does something useful (see the sketch after this list).
Be unfamiliar with the specifics of the processor you're using, especially if you're writing any low-level drivers.
Pick the version of a family of processors with the smallest amount of flash, on the grounds that you can always "upgrade later", unless costs make this unavoidable.
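On the unused-vectors point, a sketch of a catch-all handler that at least leaves a breadcrumb before resetting instead of jumping into the weeds. The GCC-style section attribute, the `.noinit` section name, and the `read_active_vector`/`system_reset` helpers are placeholders for whatever your toolchain and part provide (e.g. reading IPSR and requesting a system reset on Cortex-M):

```c
#include <stdint.h>

/* Placed in a RAM section that startup code does not zero, so the
 * breadcrumb survives the reset (section name depends on your linker script). */
__attribute__((section(".noinit")))
volatile uint32_t last_unexpected_vector;

extern uint32_t read_active_vector(void);  /* hypothetical: e.g. read IPSR */
extern void     system_reset(void);        /* hypothetical: watchdog or SYSRESETREQ */

/* Every unused entry in the vector table points here instead of nowhere. */
void Default_Handler(void)
{
    last_unexpected_vector = read_active_vector();  /* leave a breadcrumb */
    system_reset();                                 /* fail to a known state */
}
```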
That's not just for embedded systems, but spending all this time finding bugs (debugging) instead of avoiding bugs with cool stuff like code reviews is definitely one commonly applied worst practice.
Another one is letting one huge processor do all the work instead of breaking the problem into smaller problems, e.g. with several little processors. Remember COCOMO?
It depends a lot on the type of controller you are programming for. Sometimes cost is the most important thing and you are trying to get by with as little as possible. That's the boat I'm usually in. Here are some worst practices I've used:
- Don't focus on improving your processes. Just try a little harder next time. Later when we aren't busy releasing new products hastily while supporting all these bugs in the field, we can worry about that stuff.
- Avoid designing an engineering tool to make your life easier, and if you do build one, don't let it send invalid inputs to the device
- Don't question optimization. It's magic. The compiler knows what it's doing. There will never be a compiler bug, especially not for your custom 7-bit PIC submicrocontroller. Too many people would notice, right?
- Divide and multiply like you are running a physics engine; don't worry about overflow, loss of precision, or rounding down to zero (see the sketch at the end of this list).
- If your timing seems to work, don't check whether you are off by one or whether you drift over time. You played percussion in high school; you would notice the difference between 7200000 clock cycles and 7200001.
- Rely on system level testing from a group that doesn't know anything about your firmware
- Work on as many different devices as possible. Have several debugger sessions going with different development environments. Work on developing one product while bench testing another and trying to reproduce a field issue on the third.
- Release a new version of code in a hurry because you only changed one thing and you probably didn't break it. The production line is down, we can't waste any time!
- Don't have any sort of test to warn you if optimization has been turned off. It probably won't be, right? The new IDE version you just installed couldn't possibly have broken that setting.
- Write the code just well enough to work. Spend 75% of the time getting it halfway there.
- Don't have any input into the design of the features. Allow any feature to gather days of state information. Have no way of injecting this state information for a test. This will give you free time when trying to reproduce bugs people have seen in the field, and the production guys will appreciate their time off as well.
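On the divide-and-multiply point, a small example of the usual fix: widen first, multiply before dividing, and round, instead of letting the intermediate result truncate to zero or overflow. The 12-bit ADC and 3300 mV reference are just for illustration:

```c
#include <stdint.h>

/* Scale an ADC reading (0..4095) to millivolts with a 3300 mV reference. */
uint32_t adc_to_millivolts(uint16_t raw)
{
    /* Naive version: raw / 4095 * 3300 rounds down to 0 for every input,
     * and raw * 3300 overflows if done in 16-bit arithmetic.            */

    /* Widen first, multiply before dividing, and round the division. */
    return ((uint32_t)raw * 3300u + 2047u) / 4095u;
}
```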