tags:

views:

281

answers:

11

I am a C++ programmer and over the years have been subjected to hearing the notion that the STL is not good for use in embedded environments, and hence is usually prohibited in embedded projects. I believe libraries like the STL and Boost are far more powerful and provide a much faster and less error-prone means of development (of course the syntax is a little intimidating, but once past that I think it's a real treasure). Also, I find the claim that the STL is heavy and increases the final footprint of the code absurd: since it is templatized, one only gets the code one actually instantiates, not the entire STL.

My question is: what are the reasons for this popular (at least most people around me think so) notion that the STL is not for embedded environments?

I do see a question of a similar nature, but here I am expecting help in pointing out the pros and cons of the STL in embedded environments in general.

Edit: so here I will add up the points as the replies come in:
1. Portability issues
2. Coping with large dynamic allocations by STL containers
3. STL is hard to debug
4. Deep function calls in the STL result in low performance for compilers weak at inlining (the power of functors is useless!)

+1  A: 

Many think that (for many reasons, such as portability) C++ isn't a good fit for an embedded environment. There are many types of embedded environments and STL is certainly OK for some of them.

In general, 'more powerful' is always a phrase to fear when you need to choose anything for a resource constrained environment, as you often want something less powerful and more controllable. Especially if 'more powerful' means the developer (or whoever maintains the code later) would have less understanding of the underlying implementation.

Ofir
+2  A: 

I think the choice depends on your targeted platform(s). If you have a correct C++ compiler and do not mind the dynamic memory allocation that comes with using containers, I don't see any problem.

Nikko
+7  A: 

That depends on what you mean by embedded. On 8-bit Atmel systems, there's very little RAM. So little that you can't really have a reasonable malloc. In that case, you want to manage memory very explicitly, probably with static arrays of the types you need. If you've done that, you basically have no need for most of the STL.
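To make "static arrays of the type you need" concrete, here is a minimal sketch (the name and interface are invented for illustration, not from any particular library) of a fixed-capacity container whose storage lives inside the object, so it never touches malloc/new:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical fixed-capacity stand-in for std::vector: storage is a
// plain array inside the object, so no heap allocation ever happens.
template <typename T, std::size_t N>
class StaticVector {
    T data_[N];
    std::size_t size_ = 0;
public:
    bool push_back(const T& v) {
        if (size_ == N) return false;   // full: caller decides what to do
        data_[size_++] = v;
        return true;
    }
    T& operator[](std::size_t i) { return data_[i]; }
    std::size_t size() const { return size_; }
    static std::size_t capacity() { return N; }
};
```

The failure mode is explicit (push_back returns false when full), which is usually what you want on a chip with no malloc to fall back on.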

On ARM systems, you've got plenty of RAM. Use the STL!

TokenMacGuy
+1 for pointing out that "embedded" covers a wide range of systems from tiny microcontrollers with a few hundred *bytes* of RAM all the way up to high end systems with many *GB* of RAM - as a rule of thumb STL may be appropriate for mid- to high-end embedded systems and probably not for low-end systems.
Paul R
It doesn't take *much* RAM to have good results with dynamic collections. 10 kilobytes is probably usable.
TokenMacGuy
+6  A: 

The STL has quite a few problems (as documented here by EASTL); on an embedded or small-scale system, the main problem is generally the way in which it manages its memory. A good example of this was the PSP port of Aquaria.

My advice, though, is to test first before following assumptions; if the tests show you're using just too much space or too many processor cycles, then maybe an optimization or two could push it into the realm of 'usable'.

Finally, Boost is template based, so if you're looking at the size of generated template code, it will suffer the same as the STL.

Edit/Update:

To clear up my last statement (which was just referring to Boost vs. the STL): in C, you can (ab)use the same code to do the same job on different structures sharing the same header (or layout), but with templates, each type may get its own copy of the code (I've never tested whether any compilers are smart enough to merge these when 'optimize for size' is enabled), even when it is exactly the same as an already-generated copy at the machine/assembly level. Boost has the advantage of being a lot cleaner to read and having far more things crammed into it, but that can lead to long compile times due to a copious amount of (sometimes huge) headers. The STL gains because you can pass your project around without requiring a download of Boost to accompany it.
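A tiny illustration of the C-style trick versus the template version (both functions are invented for the example): the first is one compiled function for all element types; the second may be stamped out once per type by the compiler:

```cpp
#include <cassert>
#include <cstddef>

// C style: a single function handles any element type via void*,
// an element size, and a comparison callback (qsort-style).
const void* max_element_c(const void* base, std::size_t count, std::size_t size,
                          int (*cmp)(const void*, const void*)) {
    const char* best = static_cast<const char*>(base);
    const char* p = best;
    for (std::size_t i = 1; i < count; ++i) {
        p += size;
        if (cmp(p, best) > 0) best = p;
    }
    return best;
}

// Template style: type safe, but the compiler may emit a separate
// copy of this function for every T it is instantiated with.
template <typename T>
const T* max_element_t(const T* base, std::size_t count) {
    const T* best = base;
    for (std::size_t i = 1; i < count; ++i)
        if (base[i] > *best) best = &base[i];
    return best;
}
```

Whether the per-type copies get folded back together is exactly the "identical COMDAT folding" question discussed below.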

Necrolis
@Necrolis Thanks for the link and the answer! I don't however understand "suffer the same as STL"; I mean it (Boost as well as the STL) generates only the code which is called for, so that's a good thing, right? Did you mean "behave" instead of "suffer"?
Als
+1 for the link to EASTL. But I wonder how much the STL has changed since then.
drhirsch
@drhirsch: the implementation may have changed, but the memory management hasn't for the compilers I know of; i.e., the memory grabbed for the STL containers is not released, and more may be grabbed than you'd like. In a memory-constrained environment that can be annoying.
Matthieu M.
there is not a lot on algorithms in that link, so it may be that the STL <algorithm> functions used with static arrays are usable where the containers aren't? Everyone is focusing on the data structures, but that's only half the picture.
jk
The "memory" idea is a bit flawed anyway. Why would you ever want to release memory on an embedded system? So the user can run another program? Not on an embedded system. And those embedded systems that are complex enough to run two apps tend to run real OSes.
MSalters
@MSalters: that really depends on the embedded system and what it's trying to do; even so, you'll still need a custom allocator for reserved memory sections, because `::new` might start allocating things it shouldn't. @jk: that's a good point; I assumed that they didn't find problems with the algorithms except for the few things mentioned in the paper. @Als: I'd say both work, depending on the situation; if you have 5 structs, all the same, but using templates, the compiler might (probably) not coalesce the code. I'll make it a little clearer in my answer :)
Necrolis
+2  A: 

I came across this presentation: Standard C++ for Embedded Systems Programming

The bulk of the complexity with templates is in the compiler rather than in the runtime system, and that's partly where the problem lies, as we don't know for sure how much optimization the compiler is able to accomplish. In fact, C++ code based on the STL is supposed to be more compact and faster than C++ code not using templates, and even than C code!

kartheek
+1 for the optimizations; it might be worth mentioning, though, that the resulting library/binary might be larger. Once again, one needs to measure.
Matthieu M.
A: 

For me, the only good reason not to use a library is if it does not fit within your constraints, or if its size could become a problem later. If that is not a problem for you, go for it. In any case, you cannot do better.

ralu
A: 

I haven't experienced any downside to using the STL in embedded systems and I plan to use it in my current project. Boost as well.

ExpatEgghead
+2  A: 

There is some logic behind the notion that templates lead to larger code. The basic idea is pretty simple: each instantiation of a template produces essentially separate code. This was particularly problematic with early compilers -- since templates (typically) have to be put into headers, all the functions in a template are inline. That means if you have (for example) vector<int> instantiated in 10 different files, you (theoretically) have 10 separate copies of each member function you use, one for each file in which you use it.

Any reasonably recent compiler (less than, say, 10 years old) will have some logic in the linker to merge these back together, so instantiating vector<int> across 10 files will only result in one copy of each member function you used going into the final executable. For better or worse, however, once it became "known" that templates produce bloated code, a lot of people haven't looked again to see whether it remained true.

Another point (that remains true) is that templates can make it easy to create some pretty complex code. If you're writing things on your own in C, you're generally strongly motivated to use the simplest algorithm, collection, etc. that can do the job -- sufficiently motivated that you're likely to check into details like the maximum number of items you might encounter to see if you can get away with something really simple. A template can make it so easy to use a general purpose collection that you don't bother checking things like that, so (for example) you end up with all the code to build and maintain a balanced tree, even though you're only storing (say) 10 items at most so a simple array with linear searches would save memory and usually run faster as well.
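To illustrate that last point, here is a hypothetical flat replacement for a 10-entry map (names invented for the example): nothing but an array and a linear search, no node allocation, no tree rebalancing code in the binary:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <utility>

// Hypothetical lookup table: for ~10 entries, a flat array plus
// std::find_if is smaller and often faster than a balanced tree
// (std::map), and it allocates nothing.
struct SmallMap {
    std::pair<int, int> entries[10];
    std::size_t count = 0;

    void insert(int key, int value) {
        if (count < 10) entries[count++] = {key, value};
    }
    const int* find(int key) const {
        const std::pair<int, int>* end = entries + count;
        const std::pair<int, int>* it = std::find_if(
            entries, end,
            [key](const std::pair<int, int>& e) { return e.first == key; });
        return it == end ? nullptr : &it->second;
    }
};
```

The point is not that this is always better, but that in C you would have been forced to check whether something this simple suffices, while a template makes the heavyweight option the path of least resistance.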

Jerry Coffin
Regarding the "bloat": even though it is less type safe, the use of `void*` means that a single version of the collections/functions exists, whereas with templates there is at least one instantiation per type. I don't worry about it for server code, but in a memory-constrained environment it might be a bother.
Matthieu M.
Actually, many compilers (even MSVC++ for Win32, where bloat isn't a big worry) are able to fold multiple functions together, as long as they're binary identical. `std::list<int>` and `std::list<float>` are likely to share most members (if `sizeof(int)==sizeof(float)` of course).
MSalters
@MSalters: Yes, like I said, pretty nearly every C++ compiler within the last 10 years or so can do that. If memory serves, MS added it in VC++ 5.0, about 15 years ago or so (it might have even been earlier -- my memory of the timing isn't even close to clear anymore).
Jerry Coffin
@Jerry: I understood your answer to refer to the elimination of multiple instantiations of the same function in different TU's. That's pretty much necessary as a function must have a single address.
MSalters
@MSalters: Yes -- at least if memory serves, that was there even sooner (I believe VC++ 4.0, though the same warning about my memory stands).
Jerry Coffin
+2  A: 

As people have said there is a wide range of "embedded" systems. I'll give my perspective, which focuses on safety critical and hard real time systems.

Most guidelines for safety critical systems simply forbid the use of dynamic memory allocations. It is simply much easier and safer to design the program if you never have to worry about a malloc/new call failing. And for long running systems where heap fragmentation can occur, you can't easily prove that the memory allocation won't fail, even on a chip / system with large amounts of memory (especially when the device must run for years without restarting).
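A common pattern under such guidelines (sketched here with invented names, not tied to any particular standard) is a fixed pool: all slots are reserved at compile time and handed out through a freelist, so there is no heap call that can fail at an inconvenient moment:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical fixed pool: N slots reserved statically, with a freelist
// threaded through the unused slots. acquire() either succeeds in O(1)
// or returns nullptr; no malloc/new is ever involved.
template <typename T, std::size_t N>
class Pool {
    union Slot { T obj; Slot* next; };
    Slot slots_[N];
    Slot* free_ = nullptr;
public:
    Pool() {
        for (std::size_t i = 0; i < N; ++i) {
            slots_[i].next = free_;
            free_ = &slots_[i];
        }
    }
    T* acquire() {
        if (!free_) return nullptr;        // pool exhausted: a checkable,
        Slot* s = free_;                   // deterministic failure
        free_ = s->next;
        return &s->obj;
    }
    void release(T* p) {
        Slot* s = reinterpret_cast<Slot*>(p);  // obj is at offset 0
        s->next = free_;
        free_ = s;
    }
};
```

Because the worst case (an empty freelist) is visible at design time, you can size N from your requirements and prove the "allocation" can never fail in the field.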

In scenarios where there are tight timing deadlines, the uncertainties involved in dynamic memory allocation and instantiation of complex objects are frequently too large to deal with. This is why many programmers who work in these areas stick with C. You can look at C source and guess how long an operation takes. With C++, it is easier for simple looking code to take longer than it appears to. Those who use C++ in such systems tend to stick to simple plain vanilla code. And code which usually is fast, but occasionally takes a long time to execute is worse than code that is slower but consistent.

What I've done in larger projects is isolate the real-time and critical functions from the rest. The non-critical stuff can be written using standard tools like the STL. That's okay as long as the OS doesn't get in the way of the critical parts. And if I can't guarantee that there are no such interactions, then I don't use those tools at all.

sbass
+1  A: 

I was on an embedded project that used C++ and STL in a very constrained system (memory in a fraction of a megabyte, ARMv4 at low speed). For the most part, STL was great, but there were parts that we had to skip (for example, std::map required 2-4k of code per instantiation [which is a big number relative to our ROM size], and we had our own custom replacement for std::bitset [it was maybe ~1k ROM]). But, std::vector and std::list were very helpful, as was using boost::intrusive_ptr for reference counting (shared_ptr was way too big, about 40 bytes RAM per object!).

The one downside to using the STL is that you have no error recovery when exceptions are turned off (which they were for us, as exceptions and RTTI were not cheap on our compiler). For example, if a memory allocation failed somewhere in the code on this line (a std::map object):

my_map[5] = 66;

you wouldn't see it and the code would just silently keep moving forward; chances are the object is now in a broken state, but you wouldn't crash until much later on.
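A defensive sketch of the kind of check one could wrap around such inserts (this is hypothetical, and it assumes an STL build where a failed allocation leaves the map unchanged rather than aborting; actual behavior with exceptions disabled is implementation specific):

```cpp
#include <cassert>
#include <map>

// Hypothetical wrapper: with exceptions off you cannot catch bad_alloc,
// so instead verify after the fact that the entry really exists and
// report failure through a return code the caller must check.
bool checked_insert(std::map<int, int>& m, int key, int value) {
    m[key] = value;
    std::map<int, int>::const_iterator it = m.find(key);
    return it != m.end() && it->second == value;
}
```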

That being said, we had great success with C++ and STL. As another poster said, try it out on your system and measure which parts of STL work. As a side note, there's a great technical report on C++ performance in general that is a good read: http://www.open-std.org/jtc1/sc22/wg21/docs/TR18015.pdf

Jared Grubb
+1  A: 

It depends on the nature of the embedded system.

Such a system may have a few kilobytes of RAM (or less), or it may have many megabytes or even gigabytes. So memory constraints may or may not be an issue.

If the system has real-time constraints, some parts or usages of the STL may not be suited to some parts of your application. Container classes rely heavily on dynamic memory allocation, reallocation and object copying, and this is most often highly non-deterministic, so when they are used in time-critical code, you have no way of guaranteeing that deadlines will be met.

That is not to say that the STL cannot be used, even in real-time applications. But you need to design the code carefully so that you know that no non-deterministic operation will occur during a time-critical process.
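As a sketch of that kind of careful design (function name invented for the example): confine the allocation to a setup phase with reserve(), so pushes inside the time-critical loop never trigger a reallocation:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch: all non-deterministic work (heap allocation) happens once,
// at initialization. After reserve(), push_back up to the reserved
// capacity is guaranteed by the standard not to reallocate, so each
// iteration of the time-critical loop has a bounded cost.
std::vector<int> make_sample_buffer(std::size_t max_samples) {
    std::vector<int> samples;
    samples.reserve(max_samples);   // the only allocation, done up front
    return samples;
}
```

The same idea applies to other containers where capacity can be pre-committed; where it can't (e.g. node-based containers), a custom allocator over a fixed pool is the usual workaround.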

Clifford