views: 2483
answers: 8

This issue is especially important for embedded development. Exception handling adds some footprint to the generated binary output. On the other hand, without exceptions the errors need to be handled some other way, which requires additional code, which eventually also increases binary size.

I'm interested in your experiences, especially:

  1. What is the average footprint added by your compiler for exception handling (if you have such measurements)?
  2. Is the exception handling really more expensive (many say that), in terms of binary output size, than other error handling strategies?
  3. What error handling strategy would you suggest for embedded development?

Please take my questions only as guidance. Any input is welcome.

+4  A: 

I guess it'd depend on the hardware and toolchain port for that specific platform.

I don't have the figures. However, for most embedded development, I have seen people chucking out two things (for the VxWorks/GCC toolchain):

  • Templates
  • RTTI

Exception handling does make use of both in most cases, so there is a tendency to throw it out as well.

In those cases where we really want to get close to the metal, setjmp/longjmp are used. *Note: this probably isn't the best (or most powerful) solution possible, but it's what we use.*

You can run simple tests on your desktop with two versions of a benchmarking suite with/without exception handling and get the data that you can rely on most.

Another thing about embedded development: templates are avoided like the plague -- they cause too much bloat. Exceptions drag in templates and RTTI, as explained by Johann Gerell in the comments (I assumed this was well understood).

Again, this is just what we do. What is it with all the downvoting?

dirkgently
"Exception handling does make use of both in most cases" -- I am pretty certain I have never used templates with exception handling, and I am not sure that RTTI is necessary either. Can you explain this claim in a bit more detail? Please, educate me :)
Magnus Hoff
RTTI is required for exception handling to work, namely with catch() blocks. Remember, dynamic_cast does not work without RTTI either.
rlbond
@ Magnus: Have you ever thrown or caught a std::exception or derivative thereof? Then you've likely dragged along std::string, which is std::basic_string<char> - a template class.
Johann Gerell
With setjmp you lose stack unwinding. This is pretty much a deal breaker with C++.
Jimmy J
Templates are fixed at compile time AFAIK, so the performance overhead here should be 0. However, I can see they have the potential to chuck a lot of memory at code storage if used extensively by a lot of different classes.
cwap
@Meeh: That's what is called code-bloat.
dirkgently
It's not so simple to test with the exception handling on and off. When you write code without exceptions you usually need to handle the errors some other way (which requires additional code).
oo_olo_oo
Templates code bloat can be limited by the compiler. If it's good it can optimize (merge) generated classes and functions.
oo_olo_oo
@o-l-o: I did not mean that you should run it on your software, which is why I suggested `benchmarking` code. It serves as a proof-of-concept.
dirkgently
@o-l-o: The problem with compilers on RTOS/specialized hardware is that they aren't as good as their desktop OS cousins. This is from direct experience.
dirkgently
@Johann: Haha! True. Thanks :)
Magnus Hoff
+4  A: 

In my opinion exception handling is not something that's generally acceptable for embedded development.

Neither GCC nor Microsoft has "zero-overhead" exception handling. Both compilers insert prologue and epilogue statements into each function to track the scope of execution. This leads to a measurable cost in both performance and memory footprint.

The performance difference is something like 10% in my experience, which for my area of work (realtime graphics) is a huge amount. The memory overhead was far less but still significant - I can't remember the figure off-hand but with GCC/MSVC it's easy to compile your program both ways and measure the difference.

I've seen some people talk about exception handling as an "only if you use it" cost. Based on what I've observed this just isn't true. When you enable exception handling it affects all code, whether a code path can throw exceptions or not (which makes total sense when you consider how a compiler works).

I would also stay away from RTTI for embedded development, although we do use it in debug builds to sanity check downcasting results.

Andrew Grant
GCC defaults to zero-cost compilation which has no time overhead but adds space overhead - you should be clear which you are talking about.
bias
It's not so simple to test with the exception handling on and off. When you write code without exceptions you usually need to handle the errors some other way (which requires additional code).
oo_olo_oo
Anyway, it certainly does not add any code to the prologue/epilogue (at least on modern GCC versions). The handling is completely on the caller's side: the caller just calls the proper throw functions (in the throwing branch).
Johannes Schaub - litb
This is interesting. I found that MSVC++9 when optimising will add the prologue/epilogue code only when absolutely necessary -- that is, if you have a local object with non-trivial destructor occurring in a block that later calls a function that cannot be proven to be nothrow (e.g. a C++ function in another translation unit). Nevertheless, it's a cost that you pay whether or not you do actually throw.
j_random_hacker
+3  A: 

I work in a low-latency environment (sub-300 microseconds for my application in the production "chain"). Exception handling, in my experience, adds 5-25% to execution time depending on how much of it you do!

We don't generally care about binary bloat, but if you get too much bloat you thrash the cache like crazy, so you need to be careful.

Just keep the binary reasonable (depends on your setup).

I do pretty extensive profiling of my systems. Other nasty areas:

  • Logging
  • Persisting (we just don't do this one, or if we do it's in parallel)

windfinder
Isn't the number of exceptions you normally throw "zero"? Exceptions are supposed to be exceptional, not GOTOs.
Jimmy J
I mean just adding the exception handling.
windfinder
Did you compile the same code with and without exception handling enabled and see that difference in performance? What compiler did you use? If you're using GCC I'd suspect that the performance difference is due to the space overhead expanding the binary to not fit in cache or some side effect like that, not the exception handling itself.
Joseph Garvin
+8  A: 

When an exception occurs there will be a time overhead that depends on how you implement your exception handling. But, anecdotally, an event severe enough to warrant an exception will take just as much time to handle by any other method. Why not use the highly supported, language-based method of dealing with such problems?

The GNU C++ compiler uses the zero-cost model by default, i.e. there is no time overhead when exceptions don't occur.

Since information about exception-handling code and the offsets of local objects can be computed once at compile time, such information can be kept in a single place associated with each function, but not in each ARI. You essentially remove exception overhead from each ARI and thus avoid the extra time to push them onto the stack. This approach is called the zero-cost model of exception handling, and the optimized storage mentioned earlier is known as the shadow stack. - Bruce Eckel, Thinking in C++ Volume 2

The size overhead isn't easily quantifiable, but Eckel states an average of 5 to 15 percent. This will depend on the size of your exception-handling code relative to the size of your application code. If your program is small, exceptions will be a large part of the binary. If you are using a zero-cost model, exceptions take more space in exchange for removing the time overhead, so if you care about space and not time, then don't use zero-cost compilation.

My opinion is that most embedded systems have plenty of memory, to the extent that if your system has a C++ compiler you have enough space to include exceptions. The PC/104 computer that my project uses has several GB of secondary memory and 512 MB of main memory, hence no space problem for exceptions - though our microcontrollers are programmed in C. My heuristic is "if there is a mainstream C++ compiler for it, use exceptions, otherwise use C".

bias
Just because there is a C++ compiler that supports exceptions for a platform does not mean using them is a good idea. In FIRST Robotics there is plenty of space for exception handling, but it is disabled, since it's a robot and throwing errors in VxWorks tasks would kill the whole system.
X-Istence
I agree that the time footprint of exception handling in most cases doesn't matter. I'm also a proponent of using exceptions, especially because of the cleaner code that results from them.
oo_olo_oo
@X-Istence Hence my question to o-l-o: "What type of embedded system (e.g. PC/104)? What OS are you running? [...]" Of course, the answer will depend on the setup, since there is so much variety in embedded devices. However, I wanted to present my opinion.
bias
[Continued] And, part of my opinion is that a real-time OS (e.g. VxWorks) is overrated. In my experience, most people don't actually determine that they have hard real-time requirements. Nor do they perform latency testing to prove that a *nix system can't handle their soft requirements.
bias
[Continued] Unless you have a strict government requirement, or a thoroughly justified internal one, it's best to default to the side of simplicity and safety. Then move to more complex designs after empirical justification. Hence, start with *nix and gcc with exceptions. Migrate away as needed.
bias
+8  A: 

Measuring things, part 2. I have now got two programs. The first is in C and is compiled with gcc -O2:

#include <stdio.h>
#include <time.h>

#define BIG 1000000

int f( int n ) {
    int r = 0, i = 0;
    for ( i = 0; i < 1000; i++ ) {
        r += i;
        if ( n == BIG - 1 ) {
            return -1;
        }
    }
    return r;
}

int main() {
    clock_t start = clock();
    int i = 0, z = 0;
    for ( i = 0; i < BIG; i++ ) {
        if ( (z = f(i)) == -1 ) {
            break;
        }
    }
    double t = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf( "%f\n", t );
    printf( "%d\n", z );
    return 0;
}

The second is C++, with exception handling, compiled with g++ -O2:

#include <stdio.h>
#include <time.h>

#define BIG 1000000

int f( int n ) {
    int r = 0, i = 0;
    for ( i = 0; i < 1000; i++ ) {
        r += i;
        if ( n == BIG - 1 ) {
            throw -1;
        }
    }
    return r;
}

int main() {
    clock_t start = clock();
    int i = 0, z = 0;
    for ( i = 0; i < BIG; i++ ) {
        try {
            z += f(i);
        }
        catch( ... ) {
            break;
        }
    }
    double t = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf( "%f\n", t );
    printf( "%d\n", z );
    return 0;
}

I think these answer all the criticisms made of my last post.

Result: execution times give the C version a 0.5% edge over the C++ version with exceptions - not the 10% that others have talked about (but not demonstrated).

I'd be very grateful if others could try compiling and running the code (it should only take a few minutes) to check that I have not made a horrible and obvious mistake anywhere. This is known as "the scientific method"!

anon
More proof that 90% of all statistics are made up on the spot...
Jimmy J
Would you add sample output as well please? I understand the code can be compiled and checked, however having at least a sample idea would be nice!
X-Istence
I think it was not so much execution time as memory footprint that was the issue. If no exceptions are thrown, the execution time should be only slightly slower due to the overhead - as you demonstrated - but the question is the memory footprint.
Anders K.
Yes, the exe size is much larger (58K vs 16K) for the exception version - whether this would be significant in a real application is hard to say. It seems like a lot of overhead though - I could fit quite a bit of functionality into 42K of machine code!
anon
The exe size difference is so significant because this is a very simple example. Most of the overhead is probably the exception-supporting machinery itself. The overhead would probably become less significant if the original exe were 1 or 2 MB.
oo_olo_oo
The overhead is not constant. If you have more functions, you will also get more overhead, I think. For every function, a record must be created that tells the runtime which registers are live/saved, where the return address is, and so on.
Johannes Schaub - litb
I wonder if this comparison is fair. You should at least include stack objects (or auto_ptr'd heap objects) that have to be destroyed whenever an exception is thrown! I don't know about the gcc/msvc internals, but those destructor calls that happen in the exception case probably cost extra CPU time even in the "no-exception" case.
frunsi
+1  A: 

One thing to consider: If you're working in an embedded environment, you want to get the application as small as possible. The Microsoft C Runtime adds quite a bit of overhead to programs. By removing the C runtime as a requirement, I was able to get a simple program to be a 2KB exe file instead of a 70-something kilobyte file, and that's with all the optimizations for size turned on.

C++ exception handling requires compiler support, which is provided by the C runtime. The specifics are shrouded in mystery and not documented at all. By avoiding C++ exceptions I could cut out the entire C runtime library.

You might argue to just dynamically link, but in my case that wasn't practical.

Another concern is that C++ exceptions need limited RTTI (runtime type information) at least on MSVC, which means that the type names of your exceptions are stored in the executable. Space-wise, it's not an issue, but it just 'feels' cleaner to me to not have this information in the file.

Michael
A: 

It's easy to see the impact on binary size, just turn off RTTI and exceptions in your compiler. You'll get complaints about dynamic_cast<>, if you're using it... but we generally avoid using code that depends on dynamic_cast<> in our environments.

We've always found turning off exception handling and RTTI to be a win in terms of binary size. I've seen many different error handling methods in the absence of exception handling. The most popular seems to be passing failure codes up the call stack. In our current project we use setjmp/longjmp, but I'd advise against this in a C++ project, as in many implementations they won't run destructors when exiting a scope. If I'm honest, I think this was a poor choice by the original architects of the code, especially considering that our project is C++.

Dan Olson
A: 

Define 'embedded'. On an 8-bit processor I would certainly not work with exceptions (I would certainly not work with C++ on an 8-bit processor at all). If you're working with a PC104-type board that was powerful enough to be someone's desktop a few years back, then you might get away with it. But I have to ask - why are there exceptions? Usually in embedded applications anything like an exception occurring is unthinkable - why didn't that problem get sorted out in testing?

For instance, is this in a medical device? Sloppy software in medical devices has killed people. It is unacceptable for anything unplanned to occur, period. All failure modes must be accounted for and, as Joel Spolsky said, exceptions are like GOTO statements except you don't know where they're called from. So when you handle your exception, what failed, and what state is your device in? Because of your exception, is your radiation therapy machine stuck at FULL, cooking someone alive (this has happened IRL)? At just what point did the exception happen in your 10,000+ lines of code? Sure, you may be able to cut that down to perhaps 100 lines of code, but do you know the significance of each of those lines causing an exception?

Without more information I would say do NOT plan for exceptions in your embedded system. If you add them then be prepared to plan the failure modes of EVERY LINE OF CODE that could cause an exception. If you're making a medical device then people die if you don't. If you're making a portable DVD player, well, you've made a bad portable DVD player. Which is it?

Stephen Friederichs
It's much closer to a DVD player than a medical device. I'm aware of the risk of incorrectly handling a thrown exception, but my case isn't so strict. Exceptions can be thrown, for example, when user data validation fails. Anyway, I don't agree that exceptions are like GOTO :-)
oo_olo_oo
If you use RAII you can architect your code such that you can be relatively sure your machine is in a sane state. OTOH, if you're working on a device that can kill people, you shouldn't be coding in a type-unsafe language like C or C++ to begin with.
Joseph Garvin