Will a piece of code that potentially throws an exception have degraded performance compared to similar code that doesn't, when the exception isn't thrown?
Try is cheap, catch is cheap, throw is expensive. There's obviously a little extra processing when executing code that is wrapped inside a try block.
Use exceptions for exceptional situations; then the overhead won't matter.
This depends on your compiler. Some compiler/runtime combinations do extra work on entry to a block with catch handlers. Others build a static data structure so that all the work happens at throw. The entry cost will be lower than the throw cost in all cases, but you want to be cautious about catch blocks in inner loops. Measure the time cost with the compiler you care about.
It depends on the compiler, but the answer is almost certainly "yes". Specifically, if a scope contains an object with a non-trivial destructor, then that object will need to be registered with the runtime in order to call the destructor on an exception. For example:
struct Thing
{
    ~Thing();
    void Process();
};

for (int i = 0; i < 1000000; ++i)
{
    Thing thing;
    thing.Process();
}
In addition to constructing and processing a million Things, this will also generate a million pairs of function calls to register and unregister each Thing in case the call to Process throws.
On top of this, there is a small overhead when entering or leaving try blocks, as the corresponding catch block is added to or removed from the stack of exception handlers.
Yes, code with exception handling is slower and also larger compared to code without exception handling, since the compiler has to do bookkeeping for the objects to be destructed during the stack unwinding process when an exception is raised.
It has been demonstrated that it is possible to implement the C++ exception handling mechanism with zero overhead in "normal" (non-exception) code. In practice, however, compilers usually stick to simpler implementations, which tend to produce less efficient "normal" code. The compiler has to account for the possibility of an exception flying through the function hierarchy and therefore generates additional housekeeping operations to enable proper stack unwinding if an exception is thrown. This extra housekeeping code affects the overall efficiency of the code regardless of whether an exception is ever thrown.
This is all a QoI (quality-of-implementation) issue; it is compiler specific, so check your compiler's documentation for details. Some compilers actually offer an option to enable/disable C++ exceptions, making it possible to generate the most efficient code when exceptions are not used at all.
Since the compiler needs to generate code that will unwind the stack when an exception is thrown, there is some added code behind the scenes. But it's debatable whether it's considerably more than:
the code that is generated to automatically call destructors when variables go out of scope,
and the code you would have to write to check the exit status of every call and handle errors.
What is expensive is catching errors: try ... catch statements and what happens when an exception is thrown and caught:
keeping information about each place where try ... catch is added (also added implicitly, e.g. around destructors or at exception specifications),
lots of stack to unwind (and destructors to call) for something that looks like a simple jump,
matching the thrown exception against the catch() clauses,
copying exceptions.
It depends. Table-based implementations (which I believe modern g++ uses, and which is the strategy used for x64 binaries on Windows) incur zero processing overhead for non-thrown exceptions, at the expense of marginally more memory usage. Function-based exception handling (which x86 Windows uses) incurs a small performance hit even when no exception is thrown.