After reading a question here about what our computers can do in one second, I ran a little test I'd had in mind for a while, and I'm very surprised by the results. Look:

A simple program that catches a null reference exception takes almost one second to do 1900 iterations:

// test is a reference to an object with a field x, declared earlier
for (long c = 0; c < 1900; c++)
{
    try
    {
        test = null;
        test.x = 1;   // always throws a NullReferenceException
    }
    catch (Exception ex)
    {
    }
}

Alternatively, checking whether test == null before doing the assignment, the same program can do approx 200000000 iterations in one second:

for (long c = 0; c < 200000000; c++)
{
    test = null;
    if (test != null)   // the check always fails, so the assignment is skipped
    {
        test.x = 1;
    }
}

Does anyone have a detailed explanation of why there is this HUGE difference?

EDIT: Running the test in Release mode, outside Visual Studio, I'm getting 35000-40000 iterations vs 400000000 iterations (always approximate).

Note I'm running this on a crappy Pentium IV at 3.06 GHz.

+11  A: 

There's no way that should take a second for 1900 iterations unless you're running in the debugger. Running performance tests under the debugger is a bad idea.

EDIT: Note that this isn't a case of changing to the release build - it's a case of running without the debugger; i.e. hitting Ctrl-F5 instead of F5.
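
For what it's worth, here's a minimal sketch of how you might time this without the debugger attached (the Node type, the guard, and the counts are made up for illustration, not taken from the question): build in Release, start with Ctrl-F5, and bail out if a debugger is attached.

using System;
using System.Diagnostics;

class Benchmark
{
    class Node { public int x; }   // hypothetical stand-in for the poster's 'test' type

    static void Main()
    {
        if (Debugger.IsAttached)
        {
            Console.WriteLine("Run without the debugger (Ctrl-F5) for meaningful numbers.");
            return;
        }

        Node test = null;
        long iterations = 0;
        var sw = Stopwatch.StartNew();

        // Run the exception-throwing body for roughly one second and count iterations.
        while (sw.ElapsedMilliseconds < 1000)
        {
            try
            {
                test.x = 1;   // always throws NullReferenceException
            }
            catch (NullReferenceException)
            {
            }
            iterations++;
        }

        Console.WriteLine("Iterations in ~1s: " + iterations);
    }
}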

Having said that, provoking exceptions when you can avoid them very easily is also a bad idea.

My take on the performance of exceptions: if you're using them appropriately, they shouldn't cause significant performance issues unless you're in some catastrophic situation anyway (e.g. you're trying to make hundreds of thousands of web service calls and the network is down).

Exceptions are expensive under debuggers - certainly in Visual Studio, anyway - due to working out whether or not to break into the debugger etc., and probably doing stack analysis that would otherwise be unnecessary. They're still somewhat expensive anyway, but you shouldn't be throwing enough of them to notice. There's still stack unwinding to do, relevant catch handlers to find, etc. - but this should only be happening when something's wrong in the first place.

EDIT: Sure, throwing an exception is still going to give you fewer iterations per second (although 35000 is still a very low number - I'd expect over 100K) because you're doing almost nothing in the non-exception case. Let's look at the two:

Non-exception version of the loop body:

  • Assign null to variable
  • Check whether variable is null; it is, so go back to the top of the loop

(As mentioned in the comments, it's quite possible that the JIT will optimise this away anyway...)

Exception version:

  • Assign null to variable
  • Dereference variable
    • Implicit check for nullity
    • Create an exception object
    • Check for any filtered exception handlers to call
    • Look up the stack for the catch block to jump to
    • Check for any finally blocks
    • Branch appropriately

Is it any wonder that you're seeing less performance?
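
To make those implicit steps concrete, here's a rough sketch (an illustration, not the literal code the JIT generates) of what the dereference amounts to when test is null:

class Demo
{
    class Node { public int x; }   // hypothetical stand-in for the type of 'test'

    static void ExceptionVersionBody(Node test)
    {
        test = null;

        // "test.x = 1;" on a null reference behaves roughly as if it were written:
        if (test == null)
        {
            // allocate the exception object, then search up the stack for a
            // matching catch block (running filters and finally blocks on the way)
            throw new NullReferenceException();
        }
        test.x = 1;
    }
}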

Now compare that with the more common situation where you do a whole bunch of work, possibly IO, object creation etc - and maybe an exception is thrown. Then the difference becomes a lot less significant.

Jon Skeet
The tag says it's C#.
devoured elysium
@devoured elysium: It does now; it didn't when I answered.
Jon Skeet
True! I was running it under VS. Now it's at 35000 iterations, but the difference is still huge... the other test goes up to 400000000 iterations under the same conditions...
Drevak
the other test looks like it might very easily be optimised to a no-op. you also have to bear in mind that exceptions are designed to be exceptional, i.e. you really should also be comparing what happens when you don't throw
jk
+2  A: 

Check out Chris Brumme's blog, with special attention to the Performance and Trends section, for an explanation of why exceptions are slow. They are called 'exceptions' for a reason: they should not happen very often.

Gonzalo
+2  A: 

You might also find this popular question helpful: How slow are .NET exceptions?

DOK
+1  A: 

This looks like an optimization that the compiler performs; I believe it could be "dead code elimination". Also, depending on the compiler you are using, the latter program is effectively doing what assembler folk call a "no-op".
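
If you want the null-check loop to survive that kind of optimisation, one option (a sketch with made-up names and counts) is to feed it a value the JIT can't treat as a constant null and to consume a result afterwards:

using System;
using System.Diagnostics;

class DeadCodeDemo
{
    class Node { public int x; }

    static void Main()
    {
        Node a = new Node();
        Node test;
        long hits = 0;   // consumed below, so the loop has an observable effect

        var sw = Stopwatch.StartNew();
        for (long c = 0; c < 200000000; c++)
        {
            test = ((c & 1) == 0) ? null : a;   // not a compile-time constant null
            if (test != null)
            {
                test.x = 1;
                hits++;
            }
        }
        sw.Stop();

        Console.WriteLine(hits + " assignments in " + sw.ElapsedMilliseconds + " ms");
    }
}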

Raymond Tay
If that's a no-op, then 400 million iterations per second is rather slow. Could be; I guess it depends on how slow long increments are on a 32-bit machine.
Eamon Nerbonne
+1  A: 

In my tests the "exceptional" code is not that slow - much slower, but not that much. The difference lies in creating the Exception (or, to be specific, NullReferenceException) object. The slowest part of it is retrieving the string for the exception message - there's an internal call to GetResourceString - and getting the stack trace.
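
A rough way to separate those costs (a sketch; the iteration count is arbitrary and the breakdown will vary between runtimes) is to time constructing the exception object on its own against actually throwing and catching it:

using System;
using System.Diagnostics;

class ExceptionCostDemo
{
    const int N = 100000;   // illustrative iteration count

    static void Main()
    {
        // Cost of constructing the exception object alone
        // (constructing it is typically where the default message string is looked up).
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
        {
            var ex = new NullReferenceException();
        }
        Console.WriteLine("construct only: " + sw.ElapsedMilliseconds + " ms");

        // Cost of throwing and catching (adds stack trace capture and unwinding).
        sw.Restart();
        for (int i = 0; i < N; i++)
        {
            try { throw new NullReferenceException(); }
            catch (NullReferenceException) { }
        }
        Console.WriteLine("throw + catch:  " + sw.ElapsedMilliseconds + " ms");
    }
}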

mYsZa
See my comment on Jon Skeet's answer. What are your numbers?
Drevak
+2  A: 

There is another factor here. If you have the .pdb file in place in the executing directory, then when the exception is thrown, the .NET runtime will read the .pdb file to get the code line number to include in the exception stack trace. This takes up quite a bit of time. Try your first method (the one with an exception) with and without the .pdb file in the executing directory.

I had done a simple timing test with and without the .pdb in place as an answer to another question, here.
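
To see the effect quickly, here's a sketch (counts and names are made up): run it once with the .pdb next to the .exe and once after deleting it, and compare both the timing and whether file/line information shows up in the trace.

using System;
using System.Diagnostics;

class PdbDemo
{
    static void Main()
    {
        object test = null;
        string trace = null;
        var sw = Stopwatch.StartNew();

        for (int i = 0; i < 10000; i++)
        {
            try
            {
                test.ToString();            // throws NullReferenceException
            }
            catch (NullReferenceException ex)
            {
                trace = ex.StackTrace;      // includes file/line info when a .pdb is available
            }
        }
        sw.Stop();

        Console.WriteLine(trace);
        Console.WriteLine("10000 throws took " + sw.ElapsedMilliseconds + " ms");
    }
}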

rally25rs
A: 

This is an awful micro benchmark.

The latter 'optimized' loop has, as a compile-time invariant, that test is always null, so there is no need to even bother compiling in the attempted assignment. You are in effect comparing an empty loop against a loop that throws an exception every time.

A really good JIT might even be able to remove the loop entirely, noting that the loop has no body, and thus no side effects beyond incrementing the counter, and that the counter itself is unused (this is unlikely, since such an optimization would have little utility in the real world).

Exceptions are reasonably expensive to throw (in relation to conventional branching control flow)[1] due mainly to 3 things:

  1. All exceptions are reference types and thus (for now) are heap allocated and subsequently garbage collected.
  2. The stack levels populated into the exception (this is proportional to the distance the stack unwinds - something your example completely fails to measure).
  3. Going into the exception-handling code skips all the nice things, like branch prediction, that let today's deeply pipelined processors keep themselves doing something useful.

Throwing and catching exceptions within a tight loop is almost certainly a massively flawed design anyway, but if you seek to measure this impact you should write a loop that actually does that.
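
For example, something along these lines (the parsing workload and the failure rate are made up purely for illustration) does real work on every iteration and only hits the exceptional path occasionally, which is much closer to how exceptions are actually used:

using System;
using System.Diagnostics;

class RealisticBenchmark
{
    static void Main()
    {
        string[] inputs = new string[1000];
        var rng = new Random(42);
        for (int i = 0; i < inputs.Length; i++)
        {
            // roughly 1% of entries are bad; the rest parse normally
            inputs[i] = (rng.Next(100) == 0) ? "not a number" : i.ToString();
        }

        // Exception-based version: let int.Parse throw on the bad entries.
        long sum1 = 0;
        var sw = Stopwatch.StartNew();
        for (int pass = 0; pass < 1000; pass++)
        {
            foreach (string s in inputs)
            {
                try { sum1 += int.Parse(s); }
                catch (FormatException) { }
            }
        }
        Console.WriteLine("Parse + catch:     " + sw.ElapsedMilliseconds + " ms");

        // Check-based version: use TryParse and branch instead.
        long sum2 = 0;
        sw.Restart();
        for (int pass = 0; pass < 1000; pass++)
        {
            foreach (string s in inputs)
            {
                int value;
                if (int.TryParse(s, out value)) sum2 += value;
            }
        }
        Console.WriteLine("TryParse + branch: " + sw.ElapsedMilliseconds + " ms");

        Console.WriteLine(sum1 == sum2);   // keep both sums observable
    }
}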


  1. expensive here being a very relative term. You can still do tens of thousands of them per second on modest hardware.
ShuggyCoUk