+4  A: 

The actual computation is so minimal that accurate measurements are very tricky. It looks to me like try/catch might add a very small fixed amount of extra time to the routine. I would hazard a guess, not knowing anything about how exceptions are implemented in C#, that this is mostly just initialization of the exception paths and perhaps a slight extra load on the JIT.

For any actual use, the time spent on the computation will so overwhelm the time spent fiddling with try/catch that the cost of try/catch can be taken as near zero.
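To illustrate how delicate the measurement is, here is a minimal benchmark sketch (the harness, names, and iteration count are my own, not from the question): each method is warmed up first so JIT compilation isn't counted, and the work is large enough that timer noise doesn't dominate.

// TryCatchBenchmark.cs -- hypothetical harness; compile in release mode
using System;
using System.Diagnostics;

public class TryCatchBenchmark {
    const int Iterations = 100000000;

    static long RunPlain() {
        long sum = 0;
        for (int i = 0; i < Iterations; i++) {
            sum += i;
        }
        return sum;
    }

    static long RunWithTryCatch() {
        long sum = 0;
        for (int i = 0; i < Iterations; i++) {
            try {
                sum += i;
            } catch (System.Exception) {
                throw;
            }
        }
        return sum;
    }

    static void Main() {
        RunPlain();           // warm-up: JIT both methods before timing
        RunWithTryCatch();

        Stopwatch sw = Stopwatch.StartNew();
        RunPlain();
        Console.WriteLine("plain:     {0} ms", sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        RunWithTryCatch();
        Console.WriteLine("try/catch: {0} ms", sw.ElapsedMilliseconds);
    }
}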

Ben Hughes
+5  A: 

The JIT doesn't perform certain optimizations on protected ('try') blocks, so depending on the code you write inside try/catch blocks, this can affect your performance.
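As a sketch of what that can mean in practice (my own example, not from the linked post; whether it matters depends on the JIT): the JIT generally won't hoist work across a protected-region boundary or keep locals that cross it in registers as freely, so where you put the try relative to a hot loop can matter more than the try itself.

// Hypothetical illustration: same result, different protected regions.
static int TryInsideLoop(int[] data) {
    int sum = 0;
    for (int i = 0; i < data.Length; i++) {
        try {
            sum += data[i];   // the protected region sits inside the loop body
        } catch (System.Exception) {
            throw;
        }
    }
    return sum;
}

static int TryOutsideLoop(int[] data) {
    int sum = 0;
    try {
        // one protected region wrapping the whole loop
        for (int i = 0; i < data.Length; i++) {
            sum += data[i];
        }
    } catch (System.Exception) {
        throw;
    }
    return sum;
}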

P.K
Why would that be the case?
usr
You can refer to this link: http://msmvps.com/blogs/peterritchie/archive/2007/06/22/performance-implications-of-try-catch-finally.aspx
P.K
+1  A: 

Note that I only have Mono available:

// a.cs
public class x {
    static void Main() {
        int x = 0;
        x += 5;
        return;
    }
}


// b.cs
public class x {
    static void Main() {
        int x = 0;
        try {
            x += 5;
        } catch (System.Exception) {
            throw;
        }
        return;
    }
}

Disassembling these:

// a.cs
       default void Main ()  cil managed
{
    // Method begins at RVA 0x20f4
    .entrypoint
    // Code size 7 (0x7)
    .maxstack 3
    .locals init (
            int32   V_0)
    IL_0000:  ldc.i4.0
    IL_0001:  stloc.0
    IL_0002:  ldloc.0
    IL_0003:  ldc.i4.5
    IL_0004:  add
    IL_0005:  stloc.0
    IL_0006:  ret
} // end of method x::Main

and

// b.cs
      default void Main ()  cil managed
{
    // Method begins at RVA 0x20f4
    .entrypoint
    // Code size 20 (0x14)
    .maxstack 3
    .locals init (
            int32   V_0)
    IL_0000:  ldc.i4.0
    IL_0001:  stloc.0
    .try { // 0
      IL_0002:  ldloc.0
      IL_0003:  ldc.i4.5
      IL_0004:  add
      IL_0005:  stloc.0
      IL_0006:  leave IL_0013

    } // end .try 0
    catch class [mscorlib]System.Exception { // 0
      IL_000b:  pop
      IL_000c:  rethrow
      IL_000e:  leave IL_0013

    } // end handler 0
    IL_0013:  ret
} // end of method x::Main

The main difference I see is that a.cs goes straight to ret at IL_0006, whereas b.cs has to leave IL_0013 at IL_0006. My best guess, based on my example, is that the leave is a (relatively) expensive jump when compiled to machine code -- that may or may not be the case, especially in your for loop. That is to say, the try-catch has no inherent overhead, but jumping over the catch handler has a cost, like any branch.

Mark Rushakoff
+1 for the effort but the for-loop is really the dominant factor here.
Henk Holterman
A: 

See the discussion on try/catch implementation for how try/catch blocks work, and how some implementations have high overhead and some have zero overhead when no exceptions occur.

Ira Baxter
+2  A: 

A try/catch/finally/fault block itself has essentially no overhead in an optimized release assembly. While there is often additional IL added for catch and finally blocks, when no exception is thrown there is little difference in behavior: rather than a simple ret, there is usually a leave to a later ret.

The true cost of try/catch/finally blocks occurs when handling an exception. In such cases, an exception must be created, stack crawl marks must be placed, and, if the exception is handled and its StackTrace property is accessed, a stack walk is incurred. The heaviest operation is the stack trace itself, which follows the previously set stack crawl marks to build up a StackTrace object that can be used to display where the error happened and the calls it bubbled up through.

If there is no real work in a try/catch block, then the extra cost of a 'leave to ret' vs. just 'ret' will dominate, and there will obviously be a measurable difference. However, in any other situation, where the try clause does some kind of work, the cost of the block itself will be entirely swamped.
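A rough sketch of how you might observe that cost (the harness is mine, not from this answer): compare throwing and catching without ever reading the stack trace against doing the same but touching the StackTrace property, which forces the stack walk described above.

// ExceptionCostSketch.cs -- hypothetical harness
using System;
using System.Diagnostics;

public class ExceptionCostSketch {
    static void Main() {
        const int n = 100000;

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++) {
            try {
                throw new InvalidOperationException("boom");
            } catch (InvalidOperationException) {
                // handled, but the stack trace is never materialized
            }
        }
        Console.WriteLine("throw/catch only:    {0} ms", sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        long chars = 0;
        for (int i = 0; i < n; i++) {
            try {
                throw new InvalidOperationException("boom");
            } catch (InvalidOperationException ex) {
                chars += ex.StackTrace.Length;   // forces the stack walk
            }
        }
        Console.WriteLine("throw/catch + trace: {0} ms ({1})", sw.ElapsedMilliseconds, chars);
    }
}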

jrista
A: 

A difference of just 34 milliseconds is smaller than the margin of error for a test like this.

As you've noticed, when you increase the duration of the test that difference just falls away and the performance of the two sets of code is effectively the same.

When doing this sort of benchmark I try to loop over each section of code for at least 20 seconds, preferably longer, and ideally for several hours.
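A sketch of that approach (the helper and its names are mine): keep re-running the code under test until a minimum wall-clock budget has elapsed, then report the per-iteration cost so that short-lived noise averages out.

// MinDurationBenchmark.cs -- hypothetical helper
using System;
using System.Diagnostics;

public static class MinDurationBenchmark {
    public static void Run(string name, Action action, TimeSpan minimum) {
        action();   // warm-up so JIT time isn't counted

        long iterations = 0;
        Stopwatch sw = Stopwatch.StartNew();
        while (sw.Elapsed < minimum) {
            action();
            iterations++;
        }
        sw.Stop();
        Console.WriteLine("{0}: {1} iterations in {2:F1} s ({3:F1} ns each)",
            name, iterations, sw.Elapsed.TotalSeconds,
            sw.Elapsed.TotalMilliseconds * 1000000.0 / iterations);
    }
}

// Usage, e.g.:
//   MinDurationBenchmark.Run("try/catch loop", () => { /* code under test */ },
//                            TimeSpan.FromSeconds(20));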

LukeH