The following MSIL code loads a single argument (a string), calls a method, which returns bool, and then returns that bool value. What I don't understand is why it calls stloc.0 to store the method's return value in a local variable, then performs an explicit unconditional control transfer to the very next labeled line (seems unnecessary), only to move the value right back onto the evaluation stack before returning it.

.maxstack 1
.locals init ([0] bool CS$1$0000)
L_0000: nop
L_0001: ldarg.0
L_0002: call bool FuncNameNotImportant::MethodNameNotImportant(string)
L_0007: stloc.0 
L_0008: br.s L_000a
L_000a: ldloc.0 
L_000b: ret

My best guess at why it does this is to perform some kind of type check to ensure the value on the evaluation stack is actually a boolean value before returning it. But I am clueless about the explicit jump to the very next line; I mean, wouldn't it go there anyway? The C# source code for the method is just one line, which returns the result of the method.

+2  A: 

Are you compiling in debug or release mode? In release mode I get:

.method private hidebysig static bool Test1(string arg) cil managed
{
    .maxstack 8
    L_0000: ldarg.0 
    L_0001: call bool FuncNameNotImportant::MethodNameNotImportant(string)
    L_0006: ret 
}

The branching you're seeing is probably for debugger support.

Adam
sounds reasonable to me
Simpzon
+2  A: 

If you open this function in the debugger, with code compiled in debug mode:

bool foo(string arg)
{
    return bar(arg);
}

There are three breakpoints you can set:

  1. At the opening brace of the function.
  2. On the "return" line.
  3. At the closing brace of the function.

Setting a breakpoint on the opening brace means "break when this function gets called". That's why there is a no-op instruction at the beginning of the method: when the breakpoint is set on the opening brace, the debugger actually sets it on the nop.

Setting a breakpoint on the closing brace means "break when this function exits". For this to work, the function needs a single return instruction in its IL where the breakpoint can be set. The compiler enables that by using a temporary variable to store the return value, converting

return retVal;

into

$retTmp = retVal;
goto exit;

and then injecting the following code at the bottom of the method:

exit:
return $retTmp;
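That rewrite can be sketched as a small source-to-source pass. This is a hypothetical Python illustration of the idea, not the actual compiler's implementation:

```python
def rewrite_returns(statements):
    """Rewrite every 'return <expr>;' into a store to a shared temp
    plus a jump to a single exit label, so the debugger can set one
    breakpoint that fires on every exit path."""
    out = []
    for stmt in statements:
        if stmt.startswith("return "):
            expr = stmt[len("return "):].rstrip(";")
            out.append(f"$retTmp = {expr};")
            out.append("goto exit;")
        else:
            out.append(stmt)
    # Inject the single exit point at the bottom of the method.
    out.append("exit:")
    out.append("return $retTmp;")
    return out
```

For the one-line method in the question, `rewrite_returns(["return bar(arg);"])` produces the store, the jump, and the shared exit label, even though the jump target is the very next statement.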

Also, in debug mode, compilers are naive about the code they generate. They basically do something like:

GenerateProlog();
foreach (var statement in statements)
{
    Generate(statement);
}
GenerateEpilog();

In your case, you are seeing:

return foo(arg);

being translated into:

; //this is a no-op
bool retTemp = false;
retTemp = foo(arg);
goto exit;
exit:
return retTemp;
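Put together, the unoptimized pipeline behaves roughly like the sketch below. This is hypothetical Python with made-up opcode strings, shown only to make it clear where each "redundant" instruction comes from:

```python
def generate_debug_il(call_target):
    """Naive debug-mode codegen for a method whose body is a single
    'return <call>(arg);' statement."""
    il = []
    # Prolog: the nop gives the debugger somewhere to break when a
    # breakpoint is set on the opening brace.
    il.append("nop")
    # The statement itself, compiled with no cleverness:
    il.append("ldarg.0")
    il.append(f"call {call_target}")
    il.append("stloc.0")     # spill the return value to the temp
    il.append("br.s exit")   # jump to the single exit point
    # Epilog: shared exit label for the closing-brace breakpoint.
    il.append("exit: ldloc.0")
    il.append("ret")
    return il
```

The output mirrors the IL in the question: the branch targets the instruction immediately after it, because the epilog is emitted right after the only statement.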

If the compiler were doing a "sliding window optimization" it might be able to look at that code and realize there was some redundancy. However, compilers generally don't do that in debug mode. Compiler optimizations can do things like eliminate variables and reorder instructions, which makes debugging difficult. Since the purpose of a debug build is to enable debugging, it would not be good to turn on optimizations.
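A sliding-window (peephole) pass over that IL could be sketched like this. Again, this is hypothetical Python, not anything the C# compiler actually runs; it assumes the jump target's label is not referenced from anywhere else:

```python
def peephole(il):
    """Drop the redundant 'stloc.0 / br.s exit / exit: ldloc.0'
    window: storing a value and immediately reloading it at the jump
    target leaves the evaluation stack unchanged. Also drops debug
    padding nops."""
    out = []
    i = 0
    while i < len(il):
        if il[i:i + 3] == ["stloc.0", "br.s exit", "exit: ldloc.0"]:
            i += 3   # skip the whole redundant sequence
            continue
        if il[i] == "nop":
            i += 1   # debug-only padding
            continue
        out.append(il[i])
        i += 1
    return out
```

Running it over the debug-mode sequence `["nop", "ldarg.0", "call ...", "stloc.0", "br.s exit", "exit: ldloc.0", "ret"]` leaves just the load, call, and return, matching the release-mode listing in the other answer.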

In a release build, the code will not look like that. That's because the compiler does not introduce the special code to enable breakpoints on the opening and closing braces, which just leaves the following to be compiled:

return bar(arg);

That ends up looking pretty simple.

One thing to note, however, is that I don't think the C# compiler does many sliding window optimizations, even in retail builds. That's because most of those optimizations depend on the underlying processor architecture, and so are done by the JIT compiler. Doing the optimizations, even the processor-agnostic ones, in the C# compiler can impede the JIT's ability to optimize the code (the JIT looks for patterns generated by non-optimized code generation, and if it sees heavily optimized IL it can get confused). So managed code compilers usually don't do them. The C# compiler does do some "expensive things" that the JIT doesn't want to do at runtime, like dead code detection and live variable analysis, but those don't address the problems solved by sliding window optimization.

Scott Wisniewski