If you open this function in the debugger, with code compiled in debug mode:
bool foo(string arg)
{
return bar(arg);
}
There are three breakpoints you can set:
- At the opening brace of the function.
- On the "return" line.
- At the closing brace of the function.
Setting a breakpoint on the opening brace means "break when this function gets called". That's why there is a no-op instruction at the beginning of the method: when the breakpoint is set on the opening brace, the debugger actually sets it on the no-op.
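If you want to see that no-op for yourself, one rough way (a sketch, assuming the method lives on a class named C in the same program, and using a stand-in body for bar) is to dump the method's IL bytes with reflection and check whether the first opcode is nop (0x00):

using System;
using System.Linq;
using System.Reflection;
using System.Reflection.Emit;

class C
{
    bool bar(string arg) => !string.IsNullOrEmpty(arg); // stand-in body

    bool foo(string arg)
    {
        return bar(arg);
    }

    static void Main()
    {
        MethodInfo foo = typeof(C).GetMethod(
            "foo", BindingFlags.Instance | BindingFlags.NonPublic);

        byte[] il = foo.GetMethodBody().GetILAsByteArray();

        // In a debug build the first byte is typically 0x00 (nop), the
        // instruction the debugger maps to the opening brace; a release
        // build usually starts straight at the first real instruction.
        Console.WriteLine("First opcode: 0x{0:X2} (nop is 0x{1:X2})", il[0], OpCodes.Nop.Value);
        Console.WriteLine("IL bytes: " + string.Join(" ", il.Select(b => b.ToString("X2"))));
    }
}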
Setting a breakpoint on the closing brace means "break when this function exits". In order for that to happen, the function needs to have a single return instruction in its IL, where the breakpoint can be set. The compiler enables that by using a temporary variable to store the return value, and converting
return retVal;
into
$retTmp = retVal;
goto exit;
and then injecting the following code at the bottom of the method:
exit:
return $retTmp;
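The payoff of that rewrite is easier to see in a method with more than one return statement. Here is a rough sketch of what the rewrite amounts to, written back in C# for readability (the real transformation happens in the IL, and the temporary is an unnamed local rather than something you could name):

using System;

class Lowering
{
    // Source: a method with two exits.
    static bool Validate(string arg)
    {
        if (arg == null)
            return false;
        return arg.Length > 0;
    }

    // Roughly what the debug-mode rewrite amounts to: every return stores
    // into one temporary and jumps to one exit label, so a breakpoint on
    // the closing brace has a single instruction to sit on.
    static bool ValidateLowered(string arg)
    {
        bool retTemp;
        if (arg == null)
        {
            retTemp = false;
            goto exit;
        }
        retTemp = arg.Length > 0;
        goto exit;
    exit:
        return retTemp;
    }

    static void Main()
    {
        Console.WriteLine(Validate("x") == ValidateLowered("x"));   // True
        Console.WriteLine(Validate(null) == ValidateLowered(null)); // True
    }
}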
Also, when in debug mode, compilers are stupid about the code they generate. They basically do something like:
GenerateProlog();
foreach (var statement in statements)
{
Generate(statement);
}
GenerateEpilog();
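To make that concrete, here is a deliberately dumb, purely hypothetical emitter (nothing like Roslyn's real internals) that lowers each statement on its own, over pseudo-IL strings. Because it never looks back at what it just produced, it generates exactly the kind of back-to-back store/branch/load sequence shown next:

using System;
using System.Collections.Generic;

class NaiveEmitter
{
    static void Main()
    {
        foreach (var line in EmitMethodBody(new[] { "return bar(arg);" }))
            Console.WriteLine(line);
    }

    static List<string> EmitMethodBody(IEnumerable<string> statements)
    {
        var il = new List<string> { "nop              // prolog: breakpoint target for '{'" };

        foreach (var statement in statements)
            EmitStatement(il, statement);          // one statement at a time, no context

        il.Add("exit:");
        il.Add("ldloc retTemp");                   // epilog: the single exit point
        il.Add("ret              // breakpoint target for '}'");
        return il;
    }

    static void EmitStatement(List<string> il, string statement)
    {
        // This toy only understands "return <expr>;".
        string expr = statement.Substring("return ".Length).TrimEnd(';');
        il.Add("call " + expr);
        il.Add("stloc retTemp");
        il.Add("br exit");
    }
}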
In your case, you are seeing:
return bar(arg);
being translated into:
; //this is a no-op
bool retTemp = false;
retTemp = bar(arg);
goto exit;
exit:
return retTemp;
If the compiler were doing a "sliding window optimization" it might be able to look at that code and realize there was some redundancy. However, compilers generally don't do that in debug mode. Compiler optimizations can do things like eliminate variables and reorder instructions, which makes debugging difficult. Since the purpose of a debug build is to enable debugging, it would not be good to turn on optimizations.
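For what it's worth, that kind of rewrite is mechanical: a peephole ("sliding window") pass slides a small window over the instruction stream and replaces known wasteful patterns with shorter ones. Here is a toy sketch over pseudo-IL strings (a hypothetical illustration, not how the C# compiler or the JIT is actually structured):

using System;
using System.Collections.Generic;

class PeepholeDemo
{
    // Slide a fixed-size window over the instruction stream and collapse
    // the "store, jump to exit, reload, return" pattern into a bare return.
    static List<string> Optimize(List<string> il)
    {
        var result = new List<string>(il);
        for (int i = 0; i + 4 < result.Count; i++)
        {
            if (result[i]     == "stloc retTemp" &&
                result[i + 1] == "br exit"       &&
                result[i + 2] == "exit:"         &&
                result[i + 3] == "ldloc retTemp" &&
                result[i + 4] == "ret")
            {
                // The value is already on the stack; just return it.
                result.RemoveRange(i, 5);
                result.Insert(i, "ret");
            }
        }
        return result;
    }

    static void Main()
    {
        var debugBody = new List<string>
        {
            "nop",            // breakpoint target for '{'
            "call bar(arg)",
            "stloc retTemp",
            "br exit",
            "exit:",
            "ldloc retTemp",
            "ret",            // breakpoint target for '}'
        };

        // Prints: nop / call bar(arg) / ret
        Console.WriteLine(string.Join(Environment.NewLine, Optimize(debugBody)));
    }
}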
In a release build, the code will not look like that. That's because the compiler does not introduce the special code to enable breakpoints on the opening and closing braces, which just leaves the following to be compiled:
return bar(arg);
That ends up looking pretty simple.
One thing to note, however, is that I don't think the C# compiler does much sliding window optimization, even in retail builds. That's because most of those optimizations depend on the underlying processor architecture, and so are done by the JIT compiler. Doing the optimizations, even the ones that are processor agnostic, in the C# compiler can impede the JIT's ability to optimize the code (the JIT is looking for patterns generated by non-optimized code generation, and if it sees heavily optimized IL it can get confused). So managed code compilers usually don't do them. The C# compiler does do some "expensive" things (that the JIT doesn't want to do at runtime), like dead code detection and live variable analysis, but those don't address the problems solved by sliding window optimization.