First off, code readability goes out the window for this question. I'm all for readable code, but speed comes first here.

When your code absolutely, without a doubt, no exceptions has to run as fast as possible on the .NET Framework, what are some optimizations that can be done? I know there are compiler flags for optimization and I've certainly seen improvements thanks to them... but what if that's not good enough? What are some tried and true techniques for paring a process down to the fewest clock cycles necessary?

+1  A: 

Turn off exceptions (i.e. don't use them)

Turn off bounds checking (see the sketch below)

Use a profiler to find bottlenecks
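
A minimal sketch of the bounds-checking point (illustrative only, not part of the original answer; it assumes a hot loop over an int[]): the JIT removes per-element bounds checks when it can prove the index stays within the array's own Length, and unsafe pointer code (compiled with /unsafe) skips the checks entirely.

    // Sketch only. SumChecked relies on the JIT eliding the bounds check
    // because the loop bound is data.Length; SumUnsafe bypasses checks with
    // a raw pointer (requires compiling with /unsafe).
    static class BoundsCheckDemo
    {
        public static long SumChecked(int[] data)
        {
            long sum = 0;
            for (int i = 0; i < data.Length; i++)
                sum += data[i];        // index provably in range, check removed by JIT
            return sum;
        }

        public static unsafe long SumUnsafe(int[] data)
        {
            long sum = 0;
            fixed (int* p = data)      // pin the array so the GC cannot move it
            {
                for (int i = 0; i < data.Length; i++)
                    sum += p[i];       // raw pointer access: no bounds check at all
            }
            return sum;
        }
    }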

edit: Also read this article I wrote: http://www.sumyan.nl/2010/06/08/multidimensional-array-performance/ It describes a very specific, platform-dependent optimization, but one that is little known and very useful.
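
The article's specifics aren't reproduced here; as a hedged guess at the kind of optimization the title suggests (an illustration, not necessarily what the article covers), jagged arrays (int[][]) typically index faster than rectangular arrays (int[,]) on .NET because the JIT emits plain single-dimensional access for them:

    // Sketch only: summing a matrix stored two ways. On most .NET runtimes
    // SumJagged is faster because each inner access is ordinary 1-D indexing,
    // while int[,] access goes through slower multi-dimensional helpers.
    static class ArrayLayoutDemo
    {
        public static long SumRectangular(int[,] m)
        {
            long sum = 0;
            for (int i = 0; i < m.GetLength(0); i++)
                for (int j = 0; j < m.GetLength(1); j++)
                    sum += m[i, j];
            return sum;
        }

        public static long SumJagged(int[][] m)
        {
            long sum = 0;
            for (int i = 0; i < m.Length; i++)
            {
                int[] row = m[i];      // hoist the row; inner loop is a plain 1-D scan
                for (int j = 0; j < row.Length; j++)
                    sum += row[j];
            }
            return sum;
        }
    }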

Henri
Certainly avoid gratuitous try/catch blocks.
uncle brad
try-catch blocks themselves do not slow the code down...
BlueRaja - Danny Pflughoeft
try/catch blocks do not impose any overhead unless an exception is actually thrown. And once an exception is thrown, then the speed of the app suddenly matters a lot less.
Matt Greer
@Matt, I'm not sure how it is in current versions, but my benchmarks showed that try..catch was a lot (roughly 5 times) slower in .NET 2.0
Henri
+2  A: 

Since .NET executables are distributed in IL form and JIT-compiled the first time a block of code executes, the number of optimizations you can perform when compiling source code is limited. For one thing, at the time your source is compiled to IL you don't know what CPU architecture the code will run on: 32-bit or 64-bit x86? ARM? Something else?

To avoid the delay the first time a function is called, when the JIT compiler kicks in, you should look into using the NGEN tool to "precompile" the IL in your managed assemblies into native machine code. The native code is cached on the local hard disk.
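
For illustration (MyApp.exe is just a placeholder assembly name), generating and then listing a native image from an elevated developer command prompt looks roughly like this:

    ngen install MyApp.exe
    ngen display MyApp.exe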

Though NGEN has the potential to do more time-intensive analysis and optimization of the generated native machine code than the JIT can afford to do, the last time I checked, NGEN did not perform any optimizations beyond what the JIT compiler does.

In the .NET 1.0 release, NGEN-generated code could actually suffer a slight performance penalty, because all calls out of the module went through an indirect jump, whereas the JIT compiler would encode the actual destination address at the call site. I believe this NGEN indirection was removed in .NET 2.0 and later.

dthorpe
Mono also offers compilation down to native code; they call it AOT (ahead-of-time) compilation. It destroys the cross-platform nature of your executable, though.
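
For illustration (MyApp.exe is a placeholder), the AOT step is roughly:

    mono --aot MyApp.exe

This writes a native image next to the assembly; the IL assembly still has to ship, since the runtime needs its metadata.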
Justin
+2  A: 

There's no silver bullet; to make your program run faster you're going to need to profile it and find the bottlenecks.

Every app is different. It's only worth NGEN-compiling if you are satisfied that your program is spending a significant amount of time being JIT-compiled.
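
As a minimal sketch of measuring before optimizing (illustrative only; the names are placeholders, and a real profiler gives far better data), System.Diagnostics.Stopwatch can give a quick read on whether a suspected hot path is actually hot:

    // Crude timing harness: run the candidate once to pay the JIT cost,
    // then average many iterations so warm-up doesn't dominate the numbers.
    using System;
    using System.Diagnostics;

    static class TimingDemo
    {
        public static void Measure(Action candidate, int iterations = 1000)
        {
            candidate();                               // warm-up: JIT compiles it here
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
                candidate();
            sw.Stop();
            Console.WriteLine("avg: {0:F4} ms", sw.Elapsed.TotalMilliseconds / iterations);
        }
    }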

Dan
+3  A: 

I typically code for readability and maintainability, then refactor to remove bottlenecks once I find them. If you don't actually hunt down and find bottlenecks, then all of your optimization is premature; you could be senselessly making the code hard to read without providing any real benefit.

Brad Barker