views:

190

answers:

7

I am writing an application that needs to run on incredibly slow processors. The application creates and destroys memory in creative ways throughout its run, and it works just fine. I am wondering what compiler optimizations occur so I can try to build to them. One trick off hand: the CLR handles arrays much faster than lists, so if you need to handle a ton of elements in a List, you may be better off calling ToArray() and working with the array rather than calling ElementAt() again and again. I am wondering if there is any sort of comprehensive list for this kind of thing, or maybe the SO community can create one :-)
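A minimal sketch of the pattern described above (names and numbers here are illustrative only): snapshotting a sequence with ToArray() and indexing into it, versus calling ElementAt() in a loop, which may re-enumerate the sequence on every call.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static long SumViaElementAt(IEnumerable<int> items, int count)
    {
        long sum = 0;
        for (int i = 0; i < count; i++)
            sum += items.ElementAt(i);   // may walk the sequence from the start each time
        return sum;
    }

    static long SumViaArray(IEnumerable<int> items)
    {
        int[] snapshot = items.ToArray(); // one allocation, then O(1) indexing
        long sum = 0;
        for (int i = 0; i < snapshot.Length; i++)
            sum += snapshot[i];
        return sum;
    }

    static void Main()
    {
        var data = Enumerable.Range(1, 1000);
        Console.WriteLine(SumViaElementAt(data, 1000)); // 500500
        Console.WriteLine(SumViaArray(data));           // 500500
    }
}
```

Whether the array version actually wins depends on the source sequence (ElementAt has a fast path for IList&lt;T&gt;), so measure for your case.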

+2  A: 

If there is a lot of string manipulation in your application, use StringBuilder instead of string; the performance of the application will increase considerably.

In particular, replace repeated string concatenations (the + operator) with StringBuilder.Append.

This one is specific to WinForms .NET: turn off DataGridView.AutoSizeColumnsMode and AutoSizeRowsMode.
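A small sketch of the two approaches (as the comments below note, profile before committing to either — for a handful of concatenations plain strings are fine):

```csharp
using System;
using System.Text;

class Program
{
    static void Main()
    {
        // Repeated '+' concatenation allocates a brand-new string on every pass.
        string s = "";
        for (int i = 0; i < 5; i++)
            s += i;

        // StringBuilder mutates one internal buffer instead.
        var sb = new StringBuilder();
        for (int i = 0; i < 5; i++)
            sb.Append(i);

        Console.WriteLine(s);             // 01234
        Console.WriteLine(sb.ToString()); // 01234
    }
}
```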

HotTester
Always profile before and after! Switching to `StringBuilder` will not always speed things up. The compiler is quite good at working with strings.
Mike Two
Absolutely right
ileon
I wouldn't say 'always'. See this interesting article for instance: http://www.heikniemi.net/hardcoded/2004/08/net-string-vs-stringbuilder-concatenation-performance/
Steven
+5  A: 

You possibly mean HIGH speeds, not low speeds.

Wrong language. For total optimization you need something lower level. Mostly not needed, though.

Note, btw., that your characterization of arrays and lists is off: a list only has linked-list performance characteristics if you actually pick a linked list (LinkedList<T>); List<T> is backed by an array. Either way, that is a data-structure difference, not a CLR / runtime thing.

Besides the StringBuilder advice, my main advice is: USE A PROFILER. Most people try to be smart about speed but never profile, so they spend a lot of time on useless optimizations later on - they get faster, but bad bang for the buck. Find out first where the application actually spends its time.
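Even without a full profiler, you can at least measure before and after a change. A minimal sketch with Stopwatch (the `TimeIt` helper is hypothetical, not from any library):

```csharp
using System;
using System.Diagnostics;

class Program
{
    // Hypothetical helper: run an action and report elapsed milliseconds.
    static long TimeIt(Action work)
    {
        var sw = Stopwatch.StartNew();
        work();
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }

    static void Main()
    {
        long ms = TimeIt(() =>
        {
            long sum = 0;
            for (int i = 0; i < 1000000; i++) sum += i;
        });
        Console.WriteLine(ms >= 0); // True
    }
}
```

A real profiler tells you *where* the time goes; Stopwatch only tells you *whether* a specific change helped.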

TomTom
A: 

Let the GC do its work and do not interfere by calling GC.Collect directly. Doing so hinders the built-in GC algorithm from running effectively, which in turn makes things slower, and the calls to GC.Collect themselves add unnecessary overhead.

For GDI+ specifically, call Invalidate to invalidate the client area (or a specific rectangle) of a control or form, instead of calling Refresh, which calls Invalidate and then Update to repaint the control immediately.

Fadrian Sudaman
+7  A: 

Build your system, run it, then attach a profiler to see what's slow. Then use SO, Google, and common sense to speed those areas up.

The most important thing is to not waste time speeding up things that actually don't matter, so profiling is very important.

Rob Fonseca-Ensor
+1  A: 

You mention arrays being faster than Lists. The CLR actually does bounds checking for you when you access an array, so you may be able to gain a bit more performance by using the unsafe keyword and accessing the array via pointer arithmetic. Only do this if you actually need it - and only when you can measure a performance improvement for your specific scenario.
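A sketch of that technique. Note it must be compiled with the /unsafe switch (AllowUnsafeBlocks), and the safe indexed loop is often just as fast because the JIT can eliminate bounds checks in simple `for` loops over `array.Length`:

```csharp
using System;

class Program
{
    static unsafe long SumUnsafe(int[] data)
    {
        long sum = 0;
        fixed (int* p = data)            // pin the array so the GC cannot move it
        {
            int* cur = p;
            int* end = p + data.Length;
            while (cur < end)
                sum += *cur++;           // raw pointer walk: no bounds checks
        }
        return sum;
    }

    static void Main()
    {
        int[] data = { 1, 2, 3, 4, 5 };
        Console.WriteLine(SumUnsafe(data)); // 15
    }
}
```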

driis
+1  A: 

It is usually pretty rare that you need to go down to this level, but I'm doing something pretty similar at the moment; some thoughts:

  • do you have any buffers in regular use? Have you considered pooling them? i.e. so that instead of creating a new one you ask a pool to get (or create) one?
  • have you removed any reflection? (replacing with typed delegates, DynamicMethod, etc) - or taking it hardcore, Reflection.Emit?
  • have you considered unsafe? (used sparingly, in places where you have a measured need to do so)
  • (again, pretty low level) have you looked for anything silly in the IL? I recently found that a lot of cycles (in some specific code) were being spent shuffling things around on the stack to pass parameters appropriately. By changing the code (and, admittedly, doing some custom IL) I eliminated all the unnecessary stloc/ldloc shuffling and removed almost all of the locals (reducing the stack size in the process)
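The first bullet (buffer pooling) can be sketched as follows; the `BufferPool` type and its `Rent`/`Return` names are illustrative, not from any specific library:

```csharp
using System;
using System.Collections.Generic;

class BufferPool
{
    private readonly Stack<byte[]> pool = new Stack<byte[]>();
    private readonly int bufferSize;

    public BufferPool(int bufferSize) { this.bufferSize = bufferSize; }

    // Reuse a returned buffer when one is available; allocate otherwise.
    public byte[] Rent()
    {
        return pool.Count > 0 ? pool.Pop() : new byte[bufferSize];
    }

    public void Return(byte[] buffer)
    {
        if (buffer.Length == bufferSize)   // only keep correctly-sized buffers
            pool.Push(buffer);
    }
}

class Program
{
    static void Main()
    {
        var pool = new BufferPool(4096);
        byte[] a = pool.Rent();
        pool.Return(a);
        byte[] b = pool.Rent();            // the same instance comes back
        Console.WriteLine(ReferenceEquals(a, b)); // True
    }
}
```

This trades a little bookkeeping for fewer allocations, which reduces GC pressure in allocation-heavy loops.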

But honestly, you need to profile here. Focus on things that actually matter, or where you know you have a problem to fix.

Marc Gravell
Are you sure that your custom IL even made a difference? The people who wrote the C# compiler might know that the JIT would handle that specific case.
Jørgen Fogh
@Jørgen - well, I need to use `ILGenerator` *anyway* due to the meta-programming nature of the task. Might as well squeeze them in. And in answer; the C# compiler would be *forced* to use extra locals in this scenario, taking *infinitesimally* more stack space. I'd be very surprised if the JIT unpicked it.
Marc Gravell
+1  A: 

When you ask about low-level optimizations (which nearly everyone does) you are basically guessing that those things will matter. Maybe at some level they will, but nearly everyone underestimates what can be saved by high-level optimization, which cannot be done by guessing.

It's like the proverbial iceberg, where what you can see is a tiny fraction of what's there.

Here's an example. I hesitate to say "use a profiler" because I don't.
Instead, I do this which, in my experience, works much better.

Mike Dunlavey