views: 650

answers: 7

I wrote some code in C and was surprised that it took longer to execute than I expected. I want to know which operations are costly and how to get rid of them.

I use assignment statements, (nested) conditionals, loops, function calls, and callbacks.

What are some good references to common C performance pitfalls?

Is there a good profiler I can use?


Thank you all

Thanks for all your input. You are absolutely right: it's the algorithms that slow things down (drastically). A little performance can be gained through coding practices, but I am 100% convinced that only a faulty algorithm can drastically slow things down.

In fact, I was working on red-black trees and inserting nodes in ascending order. It took an enormous amount of time (as bad as a skewed binary search tree). After taking your advice I checked the algorithm and found a mistake in the rebalancing that left the tree skewed. I have corrected it.
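For anyone who runs into the same thing, here is a rough sketch (not my actual code; the names and sizes are just for illustration) of why sorted input hurts so much when the balancing is broken: with no rebalancing, every new key goes to the right of the previous one, so each insert walks the whole "tree".

    #include <stdio.h>
    #include <stdlib.h>

    /* A plain BST node with no colour/balance information. */
    struct node {
        int key;
        struct node *left, *right;
    };

    /* Insert with no rebalancing: on ascending keys every node becomes a
     * right child, the tree degenerates into a list, and n inserts cost
     * O(n^2) in total -- the behaviour I was accidentally getting. */
    static struct node *insert(struct node *root, int key)
    {
        struct node *n = malloc(sizeof *n);
        n->key = key;
        n->left = n->right = NULL;
        if (root == NULL)
            return n;

        struct node *cur = root;
        for (;;) {
            if (key < cur->key) {
                if (cur->left == NULL) { cur->left = n; break; }
                cur = cur->left;
            } else {
                if (cur->right == NULL) { cur->right = n; break; }
                cur = cur->right;
            }
        }
        return root;
    }

    int main(void)
    {
        struct node *root = NULL;
        /* Ascending keys: the worst case for an unbalanced BST. This takes
         * noticeably long; a correctly balanced red-black tree handles the
         * same input in O(n log n). */
        for (int i = 0; i < 50000; i++)
            root = insert(root, i);
        printf("inserted\n");
        return 0;
    }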

Thanks once again for the suggestions.

+18  A: 

Your performance issues probably have more to do with the algorithms you implement than with the operations you use.

Posting the code might be useful. Telling us something about what you're trying to do, and what algorithm you're using would also help. As it is, your question doesn't provide enough information for anyone to give you a helpful answer.

Other people recommended gprof - I second that, if you're interested in profiling your code. I've also used VTune before, and liked it. But first make sure you understand your code and what it does, and that the algorithm you're implementing is time efficient when dealing with the size of data you expect it to handle.

As an aside, using C doesn't mean your code will automatically run faster. I/O-bound code typically sees no performance improvement, and UI-heavy code might not benefit from a low-level language either. C is usually a better implementation language when you need low-level access, when interfacing with hardware or low-level operating system services, or when you have very specific and stringent performance requirements that would be difficult to meet in a high-level, garbage-collected language. Or if you happen to like C, but that's obviously a subjective matter.

Ori Pessach
+1  A: 

It might be worth showing your C code to make it easier for us to spot the problems.

kevchadders
A: 

Do you have access to the GNU toolchain? If so, check out "gprof". It's a profiler, and it's good for finding bottlenecks.

eduffy
Unfortunately, I am working with MSVS.
FL4SOF
+1  A: 

Check for memory allocations and function calls. If you are using gcc, use the -pg option to compile with profiling information and run the result through gprof. The VS Team System edition comes with a profiler of its own, so take your pick.
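For the gcc route, a minimal sketch of the workflow (the toy program and names below are made up, just something for the profiler to chew on):

    /* profile_me.c -- a toy program to demonstrate the gcc/gprof workflow.
     *
     *   gcc -pg -o profile_me profile_me.c   # -pg adds profiling instrumentation
     *   ./profile_me                         # running it writes gmon.out
     *   gprof profile_me gmon.out            # prints flat profile + call graph
     */
    #include <stdio.h>

    /* Deliberately slow function so it shows up at the top of the profile. */
    static unsigned long busy_work(unsigned long n)
    {
        unsigned long sum = 0;
        for (unsigned long i = 0; i < n; i++)
            sum += i % 7;
        return sum;
    }

    int main(void)
    {
        unsigned long total = 0;
        for (int i = 0; i < 200; i++)
            total += busy_work(1000000UL);
        printf("%lu\n", total);
        return 0;
    }

The flat profile then lists how much time was spent in each function and how often it was called, which is usually enough to point at the hot spot.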

dirkgently
+1  A: 

It's impossible to tell. None of the elements you name are really slow, and even if they were, it wouldn't automatically mean the whole program would be slow because of them.

You'd better run your code with profiling enabled and see which parts are the most costly. (How you actually do that depends on your platform.)

For information on MSVC, see this post or this blog entry about profiling under MSVS, or even this question (particularly the AMD CodeAnalyst answer).

jpalecek
+2  A: 

Don't waste time trying to find 'expensive' operations; there are almost none in C, apart from library calls, of course.

Instead, try to estimate how many times each part of your code executes. For example, say you're comparing each line of one file with each line of another. If each file has around a hundred lines, you'll do around ten thousand comparisons - nothing to worry about. But if you fetch each line by reading from the beginning of the file every time, you'll end up reading about half a million lines. Now that's no good. You'll need some genuinely random-access way to get at each line... or, even better, read about hashing.

In big-O notation: a full pairwise comparison is O(n × m), or roughly O(n²) if n and m are similar. But reaching a line by reading from the start of the file is O(n/2) on average, so the whole thing is O(n³/2) for reading plus O(n²) for comparing. With hashing it would be a·O(2n) + b·O(2n) + c·O(n²), or just O(n²).
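A rough sketch of the hashing idea in C (the fixed-size table, the djb2-style hash, and the line-length limit are arbitrary choices for illustration; error handling and collision tuning are left out):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define NBUCKETS 65536
    #define MAXLINE  4096

    struct entry {
        char *line;
        struct entry *next;
    };

    static struct entry *table[NBUCKETS];

    /* djb2-style string hash. */
    static unsigned hash_line(const char *s)
    {
        unsigned h = 5381;
        while (*s)
            h = h * 33 + (unsigned char)*s++;
        return h % NBUCKETS;
    }

    /* O(1) expected insert: prepend the line to its bucket's chain. */
    static void add_line(const char *line)
    {
        struct entry *e = malloc(sizeof *e);
        e->line = malloc(strlen(line) + 1);
        strcpy(e->line, line);
        unsigned h = hash_line(line);
        e->next = table[h];
        table[h] = e;
    }

    /* O(1) expected lookup: only lines in the same bucket are compared. */
    static int contains(const char *line)
    {
        for (struct entry *e = table[hash_line(line)]; e != NULL; e = e->next)
            if (strcmp(e->line, line) == 0)
                return 1;
        return 0;
    }

    int main(int argc, char **argv)
    {
        char buf[MAXLINE];
        if (argc != 3) {
            fprintf(stderr, "usage: %s file1 file2\n", argv[0]);
            return 1;
        }

        /* Pass 1: read the first file once and hash every line -- O(n). */
        FILE *f1 = fopen(argv[1], "r");
        while (fgets(buf, sizeof buf, f1) != NULL)
            add_line(buf);
        fclose(f1);

        /* Pass 2: read the second file once, looking each line up -- O(m). */
        FILE *f2 = fopen(argv[2], "r");
        while (fgets(buf, sizeof buf, f2) != NULL)
            if (contains(buf))
                printf("common: %s", buf);
        fclose(f2);
        return 0;
    }

Each file is read exactly once, so the quadratic cost of re-reading from the start of the file disappears.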

Optimise algorithms, not code.

Javier
"don't waste time trying to find 'expensive' operations. there's almost none in C, besides libraries, of course." - Dead right. thank you very much.
FL4SOF
+2  A: 

This is a well-worn subject.

Profiling is one option, but there are a couple of old-fashioned techniques that work surprisingly well if you have a debugger:

  • If it won't take all day, single-step the code all the way through. I guarantee you will get a very good idea of whether it's doing anything it doesn't really need to.

  • If that would take too long, just give it enough data, or have the program repeat itself at the top level, so that it runs a nice long time, at least several seconds. While it is running, manually interrupt it and take note of exactly what it is doing and why. Do this several times. Guaranteed, you will get the same insight you could get from single-stepping it.

Don't do what most people do. What most people do is 1) talk bravely about profiling, and then 2) guess what the problem is and fix that. If you are scrounging around for "quick operations" you are missing the point. You will never fix the right thing until you prove what it is by one of the investigations above.

This approach is explained on WikiHow, and there is a good explanation on SO.

Mike Dunlavey