views:

124

answers:

4

Preferable for x86-32 gcc implementation

+1  A: 

It depends on the particular case, and it is likely to vary substantially with the platform, hardware, operating system, the function itself, and its inputs, so the general answer is "no." It also depends on what you mean by "time"; there is execution time and wall-clock time, among other things.

The best way to determine how long something will take is to run it, under conditions as close to the real ones as you can manage. If performance is an issue, profiling and then tuning the hot spots is your best bet.
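
For instance (a minimal sketch of the usual GNU workflow; the file name and the body of work() are made up for illustration), you can build with profiling enabled and let gprof report where the time actually goes:

/* example.c -- build and profile with:
 *   gcc -m32 -pg example.c -o example
 *   ./example                (writes gmon.out)
 *   gprof example gmon.out
 */
#include <string.h>

static char buf[1024];

static void work(void)
{
    int i;
    for (i = 0; i < 100000; i++)
        memset(buf, i & 0xff, sizeof buf);
}

int main(void)
{
    work();
    return 0;
}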

Certain real-time systems place constraints on how long operations will take, but this is not specific to C.

WhirlWind
+9  A: 

Considering that modern C compilers optimize like crazy, I think you'll find timings to be highly situation-dependent. What would be a slow operation in one situation might be optimized away entirely, replaced with a faster operation, or the compiler might be able to use a faster 8- or 16-bit version of the same instruction, etc.
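
For example (just a sketch; what actually happens depends on the gcc version and flags), a fixed-size memcpy is typically inlined into a single 32-bit load and store rather than left as a library call, which you can verify by reading the assembly gcc emits:

/* copy4.c -- inspect the generated code with: gcc -m32 -O2 -S copy4.c */
#include <string.h>

void copy4(char *dst, const char *src)
{
    /* with optimization enabled, gcc usually turns this into one
       32-bit load and one store instead of an actual call to memcpy */
    memcpy(dst, src, 4);
}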

Paul Tomblin
+1 for situation dependence. Still, relative timings for primitives are a good reference (as here for the .NET CLR: http://msdn.microsoft.com/en-us/library/ms973852.aspx)
peterchen
+1  A: 

I don't think such a thing is really possible, once you consider the difference in time for the same function given different arguments. For example, assuming a function costOf did what you wanted, which costs more, memcpy or printf? Both, depending on the arguments:

costOf(printf("Hello World")) > costOf(memcpy(a, b, 4))
costOf(printf("Hello World")) < costOf(memcpy(a, b, 4 * 1024 * 1024 * 1024))
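
A rough way to see this on a real machine (only a sketch: single-shot clock_gettime measurements are noisy, and it uses a 64 MiB buffer rather than 4 GiB so it fits in a 32-bit address space):

/* bench.c -- build with: gcc -m32 -O2 bench.c -o bench   (add -lrt on older glibc) */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static double elapsed_ns(struct timespec t0, struct timespec t1)
{
    return (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
}

int main(void)
{
    size_t big = 64 * 1024 * 1024;     /* 64 MiB */
    char *a = malloc(big), *b = malloc(big);
    struct timespec t0, t1;

    if (!a || !b)
        return 1;
    memset(b, 'x', big);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    printf("Hello World\n");
    clock_gettime(CLOCK_MONOTONIC, &t1);
    fprintf(stderr, "printf:        %.0f ns\n", elapsed_ns(t0, t1));

    clock_gettime(CLOCK_MONOTONIC, &t0);
    memcpy(a, b, 4);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    fprintf(stderr, "memcpy 4 B:    %.0f ns\n", elapsed_ns(t0, t1));

    clock_gettime(CLOCK_MONOTONIC, &t0);
    memcpy(a, b, big);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    fprintf(stderr, "memcpy 64 MiB: %.0f ns\n", elapsed_ns(t0, t1));

    free(a);
    free(b);
    return 0;
}
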
torak
I'd bet on the OP meaning the big-O cost.
Pasi Savolainen
@Pasi Savolainen -- But the OP asked about the x86-32 gcc implementation.
nategoose
A: 

IMHO, this is a micro-optimization, which should be disregarded until profiling has been performed. In general, library routines are not the main consumers of execution time; resource accesses (such as I/O) and programmer-written functions usually are.

I also suggest spending more time on a program's quality and robustness rather than worrying about micro-optimizations. With computing power and memory sizes increasing, size and execution time matter less to customers than quality and robustness. A customer is more willing to wait for a program that produces correct output (or performs all requirements correctly) and doesn't crash than to demand a fast program that has errors or crashes the system.

To answer your question, as others have stated, the execution times of library functions depend upon the library developer, the platform (hardware) and the operating system. Some platforms can execute floating-point instructions as fast as, or faster than, integer operations. Some libraries delegate functions to the operating system, while others implement their own. Some functions are slower because they are written to work on a variety of platforms, while the same functions in other libraries can be faster because they are tailored to the specific platform.

Use the library functions that you need and don't worry about their speed. Use tested third-party libraries rather than rewriting your own code. If the program is executing very slowly, review the design and profile it. Perhaps you can gain more speed by using Data-Oriented Design rather than Object-Oriented Design or procedural programming (see the sketch below). Again, concentrate your efforts on developing quality, robust code while learning how to produce software more efficiently.
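
As a sketch of what the data-oriented approach can look like in C (the particle example and the array size are made up for illustration): keep fields that are processed together in their own arrays, so a loop over one field does not drag the rest of every record through the cache.

#define N 10000

/* Array-of-structs: a loop that only updates positions would still pull
   vx, vy, vz and id through the cache for every particle. */
struct particle { float x, y, z, vx, vy, vz; int id; };
struct particle aos[N];

/* Struct-of-arrays: the position update touches only the arrays it needs. */
struct particles {
    float x[N], y[N], z[N];
    float vx[N], vy[N], vz[N];
    int id[N];
} soa;

void update_positions(float dt)
{
    int i;
    for (i = 0; i < N; i++) {
        soa.x[i] += soa.vx[i] * dt;
        soa.y[i] += soa.vy[i] * dt;
        soa.z[i] += soa.vz[i] * dt;
    }
}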

Thomas Matthews
Sorry, but where's the optimization - micro or otherwise?
peterchen