views: 454
answers: 11

Question of the century? I basically want to know which would be more efficient: writing this code as several separate variables, or using small arrays.

int x = 34;
int y = 28;
int z = 293;

vs

double coordinate[3] = {34, 28, 293};

I have a coordinate struct which I will use in the following way:

typedef struct coordinates_t {
    double x;
    double y;
    double z;
} coordinates;


typedef struct car_t {
    coordinates start; // car starting point
    coordinates location; // car current Location
    coordinates targCarVector; // Vector to car from target
    coordinates altitude; // Altitude of car
    coordinates distance; // Distance from car start to current position
} car;

I'll need to do things like:

distance = car1.location - car1.start;

If I don't use an array I'll have to have a lot of lines of code, but if I use an array I'll have to use loops. Are arrays and loops more memory/cpu intensive? I am basically trying to see which is the most efficient way of writing this code.
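
For illustration, here's a rough sketch of the two styles I'm comparing for the distance calculation above (the pos[3] array member is hypothetical - my real struct has separate x/y/z):

/* style 1: separate members, one line per component */
car1.distance.x = car1.location.x - car1.start.x;
car1.distance.y = car1.location.y - car1.start.y;
car1.distance.z = car1.location.z - car1.start.z;

/* style 2: coordinates holding a hypothetical double pos[3], plus a loop */
for (int i = 0; i < 3; ++i)
    car1.distance.pos[i] = car1.location.pos[i] - car1.start.pos[i];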

Thanks, DemiSheep

+20  A: 

Is efficiency more important than maintainability and readability? The answer is no. Even in a time-critical application, the program will spend 90% of its time in under 10% of the code, so only that 10% needs to be coded as efficiently as possible.

If you haven't measured and found which 10% is the culprit, you almost certainly will be optimising code which doesn't take much runtime in the first place. This is a waste of time.

Philip Potter
+1 Spot on. Premature optimization is the root of all evil.
Carl Manaster
*We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.* -- Donald Knuth
Paul Nathan
@Paul Nathan: The master always says it better than I could.
Philip Potter
@Paul: You're more-or-less right, but these ideas drive me nuts, with blithe aphorisms about 10% and 3%. It seems that's what people say because they've heard other people say it. Maybe it makes sense in little 1-page academic algorithms where the call stack never gets deeper than 2. In real-world mega-liners, more than 90% of the time (in my experience) the problem is 1-line function call sites invoking massive trunks of call tree, needlessly, due to over-general data structure design.
Mike Dunlavey
@Mike: that's a problem of data structure and algorithm design, not a problem of code optimisation such as this question is talking about. (I'm sure Knuth picked the number 97% out of thin air - the actual number is not important. And I'm sure Knuth knows about mega-liners.)
Philip Potter
I put some of my experience performance tuning in this answer. (http://stackoverflow.com/questions/926266/performance-optimization-strategies-of-last-resort/927773#927773) It is very unlike most people's understanding of optimization.
Mike Dunlavey
@Mike: While I agree that some people (usually those who treat C++ as _just another OO language_) mindlessly abstract without giving enough thought to efficiency, IME more people (often those who treat C++ as _a better C_) mindlessly slaughter abstractions (which are very important in multi-MLoC projects) on the altar of efficiency, without gaining it. _C++ shines bright where you need to_ __combine__ _abstractions with efficiency._ Just look at the STL, IMO a masterpiece of combining them. But that means catering to C++'s real strength, not treating it as pure OO or a better C.
sbi
In a multi-MLoC project I had been working on for several years, the number of crashing bugs nicely correlated with the coding style in different parts of the application. Where developers manually fiddled with memory and other resources, crashes and logic errors were reported by the testers and from the field. Where developers had used smart pointers and other abstractions, only logic errors were reported. Speed was a prime target (the application could spend half an hour analyzing 2MB of input data), and it was gained mainly through delay-loading data and storing less of it (aka _algorithmic changes_).
sbi
By loading one index file instead of 1000 data files (and delay loading those data files later in the background), I could speed up my part to the point where the application's startup time dropped below 20%. You'll never get this through sacrificing abstractions. In fact, the opposite was the case: This significantly increased the complexity of my share (~100kLoC) of the code.
sbi
@sbi: I can certainly see how those measures would help, and I'm not against abstraction, used intelligently. In our group, we have good people who talk the line about profiling, but when there's a performance problem, I'm the one who finds it, in a few pauses. Nearly always it is because OO encourages people to create unnormalized data structures and try to keep them consistent with notifications, and those things ripple way beyond what they expected.
Mike Dunlavey
@Mike: Yes, as I said, treating C++ as _just another OO language_ is stupid. If you want that, there's better ones out there. If you need C++, it's because you need more.
sbi
A: 

As always, you'll need to profile your code to be sure.

Having said that, I'd suggest going for an array and loops - your code should be more concise/maintainable, and the compiler should be able to do a good job of optimising out / unrolling all the little constant-sized loops, which is effectively what you would be doing by hand if you used x,y,z co-ordinates for every vector.

On a completely unrelated note, I see your car has altitude. Is it a flying car? If so then definitely +1 for the cool application.

mikera
+2  A: 

Use an array, compile with -funroll-loops. You get the benefits of both.
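
For example, with GCC the invocation would look something like this (the file names are made up, and the exact flags depend on your compiler and version):

gcc -O2 -funroll-loops car.c -o car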

Sjoerd
+7  A: 
sbi
"use a profiler to find the exact spots where the program spends too much time" Too many people think this means just "where the *program counter* spends too much time, thereby missing poorly-justified function calls and I/O which, in big software in my experience, are dominant by far.
Mike Dunlavey
@Mike: Hence my "Measurable run-time improvements usually come from algorithmic changes".
sbi
Mike Dunlavey
@Mike: Yeah, see my comment to the other answer. Pre-loading an index of the data instead of loading the actual data... to me that's an algorithmic change.
sbi
+2  A: 

Compilers can "unroll" loops if they think it will help. So the compiler might quietly replace the following code:

for (i = 0; i < 3; ++i) {
    c[i] = a[i] - b[i];
}

with:

c[0] = a[0] - b[0];
c[1] = a[1] - b[1];
c[2] = a[2] - b[2];

The compiler will take its best guess as to whether it's worth doing this, taking into account the costs of the operations involved and the optimisation flags you provide.

There is no simple answer to "which one will be faster?", but if there were, you can be sure that with maximum optimisation your compiler would use it.

Steve Jessop
A: 

Go for the more correct way - which would be using loops and arrays - neither of which results in more memory usage (less, if anything, since the memory required by all those separate car1, car2, car3... instructions will be greater) - and CPU-usage-wise you're looking at the tiniest of differences.

Will A
+1  A: 
John Bode
+7  A: 

The first question is: Do you want to optimize it? Most probably, you don't want to. At least not if you "always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live." Readability, clarity of intent and maintainability always come first.

The second question is: Is it worth optimizing? In 97% of cases it isn't, according to Donald Knuth - and you don't question Knuth, do you? Another common rule of thumb is the 80/20 rule, i.e. 80% of execution time is spent in 20% of the code. If you optimize at all, first profile to know where to optimize. If you guess, you're wrong. Period.

The third question is: CAN you optimize it? No, you can't - at least not this easily. Do you think you're smarter than the hundreds of programmers who wrote your compiler over many decades? You aren't. If the actual implementation of your algorithm and data structures can be optimized, you can assume your compiler can do that on its own. The compiler can do loop unrolling, instruction reordering, combining variables with non-overlapping lifetimes, struct layout optimization, and much, much more - and in this era it's even better than most assembly programmers in most cases. And even if there's a bit of potential left, you'd do better to focus on implementing a better algorithm. No compiler can turn O(n^2) into O(n log n), but maybe a smart computer scientist did, and you can implement his algorithm to get much better performance than any micro-optimization can yield.

delnan
The third question gets very arguable in many languages that aren't, say, C. E.g., BASIC. :-) /pedantic
Paul Nathan
Yeah, but you can always sit and wait for a sufficiently smart compiler :) If one cares so much about speed that not even optimized algorithms (which are known to have more influence than, well, *micro*-optimization) are enough, you should probably rewrite it in C or similar anyway.
delnan
@delnan: Technically, yes, but there might be, hmm, *pointy haired* type externalities that prevent you from selecting a different language.
Paul Nathan
@DemiSheep: Wow, this got accepted over Philip's +19 post?
delnan
"you don't question Knuth, do you?". I would say, always question "authority". Knuth puts on his shoes one at a time, just like the rest of us.
Mike Dunlavey
@Mike Dunlavey: Hey, I'm the last to insist on authority. But certain persons (Knuth e.g.) earned their authority, which doesn't mean you have to obey, but you should think twice before you disregard his advice. Besides, it was mainly a joke ;)
delnan
@delnan: I went the whole academic route, and, no doubt, Knuth is a star. That said, the ivory tower cuts people off from the real world in qualitative ways so academics, no matter how smart, tend to live in an echo chamber, skewing their perception. They can't help it. But I understand your point. Cheers.
Mike Dunlavey
@delnan @Mike let's not forget that, in context, it's clear Knuth pulled the 97% number out of his ass. We shouldn't follow Knuth's words blindly, we should value Knuth because his ideas help us think more clearly.
Philip Potter
+3  A: 

If you really want to micro-optimize this, use the SIMD instruction capabilities of your CPU. If you are using an x86 platform, you can use MMX or SSE instructions to do vector arithmetic instead of adding each part of the coordinate individually (your compiler may not generate these without special command-line switches or inline assembly). This will probably amount to a larger speedup than switching between individual variables and an array. I say "probably" because there is no way to tell for sure without trying it both ways and measuring the execution time.
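
As a rough sketch of what that can look like with SSE2 intrinsics (this assumes an x86 compiler that ships emmintrin.h; the coordinate layout and function name are made up):

#include <emmintrin.h>  /* SSE2 intrinsics */

/* Subtract two x,y,z coordinates stored as double[3]:
   x and y are handled by one packed SSE2 subtract, z by a scalar subtract. */
static void coord_sub(const double a[3], const double b[3], double out[3])
{
    __m128d va = _mm_loadu_pd(a);            /* a[0], a[1] */
    __m128d vb = _mm_loadu_pd(b);            /* b[0], b[1] */
    _mm_storeu_pd(out, _mm_sub_pd(va, vb));  /* out[0], out[1] */
    out[2] = a[2] - b[2];
}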

bta
A: 

Please profile your code and find out what the main source of the inefficiency is. Efficiency can be measured by the runtime execution of the code.

Some tools for doing this, such as gprof, are available as open source.
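
A typical gprof session looks roughly like this (a sketch - the file names are made up):

gcc -pg -O2 car.c -o car    # build with profiling instrumentation
./car                       # run the program; this writes gmon.out
gprof car gmon.out > profile.txt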

Praveen S
+1  A: 

The answer of the century is

Don't put the cart before the horse.

In other words, profile first.

Everybody "knows" this, but a large category of questions on SO are of the form "Which is faster, X or Y?"

This elicits guesses, and when you're concerned about performance, guesses aren't worth much, because if you have a performance problem, it is probably somewhere else entirely.

Mike Dunlavey
+1. It's better to learn how to measure performance for yourself, than to ask for performance rules of thumb from others.
Philip Potter
@Philip: Thx. I'm afraid I've made myself a pain in the neck discoursing on this subject, as in: http://stackoverflow.com/questions/1777556/alternatives-to-gprof/1779343#1779343
Mike Dunlavey