views: 308
answers: 6
Hi! I was wondering how to optimize loops for systems with very limited resources. Say we have a basic for loop like this (written in JavaScript):


for (var i = someArr.length - 1; i > -1; i--) { someArr[i]; }


I honestly don't know; isn't != cheaper than > ?

I would be grateful for any resources covering the computational cost of basic operators like these, as well as >>, ~, !, and so on.

+1  A: 

Most comparisons have the same cost, because the processor simply compares the operands in full, then makes its decision based on the flags generated by that comparison, so the comparison condition doesn't matter at all. Some architectures do try to accelerate this process for particular values, though, such as comparisons against 0.

As far as I know, bitwise operations are the cheapest operations, slightly faster than addition and subtraction. Multiplication and division are a little more expensive, and comparison is the most expensive operation.
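If you want to test this for the exact case in the question, a rough micro-benchmark is easy to write. A minimal sketch (all names here are invented for illustration, and JIT warm-up plus timer granularity make single runs noisy, so compare magnitudes rather than exact numbers):

    // Hypothetical micro-benchmark of the two loop conditions from the
    // question. JIT warm-up, garbage collection and timer granularity all
    // add noise, so run it several times and compare magnitudes only.
    var someArr = new Array(1000000);
    for (var k = 0; k < someArr.length; k++) { someArr[k] = k; }

    function timeIt(label, body) {
      var start = Date.now();
      for (var run = 0; run < 100; run++) { body(); }
      console.log(label + ": " + (Date.now() - start) + " ms");
    }

    timeIt("i > -1 ", function () {
      var x = 0;
      for (var i = someArr.length - 1; i > -1; i--) { x += someArr[i]; }
      return x;
    });

    timeIt("i != -1", function () {
      var x = 0;
      for (var i = someArr.length - 1; i != -1; i--) { x += someArr[i]; }
      return x;
    });

On most engines the two timings come out indistinguishable, which is the point: the surrounding work dominates the comparison itself.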

Havenard
+9  A: 

It is extraordinarily unlikely that such micro-optimizations will make a noticeable difference to your code in any but the most extreme (real-time embedded systems?) circumstances. Your time would probably be better spent making your code readable and maintainable.

When in doubt, always begin by asking Donald Knuth:

http://shreevatsa.wordpress.com/2008/05/16/premature-optimization-is-the-root-of-all-evil/

Or, for a slightly less high-brow take on micro-optimization:

http://www.codinghorror.com/blog/archives/000185.html

Matt Nizol
Missing the point of Knuth's quote. Knuth only said it was a bad idea around 97% of the time, implying that it is meaningful and useful 3% of the time. It is not "extraordinarily unlikely" that micro-optimization will make a noticeable difference. That happens all the time. It just means that unless you know *what* to optimize, you're likely to hit the 97% where it doesn't achieve a noticeable difference.
jalf
Excellent point. In my defense, I did say "such micro-optimizations" rather than "all micro-optimizations" as the original poster was concerned about the difference between != and > in a javascript for loop.
Matt Nizol
Why optimize Javascript? http://www.6502asm.com/ <= This is a 6502 emulator and assembler in a JavaScript app! Really cool, and obviously the result of a sick mind with way too much time on their hands, but still REALLY cool. (A lot of fun things are the result of a sick mind, so please take no offense.)
NoMoreZealots
A: 

Premature optimization can be dangerous. The best approach is to write your application without worrying about that, then find the slow points and optimize those. If you are really worried about performance, use a lower-level language: an interpreted language like JavaScript will cost you some processing power compared to a lower-level language like C.

RHicke
+6  A: 

Performance on a modern CPU is far from trivial. Here are a few things that complicate it:

  • Computers are fast. Your CPU can execute upwards of 6 billion instructions per second, so even the slowest instruction can be executed millions of times per second. That means it only really matters if you use it very often.
  • Modern CPUs have hundreds of instructions in flight simultaneously. They are pipelined: while one instruction is being read, another is reading from registers, a third is executing, and a fourth is writing back to a register. Modern CPUs have 15-20 such stages, and on top of this they can execute 3-4 instructions at the same time in each stage, and reorder them. If the multiplication unit is busy with another instruction, perhaps an addition instruction can be executed instead. So even if you have some slow instructions mixed in, their cost can usually be hidden very well by executing other instructions while waiting for the slow one to finish.
  • Memory is hundreds of times slower than the CPU. The instructions being executed don't really matter if their cost is dwarfed by retrieval of data from memory. And even this isn't reliable, because the CPU has its own onboard caches to attempt to hide this cost (see the sketch after this list).
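Here is a hedged sketch of the memory point: the two loops below perform the same number of reads, and any large timing gap between them is the cache, not the instructions (assumes your engine supports typed arrays; exact numbers depend entirely on engine and hardware):

    // Same number of array reads in both loops; only the access pattern
    // differs. On most machines the strided walk defeats the cache and is
    // noticeably slower, dwarfing any difference between single instructions.
    var N = 1 << 24;                 // 16,777,216 elements
    var data = new Float64Array(N);

    var sum = 0;
    var t0 = Date.now();
    for (var i = 0; i < N; i++) { sum += data[i]; }        // sequential
    var t1 = Date.now();
    for (var j = 0; j < 4096; j++) {                       // strided
      for (var k = j; k < N; k += 4096) { sum += data[k]; }
    }
    var t2 = Date.now();

    console.log("sequential: " + (t1 - t0) + " ms");
    console.log("strided:    " + (t2 - t1) + " ms, checksum " + sum);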

So the short answer is "don't try to outsmart the compiler". If you are able to choose between two equivalent expressions, the compiler is probably able to do the same, and will pick the most efficient one. The cost of an instruction varies depending on all the above factors: which other instructions are executing, what data is in the CPU's cache, which precise CPU model the code is running on, and so on. Code that is super efficient in one case may be very inefficient in other cases. The compiler will try to pick the most generally efficient instructions, and schedule them as well as possible. Unless you know more than the compiler about this, you're unlikely to be able to do a better job of it.

Don't try such micro-optimizations unless you really know what you're doing. As the above shows, low-level performance is a ridiculously complex subject, and it's very easy to write "optimizations" that result in far slower code, or that just sacrifice readability for something that makes no difference at all.

Further, most of your code simply doesn't have a measurable impact on performance. People generally love quoting (or misquoting) Knuth on this subject:

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil

People often interpret this as "don't bother trying to optimize your code". If you actually read the full quote, some much more interesting consequences should become clear:

Most of the time, we should forget about micro-optimizations. Most code is executed so rarely that optimizations won't matter. Keeping in mind the number of instructions a CPU can execute per second, it is obvious that a block of code has to be executed very often for optimizations in it to have any effect. So about 97% of the time, your optimizations will be a waste of time.

But he also says that sometimes (3% of the time), your optimizations will matter. And obviously, looking for those 3% is a bit like looking for a needle in a haystack. If you just decide to "optimize your code" in general, you're going to waste your time on the first 97%. Instead, you need to first locate the 3% that actually need optimizing.

In other words, run your code through a profiler, and let it tell you which code takes up the most CPU time. Then you know where to optimize. And then your optimizations are no longer premature.
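As a crude first pass before reaching for a full profiler, JavaScript's console timers can already narrow things down. A minimal sketch (rawLines, parseLine and processRecord are made-up stand-ins for your own code, and console.time may not exist in every engine):

    // Bracket each suspect region with console timers and see which
    // one dominates before optimizing anything.
    console.time("parse");
    var records = rawLines.map(parseLine);
    console.timeEnd("parse");

    console.time("process");
    var results = records.map(processRecord);
    console.timeEnd("process");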

jalf
Superlative response. Were I the OP, I would accept this answer.
Matt Nizol
Thanks, that pretty much sums it up, considering the approach. I'll have to profile my code against some real-case array and just see. Modern CPUs are fast, but CPUs in embedded systems are far from that. Hence my question. Cheers!
Kosmotaur
Wait a second. You're running javascript on an embedded system and you're worried about performance?
jalf
Widgets for cell phones written in Javascript.
Kosmotaur
A: 

In this particular case, > vs != is probably not a performance issue. HOWEVER, > is generally the safer choice, because it prevents cases where modified code runs off into the weeds and gets stuck in an infinite loop.
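A contrived sketch of that failure mode (values made up for illustration; don't actually run the first loop):

    // The condition i != -1 was fine with i--, but if someone later
    // changes the step to i -= 2, i goes 10, 8, 6, 4, 2, 0, -2, -4, ...
    // and never equals -1: an infinite loop.
    for (var i = 10; i != -1; i -= 2) { }   // infinite loop!

    // The same edit with i > -1 still terminates once i goes negative:
    for (var i = 10; i > -1; i -= 2) { }    // 10, 8, 6, 4, 2, 0, done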

NoMoreZealots
A: 

That's like asking for a fish, when I would rather teach you to fish.

There are simple ways to see for yourself how long things take. My favorite is to just copy the code 10 times and wrap it in a loop that runs 10^8 times. Since that comes to 10^9 executions in total, if I run it and look at my watch, the number of seconds it takes translates directly to nanoseconds per execution.
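A minimal sketch of that recipe in JavaScript, using the > comparison from the question as the code under test (Date.now() stands in for the watch; treat the result as a rough order of magnitude, since a modern JIT may simplify the repeated expression, so keep and print the accumulated result):

    // 10 copies of the expression under test inside a 10^8 loop is 10^9
    // executions, so a reading of S seconds means roughly S nanoseconds
    // per execution. Scale the loop count down if your engine is slow.
    var x = 0;
    var start = Date.now();
    for (var n = 0; n < 1e8; n++) {
      x += (n > -1); x += (n > -1); x += (n > -1); x += (n > -1); x += (n > -1);
      x += (n > -1); x += (n > -1); x += (n > -1); x += (n > -1); x += (n > -1);
    }
    var seconds = (Date.now() - start) / 1000;
    console.log("~" + seconds + " ns per execution (checksum " + x + ")");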

Saying "don't do premature optimization" is a "don't be". If you want a "do be", you could try a proactive performance-tuning technique like this.

BTW my favorite way of coding your loop is:

for (i = N; --i >= 0;) { /* ... */ }
Mike Dunlavey