I came across a presentation (dalvik-vm-internals) on the Dalvik VM, in which it is mentioned that, of the loops below, we should use (2) and (3) and avoid (7).

(1) for (int i = initializer; i >= 0; i--)

(2) int limit = calculate limit; for (int i = 0; i < limit; i++)

(3) Type[] array = get array; for (Type obj : array)

(4) for (int i = 0; i < array.length; i++)

(5) for (int i = 0; i < this.var; i++)

(6) for (int i = 0; i < obj.size(); i++)

(7) Iterable list = get list; for (Type obj : list)

Comments: I feel that (1) and (2) are the same. In (3) and (4), the length of the array has to be calculated every time, so this could be avoided. (5) and (6) are the same as (4), calculating the size every time. (7) is asked to be avoided because the list is of an Iterable type??

One more thing: in case we have infinite data (assume the data is coming in as a stream), which loop should we consider for better efficiency?

Please comment on this...

A: 

With infinite data, none of the examples are good enough. Best would be to do

for (;;) {
    list.poll(); // handle concurrency; in Java, for example, use a blocking queue
}
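
For example, a minimal sketch of that idea with a blocking queue (the queue, the String element type, and the handle() method are illustrative assumptions, not part of the original answer):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class StreamConsumer {
    // some producer thread is assumed to call queue.put(...) as stream data arrives
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<String>(1024);

    public void consume() throws InterruptedException {
        for (;;) {
            String item = queue.take(); // blocks until data is available, so the loop doesn't spin
            handle(item);
        }
    }

    private void handle(String item) {
        // process one element of the stream
    }
}

take() parks the consumer thread while the queue is empty, which is what makes this preferable to busy-polling an ordinary list.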
krico
Does it make any difference if we use... while (true) { list.poll(); } instead?
Vamsi
It shouldn't make a difference, no.
krico
A: 

If that's what they recommend, that's what they've optimized the compiler and VM for. The ones you feel are the same aren't necessarily implemented the same way: the compiler can use all sorts of tricks with data and path analysis to avoid naively expensive operations. For instance, the array.length value can be cached, since an array's length never changes.
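
As an illustrative sketch (array and process() are placeholder names), hoisting the bound by hand is exactly what turns form (4) into form (2); an optimizing compiler may do the equivalent for you:

// form (4): the bound expression is re-evaluated on every iteration
for (int i = 0; i < array.length; i++) {
    process(array[i]);
}

// form (2): the bound is computed once into a local variable
int limit = array.length;
for (int i = 0; i < limit; i++) {
    process(array[i]);
}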

They're ranked from most to least efficient, but (1) is 'unnatural': I agree, wouldn't you? The trouble with (7) is that an iterator object is created and has to be GC'ed.
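
Roughly, this is what the two for-each forms expand to (a sketch of the usual desugaring, using String as a stand-in element type):

import java.util.Iterator;

class LoopShapes {
    // (7) for (String obj : list) over an Iterable is compiled roughly as:
    static void iterateList(Iterable<String> list) {
        Iterator<String> it = list.iterator(); // a new iterator object that later has to be GC'ed
        while (it.hasNext()) {
            String obj = it.next();
            // loop body
        }
    }

    // (3) for (String obj : array) compiles down to plain index arithmetic, with no extra object:
    static void iterateArray(String[] array) {
        for (int i = 0; i < array.length; i++) {
            String obj = array[i];
            // loop body
        }
    }
}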

Note carefully when the advice should be heeded. It's clearly intended for bounded iteration over a known collection, not the stream case. It's only relevant if the loop has a significant effect on performance and energy consumption ('operating on the computer scale'). The first law of optimization is "Don't optimize". The second law (for experts) is "Don't optimize yet". Measure first (both execution times and CPU consumption) and optimize later: this applies even to mobile devices.
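
A crude way to 'measure first' is to time each variant on the target device (a sketch; runLoopVariant() is a hypothetical wrapper around whichever loop you want to test):

// time one variant of the loop, then repeat for the others and compare
long start = System.nanoTime();
for (int rep = 0; rep < 1000; rep++) {
    runLoopVariant();
}
long elapsedMs = (System.nanoTime() - start) / 1000000L;
System.out.println("1000 runs took " + elapsedMs + " ms");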

What you should consider is the preceding slides: try to sleep as often and as long as possible, while responding quickly to changes. How you do that depends on what kind of stream you're dealing with.

Finally, note that the presentation is two years old and may not fully apply to Android 2.2 devices, where, among other things, a JIT compiler has been added.

Pontus Gagge
Yes, I think I oversimplified!! But is there any difference between (1) and (2) because of the post-decrement and post-increment?
Vamsi
See my revision. Comparison to an arbitrary integer is usually implemented as a subtraction and comparison to zero, hence (1) has a very slight edge in performance over (2), but requires us to think backwards (and makes the code a touch less maintainable). Go for (1) only if you **know** there's a performance bottleneck or CPU power hotspot in that code. And even then...
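
To make the 'thinking backwards' trade-off concrete, here is the same array walk written both ways (items and process() are placeholder names, not from the original discussion):

// (1): the test against zero is essentially free, but the loop counts down and visits elements in reverse order
for (int i = items.length - 1; i >= 0; i--) {
    process(items[i]);
}

// (2): slightly more work in the loop test, but it counts up in the natural reading order
int limit = items.length;
for (int i = 0; i < limit; i++) {
    process(items[i]);
}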
Pontus Gagge
Thanks!! That was a very good explanation.
Vamsi
A: 

(1) and (2) are really different. (2) needs an extra subtraction to compare against the limit, while comparing to 0 in (1) doesn't. Even better, on most processors (and with well-optimized code) no explicit comparison is needed at all for i >= 0: the processor can use the negative flag set by the last decrement (i--).

So the end of loop (1) looks like this (in pseudo-assembler):

--i               # the decrement sets the negative flag for free
jump-if-neg       # exit the loop once i drops below zero

while loop (2) looks like:

++i
i - limit         # an extra subtraction just to set the flags
jump-if-not-neg   # exit the loop once i reaches limit

That doesn't make a big difference unless the code in your loop is really small (like a basic C string operation). And it might not hold for interpreted languages.

mb14