views: 144
answers: 2
In certain applications I need to collapse nested loops into one while retaining the individual index information.

for j in N:
  for i in M:
    ... A(i,j) ...

// Collapse the loops
for ij in M*N:
  ... A(i,j) ...

So I have looked at the obvious ways to recover i,j from ij: division/modulo (an expensive operation) and if statements (which break vectorization and cause branch-prediction problems). In the end I came up with the following (using C-style comparisons):

j += (i == M);    // a row of M has just finished: advance j ...
i *= (i != M);    // ... and reset i, without a branch
++i, ++ij;        // step the inner index and the collapsed index
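For concreteness, the same idea as a minimal, self-contained C sketch (0-based indices, with the increment moved ahead of the comparisons so the test is against M; the sizes and the use of A are just placeholders):

#include <stdio.h>

#define M 3
#define N 4

int main(void) {
    double A[N][M] = {{0.0}};
    int i = 0, j = 0;                 /* during iteration ij: i == ij % M, j == ij / M */
    for (int ij = 0; ij < M * N; ++ij) {
        A[j][i] += 1.0;               /* ... use A(i,j) here ... */
        ++i;
        j += (i == M);                /* a row of M is done: advance j */
        i *= (i != M);                /* and wrap i back to 0          */
    }
    printf("A[0][0]=%g A[N-1][M-1]=%g\n", A[0][0], A[N - 1][M - 1]);  /* both 1 */
    return 0;
}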

Is there perhaps an even better way to do that? Thanks

A: 

Going the other way might be cheaper.

for j in N:
  for i in M:
    ij = j*M + i
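As a self-contained C sketch of that suggestion (illustrative only; the running counter and the closed form j*M + i always agree):

#include <assert.h>
#include <stdio.h>

#define M 3
#define N 4

int main(void) {
    int ij = 0;                       /* collapsed index, carried along */
    for (int j = 0; j < N; ++j) {
        for (int i = 0; i < M; ++i) {
            assert(ij == j * M + i);  /* running counter == closed form */
            /* ... use A(i,j) and ij here ... */
            ++ij;
        }
    }
    printf("ok: %d iterations\n", ij);
    return 0;
}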
ralu
or, `ij=0; for j in N: for i in M: ...; ij += 1`
Jason Orendorff
+7  A: 

In general, collapsing the loop as described offers no performance advantage.

Compilers do sometimes collapse such loops, but typically in unexpected ways.

In particular languages, or on particular platforms, you can speed up loops in general by:

  • counting downwards
  • making the function called in the body 'inline', or putting the code in the loop body rather than in a separate function (both illustrated in the sketch after this list)
  • configuring the compiler - typically via command-line options - to 'unroll' loops and to remove frame pointers and such
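A rough illustration of the first two bullets (a hedged sketch; whether it actually helps depends entirely on the compiler and target; for the third bullet, GCC-style compilers expose options such as -funroll-loops and -fomit-frame-pointer):

/* Body kept 'inline' so the loop pays no call overhead. */
static inline double body(const double *a, int k) { return a[k] * 2.0; }

/* Counting downwards lets the exit test become a compare against zero,
   which some targets encode slightly more cheaply. */
double sum_down(const double *a, int n) {
    double s = 0.0;
    for (int k = n - 1; k >= 0; --k)
        s += body(a, k);
    return s;
}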

But in all cases you should profile your code first to see whether such efforts are warranted.

Generally, in my experience, nested loops like this are dominated by:

  1. containers; avoid boxing and bounds checking if possible, when you know you're safe
  2. the cost of invoking other methods in them; use 'inline' if that's available
  3. pipeline stalls caused by bad locality of reference; rearrange your memory if possible (see the sketch after this list)
  4. pipeline stalls caused by conditionals; fewer ifs and indirect references are better
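To illustrate point 3, a hedged sketch of how traversal order interacts with C's row-major layout (sizes and names are arbitrary):

#define ROWS 1024
#define COLS 1024

static double data[ROWS][COLS];       /* row-major: data[r][c], c varies fastest */

/* Good locality: the inner loop walks contiguous memory. */
double sum_rowwise(void) {
    double s = 0.0;
    for (int r = 0; r < ROWS; ++r)
        for (int c = 0; c < COLS; ++c)
            s += data[r][c];
    return s;
}

/* Poor locality: each step jumps a whole row, so far more cache misses. */
double sum_colwise(void) {
    double s = 0.0;
    for (int c = 0; c < COLS; ++c)
        for (int r = 0; r < ROWS; ++r)
            s += data[r][c];
    return s;
}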

But that advice might not apply to your problem domain and platform. Profile!

Will
++ Right, right, right. I would only say, in place of 'Profile!', use stackshots (http://stackoverflow.com/questions/406760/whats-your-most-controversial-programming-opinion/1562802#1562802), because almost always this kind of optimizing is "barking up the wrong tree".
Mike Dunlavey
Agree, programming is for people, compiling is for computers
ralu
There are several reasons to do that: matrix/array iterators, optimization (often only inner loops get vectorized), CUDA programming, and OpenMP programming (3.0 has a collapse clause for this; see the sketch at the end of this thread). Compilers are very good, but sometimes they need help from a human.
aaa
I think the OP should illustrate those cases then, because I've never seen my CUDA code go short on registers from keeping separate iterators for each dimension.
Will
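Regarding the OpenMP point above: OpenMP 3.0's collapse clause performs this fusion at the pragma level. A minimal, hedged sketch (compile with -fopenmp on GCC, or the pragma is simply ignored):

#include <stdio.h>

#define M 3
#define N 4

int main(void) {
    static double A[N][M];
    /* collapse(2) fuses the two loops into a single M*N iteration space,
       which the runtime then splits across threads. */
    #pragma omp parallel for collapse(2)
    for (int j = 0; j < N; ++j)
        for (int i = 0; i < M; ++i)
            A[j][i] = (double)(j * M + i);

    printf("A[N-1][M-1] = %g\n", A[N - 1][M - 1]);
    return 0;
}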