You need to understand the mathematical theory of permutation cycles, also known as "orbits" (it's worth knowing both terms of art, since the mathematical subject, at the heart of combinatorics, is quite advanced, and you may need to look up research papers which could use either or both terms). For a simpler introduction to the theory of permutations, the Wikipedia articles on those topics can help; each offers a reasonable bibliography if you get fascinated enough by combinatorics to want to explore it further and gain real understanding (I did, personally -- it's become somewhat of a hobby for me ;-).
Once you understand the mathematical theory, the code is still subtle and interesting to "reverse engineer". Clearly, `indices` is just the current permutation, in terms of indices into the pool, given that the items yielded are always given by
yield tuple(pool[i] for i in indices[:r])
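For instance, with a hypothetical snapshot of that state (the values below are purely illustrative, not taken from the trace further down):

pool = tuple('ABCD')       # built once from the iterable
indices = [2, 0, 3, 1]     # a made-up "current permutation" of positions into pool
r = 2
print(tuple(pool[i] for i in indices[:r]))   # ('C', 'A')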
So the heart of this fascinating machinery is `cycles`, which represents the permutation's orbits and causes `indices` to be updated, mostly by the statements
j = cycles[i]
indices[i], indices[-j] = indices[-j], indices[i]
I.e., if `cycles[i]` is `j`, this means that the next update to the indices is to swap the i-th one (from the left) with the j-th one from the right (e.g., if `j` is 1, then the last element of `indices` -- `indices[-1]` -- is the one being swapped).
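A tiny concrete illustration of that swap, with made-up values (just the effect of those two statements, in isolation from the rest of the machinery):

indices = [0, 1, 2, 3]
cycles = [4, 2, 2, 1]    # hypothetical state in which cycles[1] is 2
i = 1
j = cycles[i]            # j == 2
indices[i], indices[-j] = indices[-j], indices[i]
print(indices)           # [0, 2, 1, 3]: position 1 from the left swapped with position 2 from the right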
And then there's the less frequent "bulk update", when an item of `cycles` reaches 0 during its decrements:
indices[i:] = indices[i+1:] + indices[i:i+1]
cycles[i] = n - i
This puts the i-th item of `indices` at the very end, shifting all following items of `indices` one to the left, and indicates that the next time we come to this item of `cycles` we'll be swapping the new i-th item of `indices` (from the left) with the (n - i)-th one (from the right) -- which would be the i-th one again, except, of course, for the fact that there will be a `cycles[i] -= 1` before we next examine it ;-).
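Again in isolation, with made-up values, the "bulk update" looks like this:

indices = [0, 1, 2, 3]
cycles = [4, 0, 2, 1]    # hypothetical state: cycles[1] has just hit 0
n, i = 4, 1
indices[i:] = indices[i+1:] + indices[i:i+1]   # rotate the i-th item to the very end
cycles[i] = n - i                              # reset this counter
print(indices)           # [0, 2, 3, 1]
print(cycles)            # [4, 3, 2, 1]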
The hard part would of course be proving that this works -- i.e., that all permutations are exhaustively generated, with no overlap and a correctly "timed" exit. I think that, instead of a proof, it may be easier to look at how the machinery works when fully exposed in simple cases -- commenting out the `yield` statements and adding `print` ones (Python 2.*), we have:
def permutations(iterable, r=None):
    # permutations('ABCD', 2) --> AB AC AD BA BC BD CA CB CD DA DB DC
    # permutations(range(3)) --> 012 021 102 120 201 210
    pool = tuple(iterable)
    n = len(pool)
    r = n if r is None else r
    if r > n:
        return
    indices = range(n)
    cycles = range(n, n-r, -1)
    print 'I', 0, cycles, indices                # Initial state
    # yield tuple(pool[i] for i in indices[:r])
    print indices[:r]
    while n:
        for i in reversed(range(r)):
            cycles[i] -= 1
            if cycles[i] == 0:
                print 'B', i, cycles, indices    # Before a "big" (bulk) update
                indices[i:] = indices[i+1:] + indices[i:i+1]
                cycles[i] = n - i
                print 'A', i, cycles, indices    # After the bulk update
            else:
                print 'b', i, cycles, indices    # before a "small" update (one swap)
                j = cycles[i]
                indices[i], indices[-j] = indices[-j], indices[i]
                print 'a', i, cycles, indices    # after the swap
                # yield tuple(pool[i] for i in indices[:r])
                print indices[:r]
                break
        else:
            return

permutations('ABC', 2)
Running this shows:
I 0 [3, 2] [0, 1, 2]
[0, 1]
b 1 [3, 1] [0, 1, 2]
a 1 [3, 1] [0, 2, 1]
[0, 2]
B 1 [3, 0] [0, 2, 1]
A 1 [3, 2] [0, 1, 2]
b 0 [2, 2] [0, 1, 2]
a 0 [2, 2] [1, 0, 2]
[1, 0]
b 1 [2, 1] [1, 0, 2]
a 1 [2, 1] [1, 2, 0]
[1, 2]
B 1 [2, 0] [1, 2, 0]
A 1 [2, 2] [1, 0, 2]
b 0 [1, 2] [1, 0, 2]
a 0 [1, 2] [2, 0, 1]
[2, 0]
b 1 [1, 1] [2, 0, 1]
a 1 [1, 1] [2, 1, 0]
[2, 1]
B 1 [1, 0] [2, 1, 0]
A 1 [1, 2] [2, 0, 1]
B 0 [0, 2] [2, 0, 1]
A 0 [3, 2] [0, 1, 2]
Focus on the `cycles`: they start as 3, 2 -- then the last one is decremented, so 3, 1 -- the last isn't zero yet, so we have a "small" event (one swap in the indices) and break the inner loop. Then we enter it again; this time the decrement of the last gives 3, 0 -- the last is now zero, so it's a "big" event -- a "mass swap" in the indices (well, there's not much of a mass here, but there might be ;-) -- and the cycles are back to 3, 2. But we haven't broken out of the for loop, so we continue by decrementing the next-to-last (in this case, the first) -- which gives a minor event, one swap in the indices, and we break the inner loop again. Back to the loop, yet again the last one is decremented, this time giving 2, 1 -- minor event, etc. Eventually a whole pass of the for loop occurs with only major events, no minor ones -- that's when the cycles start as all ones, so the decrements take each of them to zero (major events), and no yield occurs on that last pass.
Since no `break` ever executed on that pass, we take the `else` branch of the `for`, which returns. Note that the `while n` may be a bit misleading: it actually acts as a `while True` -- `n` never changes, and the `while` loop only exits from that `return` statement; it could equally well be expressed as `if not n: return` followed by `while True:`, because of course when `n` is 0 (an empty "pool") there's nothing more to yield after the first, trivial empty yield. The author just decided to save a couple of lines by collapsing the `if not n:` check with the `while` ;-).
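To convince yourself the two spellings are equivalent, here's a sketch of that restructuring (my own rewrite, not the actual library source: the yields are restored, the prints removed, and list() calls added so it also runs on Python 3); comparing its output against itertools.permutations shows nothing changed:

def permutations2(iterable, r=None):
    # Same machinery as above, but with `while n:` spelled out as an
    # explicit emptiness check followed by `while True:`.
    pool = tuple(iterable)
    n = len(pool)
    r = n if r is None else r
    if r > n:
        return
    indices = list(range(n))
    cycles = list(range(n, n - r, -1))
    yield tuple(pool[i] for i in indices[:r])
    if not n:                 # empty pool: the single empty tuple above is all there is
        return
    while True:
        for i in reversed(range(r)):
            cycles[i] -= 1
            if cycles[i] == 0:
                indices[i:] = indices[i+1:] + indices[i:i+1]
                cycles[i] = n - i
            else:
                j = cycles[i]
                indices[i], indices[-j] = indices[-j], indices[i]
                yield tuple(pool[i] for i in indices[:r])
                break
        else:
            return

import itertools
print(list(permutations2('ABC', 2)) == list(itertools.permutations('ABC', 2)))   # True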
I suggest you continue by examining a few more concrete cases -- eventually you should perceive the "clockwork" operating. Focus on just `cycles` at first (maybe edit the `print` statements accordingly, removing `indices` from them), since their clockwork-like progress through their orbit is the key to this subtle and deep algorithm; once you grok that, the way `indices` gets properly updated in response to the sequencing of `cycles` is almost an anticlimax!-)
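For example, a minimal sketch along those lines (trace_cycles is my own name; it reproduces just the `cycles` bookkeeping and drops `indices` entirely) lets you watch the odometer-like progression for any n and r:

def trace_cycles(n, r):
    # Reproduce only the `cycles` bookkeeping of the generator above,
    # printing each state at the points where a yield would occur.
    cycles = list(range(n, n - r, -1))
    print(cycles)
    while True:
        for i in reversed(range(r)):
            cycles[i] -= 1
            if cycles[i] == 0:
                cycles[i] = n - i    # "big" event: reset this counter
            else:
                break                # "small" event: a swap (and a yield) would happen here
        else:
            return                   # a full pass of "big" events: the clockwork has come full circle
        print(cycles)

trace_cycles(3, 2)

For n=3, r=2 this prints [3, 2], [3, 1], [2, 2], [2, 1], [1, 2], [1, 1] -- the same six `cycles` states at which the instrumented run above printed an index prefix.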