Okay, so I've got a piece of Python code which really needs optimizing.
I need to iterate over every single pixel of a small (80x60) image and extract the RGB values from it. The code in the loop itself isn't too slow, but I'm doing it using nested for loops, which I assume add quite a bit of overhead...
xr = xrange(80)
yr = xrange(60)
get_at = surface.get_at
set_at = surface.set_at
for x in xr:
    for y in yr:
        pixelR = get_at((x,y))[0]
        pixelG = get_at((x,y))[1]
        pixelB = get_at((x,y))[2]
        # ... more complex stuff here which changes R,G,B values independently of each other
        set_at((x,y),(pixelR,pixelG,pixelB))
This really doesn't seem like the right approach at all... I'd rather swap those for loops out for the C-implemented map() function, but if I do that I can't figure out how to get at the x,y values, or at the local variables that sit outside the scope of the functions I'd have to define.
Would using map() be any faster than this current set of for loops? How could I use it and still get x,y?
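For what it's worth, the kind of thing I'm imagining is roughly this, using itertools.product to hand the (x, y) pairs to a per-pixel function (I'm not sure this would actually be faster, or whether pulling the locals in as default arguments is the right trick):

    from itertools import product

    def process_pixel(pos, get_at=surface.get_at, set_at=surface.set_at):
        # get_at(pos) returns (r, g, b, a); keep just the colour channels
        pixelR, pixelG, pixelB = get_at(pos)[:3]
        # ... the complex per-pixel work would go here ...
        set_at(pos, (pixelR, pixelG, pixelB))

    # product() generates every (x, y) pair, so map() can drive the whole grid
    map(process_pixel, product(xrange(80), xrange(60)))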
P.S. I'm using pygame surfaces, and I've tried the surfarray and pixelarray modules, but since I'm getting/changing every single pixel anyway, they ended up a lot slower than get_at() and set_at().
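For reference, the surfarray version I tried looked roughly like this (simplified from memory, so the details may differ) -- it still touches every pixel one at a time, which is presumably why it didn't help:

    import pygame.surfarray

    arr = pygame.surfarray.pixels3d(surface)   # (width, height, 3) view onto the surface pixels
    for x in xrange(80):
        for y in xrange(60):
            r, g, b = arr[x, y]
            # ... same per-pixel logic as before ...
            arr[x, y] = (r, g, b)
    del arr                                    # releases the lock surfarray holds on the surface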
Also, slightly irrelevant... do you think this could be made quicker if Python weren't traversing a list of numbers but just incrementing a number, like in other languages? Why doesn't Python include a normal for() alongside its foreach()-style loop?
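By "just incrementing a number" I mean the C-style counter loop, which in Python seems to end up as a while loop:

    x = 0
    while x < 80:
        # ... per-column work using x ...
        x += 1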
I suppose I should've posted the whole function, so here it is:
# xr, yr = xrange(80), xrange(60)
def live(surface, xr, yr):
    randint = random.randint
    set_at = surface.set_at
    get_at = surface.get_at
    perfect = perfectNeighbours #
    minN = minNeighbours        # All global variables that're defined in a config file.
    maxN = maxNeighbours        #
    pos = actual                # actual = (80,60)
    n = []
    append = n.append
    NEIGHBOURS = 0
    for y in yr:                # going height-first for aesthetic reasons.
        decay = randint(1, maxDecay)
        growth = randint(1, maxGrowth)
        for x in xr:
            r, g, b, a = get_at((x, y))
            del n[:]
            NEIGHBOURS = 0
            if x > 0 and y > 0 and x < pos[0]-1 and y < pos[1]-1:
                append(get_at((x-1, y-1))[1])
                append(get_at((x+1, y-1))[1])
                append(get_at((x, y-1))[1])
                append(get_at((x-1, y))[1])
                append(get_at((x+1, y))[1])
                append(get_at((x-1, y+1))[1])
                append(get_at((x+1, y+1))[1])
                append(get_at((x, y+1))[1])
                for a in n:
                    if a > 63:
                        NEIGHBOURS += 1
            if NEIGHBOURS == 0 and (r, g, b) == (0, 0, 0): pass
            else:
                if NEIGHBOURS < minN or NEIGHBOURS > maxN:
                    g = 0
                    b = 0
                elif NEIGHBOURS == perfect:
                    g += growth
                    if g > 255:
                        g = 255
                    b += growth
                    if b > growth: b = growth
                else:
                    if g > 10: r = g-10
                    if g > 200: b = g-100
                    if r > growth: g = r
                    g -= decay
                    if g < 0:
                        g = 0
                        b = 0
                    r -= 1
                    if r < 0:
                        r = 0
                set_at((x, y), (r, g, b))
The number of conditionals in there probably slows things down too, right? The slowest part is checking for neighbours (where it builds the list n)... I replaced that whole section with slice access on a 2D array, but it doesn't work properly.
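The kind of slice access I mean is roughly this (a numpy view of the green channel via surfarray -- the names here are illustrative rather than my exact code), counting over-threshold neighbours for an interior pixel:

    import pygame.surfarray

    greens = pygame.surfarray.pixels3d(surface)[:, :, 1]   # green channel, shape (80, 60)

    # inside the loop, for an interior pixel (x, y):
    block = greens[x-1:x+2, y-1:y+2]       # the 3x3 neighbourhood around (x, y)
    NEIGHBOURS = int((block > 63).sum())   # cells over the threshold...
    if greens[x, y] > 63:
        NEIGHBOURS -= 1                    # ...not counting the centre pixel itself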