views: 108
answers: 3

I was wondering if there is a way to run map on something. The way map works is it takes an iterable and applies a function to each item in that iterable producing a list. Is there a way to have map modify the iterable object itself?
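For context, a minimal sketch (the `square` helper is just an illustration) showing that `map` builds a new sequence and leaves the original untouched:

```python
def square(x):
    return x * x

nums = [1, 2, 3]
result = list(map(square, nums))  # list() needed on Python 3, where map is lazy

# the original list is untouched; map produced a separate result
print(nums)    # [1, 2, 3]
print(result)  # [1, 4, 9]
```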

A: 

You can use a lambda (or a `def`), or better a list comprehension (if that is sufficient):

[ do_things_on_iterable for item in iterable ]

Still, you may want to be more explicit with a for loop if things get too complex.

For example you can do something like this, but imho it's ugly:

[ mylist.__setitem__(i,thing) for i,thing in enumerate(mylist) ]
pygabriel
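A related trick not shown above is slice assignment, which replaces the list's contents in place rather than rebinding the name; a short sketch (the `mylist`/`alias` names are just for illustration):

```python
mylist = [1, 2, 3]
alias = mylist  # second reference to the same list object

# build the new values, then write them back into the existing list
mylist[:] = [x * 10 for x in mylist]

print(alias)  # [10, 20, 30] -- the alias sees the change, proving in-place mutation
```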
+2  A: 

Just write the obvious code to do it.

for i, item in enumerate(sequence):
    sequence[i] = f(item)
Paul Hankin
I already tried that... oddly, it's slower than the list comprehension. The reason I asked this question is that I figured map would be quicker than the list comprehension approach, and I also don't see the point in creating a new list, but so far the list comprehension is winning.
list comprehension == syntax sugar for map
carl
How do you explain the performance hit then? map was about twice as slow.
Stop caring about unimportant minor variations in speed. If you want to mutate the list, mutate the list, either with Paul's method above or cvondrick's function. Don't create a new list just because it happens to be microseconds faster.
jemfinch
@cvondrick List comprehensions are syntactical sugar for a for loop, not for map.
jemfinch
@johannix The performance difference is because calling functions (especially lambdas) is slow(ish) in Python. Not that you should worry about that.
jemfinch
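jemfinch's point about function-call overhead can be checked with `timeit`; a rough sketch (the list size and repeat count are arbitrary, and exact numbers vary by machine):

```python
import timeit

data = list(range(1000))

# the comprehension inlines the expression, while map pays a
# Python-level function call (the lambda) for every element
t_comp = timeit.timeit(lambda: [x * 2 for x in data], number=200)
t_map = timeit.timeit(lambda: list(map(lambda x: x * 2, data)), number=200)

print(t_comp, t_map)
```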
@jemfinch If it were microseconds I clearly wouldn't care. I'm working on a big list, and the differences we're talking about are seconds, which clearly matters...
@johannix: You need to post full code then. The bottleneck is not in the map. If you must, parallelize it with multiprocessing.Pool.map.
carl
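A sketch of the `multiprocessing.Pool.map` suggestion; note that it still returns a new list rather than mutating in place, and the worker function must be a module-level `def` so it can be pickled (the `square` helper and pool size here are assumptions):

```python
from multiprocessing import Pool

def square(x):  # must be module-level so worker processes can import it
    return x * x

if __name__ == "__main__":
    with Pool(4) as pool:
        result = pool.map(square, range(10))
    print(result)  # the squares of 0..9, computed across worker processes
```

The `if __name__ == "__main__"` guard matters: on platforms that spawn rather than fork, each worker re-imports the module, and unguarded pool creation would recurse.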
@johannix All the more reason to use this solution or cvondrick's, since the listcomp solution allocates way more memory and thus significantly limits the size of the lists you can process.
jemfinch
@cvondrick: just checked out the multiprocessing stuff. Didn't know about it. Surprisingly, in my case parallelizing doesn't seem to help much. I did some quick tests, and parallelizing only paid off when the processing done on each item was very time-intensive.
+2  A: 

It's simple enough to write:

def inmap(f, x):
    for i, v in enumerate(x):
        x[i] = f(v)

a = range(10)
inmap(lambda x: x**2, a)
print a
carl
That is logically correct, but I wanted to use map for performance gains...
I don't think you can squeeze much more performance out of this. If you want to parallelize it, you wouldn't be able to do it in place with Python due to the GIL.
carl