
Hello,

I would like to filter this list,

l = [0,1,1,2,2]

so that only the items that occur exactly once are left:

[0]

I'm struggling to do it in a 'pythonic' way :o) Is it possible without nested loops?

+11  A: 

You'll need two loops (or equivalently a loop and a listcomp, like below), but not nested ones:

import collections
d = collections.defaultdict(int)
for x in L: d[x] += 1
L[:] = [x for x in L if d[x] == 1]
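
For the OP's list, a quick interpreter sanity check of the same code:

>>> L = [0, 1, 1, 2, 2]
>>> import collections
>>> d = collections.defaultdict(int)
>>> for x in L: d[x] += 1
...
>>> [x for x in L if d[x] == 1]
[0]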

This solution assumes that the list items are hashable, that is, that they're usable as indices into dicts, members of sets, etc.

The OP indicates they care about object IDENTITY rather than VALUE (so, for example, two sublists both equal to [1,2,3], which are equal but may not be identical, would not be considered duplicates of each other). If that's indeed the case, this code is usable: just replace d[x] with d[id(x)] in both occurrences and it will work for ANY types of objects in list L.
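
A minimal sketch of that identity-based variant (the function name is just for illustration):

import collections

def unique_by_identity(L):
    counts = collections.defaultdict(int)
    for x in L:
        counts[id(x)] += 1          # count by object identity, not by value
    return [x for x in L if counts[id(x)] == 1]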

Mutable objects (lists, dicts, sets, ...) are typically not hashable and therefore cannot be used in such ways. User-defined objects are by default hashable (with hash(x) == id(x)) unless their class defines comparison special methods (__eq__, __cmp__, ...) in which case they're hashable if and only if their class also defines a __hash__ method.
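
For instance, a tuple works as a dict key while a list raises a TypeError:

>>> d = {}
>>> d[(1, 2)] = "fine"       # tuples are immutable and hashable
>>> d[[1, 2]] = "boom"       # lists are mutable and unhashable
Traceback (most recent call last):
  ...
TypeError: unhashable type: 'list'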

If list L's items are not hashable, but are comparable for inequality (and therefore sortable), and you don't care about their order within the list, you can perform the task in time O(N log N) by first sorting the list and then applying itertools.groupby (almost but not quite in the way another answer suggested).
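
A sketch of that sort-then-groupby approach (function name for illustration only; the original order is not preserved, as noted):

import itertools

def unique_sortable(L):
    # after sorting, equal items are adjacent, so each group is one run of duplicates
    return [key for key, group in itertools.groupby(sorted(L))
            if len(list(group)) == 1]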

Other approaches, of gradually decreasing performance and increasing generality, can deal with unhashable sortables when you DO care about the list's original order (make a sorted copy and, in a second loop, check for repetitions in it with the help of bisect -- also O(N log N), but a tad slower), and with objects whose only applicable property is that they're comparable for equality (there is no way to avoid the dreaded O(N**2) performance in that maximally general case).
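
One way to sketch that bisect variant, keeping the original order (again, the name is illustrative):

import bisect

def unique_keep_order(L):
    s = sorted(L)                            # sorted copy, used only for counting
    return [x for x in L
            if bisect.bisect_right(s, x) - bisect.bisect_left(s, x) == 1]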

If the OP can clarify which case applies to his specific problem I'll be glad to help (in particular, if the objects in his list ARE hashable, the code I've already given above should suffice;-).

Alex Martelli
No, I don't think I need to hash; it's just the duplicated objects that I wanted to remove. (I'm still thinking in C, but what I wanted to say above was that the pointers to the objects will be the same, so there is no need to hash -- is that valid in Python land?)
boyfarrell
Why did you write "L[:] = list(set(L))" instead of the more obvious (to me) "L = list(set(L))"? They seem to do the same thing when I try them in the interpreter. Is there some nuance I'm missing? Thanks!
samtregar
The second solution doesn't seem to do the right thing: it removes duplicates, but the problem was to remove all items that are duplicated.
Ned Batchelder
@samtregar, rebinding just the name sometimes works just as well as replacing the contents, and sometimes it doesn't (because there are other outstanding references to the original list object beyond its original name -- e.g. that's the case for function arguments), so why risk it?
Alex Martelli
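
A small illustration of the difference, using the snippet samtregar quotes (the function names are just for the example):

def dedupe_rebind(L):
    L = list(set(L))       # rebinds only the local name; the caller's list is untouched

def dedupe_in_place(L):
    L[:] = list(set(L))    # replaces the contents of the caller's list object

data = [0, 1, 1, 2, 2]
dedupe_rebind(data)
print data                 # still [0, 1, 1, 2, 2]
dedupe_in_place(data)
print sorted(data)         # [0, 1, 2] -- every outside reference sees the change
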
@Ned you're right, let me edit to fix, thanks.
Alex Martelli
Love collections.defaultdict. I need to write a bot that will answer all python questions with defaultdict.
hughdbrown
+8  A: 
[x for x in the_list if the_list.count(x)==1]

Though that's still a nested loop behind the scenes.

sepp2k
Yep, O(N**2) (while the non-nested approach I show is O(N)).
Alex Martelli
I think I prefer Alex's solution as it only iterates through the list twice, your solution is n^2.
Douglas Leeder
WOW. I really need to go away and read about list comprehensions! I'm still trying to figure out exactly what is happening above. But thanks very much sepp and Alex.
boyfarrell
@boyfarrell: you can read that as "Go through all x in the_list and select those where `the_list.count(x)==1`, i.e. those that appear only once"
sepp2k
Yep, and my approach boils down to exactly the same, except that I do a single pass beforehand to compute how many times each object appears (so the overall approach is O(N)) instead of a counting pass _per item_ (which makes this approach overall O(N**2)).
Alex Martelli
Alex's solution may be faster, but this is more elegant I think. ;-)
Markus
I'm not going to vote sepp2k's solution down, but it is not the best solution. Alex's use of defaultdict and a list comprehension to filter is exactly right. It *is* elegant.
hughdbrown
+3  A: 
>>> l = [0,1,1,2,2]
>>> [x for x in l if l.count(x) is 1]
[0]
iElectric
Is there any advantage to using `is` over `==`? I know 1 is a small enough number for this to work, but is `is` actually faster when comparing integers?
Markus
You shouldn't use `is` with numbers; it only works because CPython caches some frequently used constants (such as small ints in the -5..256 range) and reuses them as if they were singletons ... but that is an implementation detail, not part of the Python *language*.
THC4k
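
A quick CPython 2.x session illustrating the point (int("...") just forces a fresh object to be built at runtime):

>>> a = 1; b = int("1")          # small ints are cached by CPython
>>> a == b, a is b
(True, True)
>>> c = 1000; d = int("1000")    # larger ints are distinct objects
>>> c == d, c is d
(True, False)
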
A: 

Something like

import itertools
res = [elem[0] for elem in itertools.groupby(l) if elem[1].next() == 0]

should work. I do not know about its complexity, though.

Roberto Liffredo
Alas, the general idea only works if L is sorted (and not quite in this form anyway), in the loose sense that all repetitions of each duplicated element are adjacent to each other. groupby does not reorder the items...!
Alex Martelli
Yeah, it's the duplicated pairs (or possibly triplets, quadruplets...) of 'pointers' that I need to remove. In practice the list will actually contain references to Python objects, so I guess I am comparing the 'pointers' to decide what needs removing from the array. What do you call 'references to objects' in Python? For example: l1 = [1,2,3]; l2 = l1. What are l1 and l2? In C (or Objective-C) one could imagine they are pointers to the same object.
boyfarrell
"references to objects" is fine (in Python just like in Java "pointers" are not explicit though they're still used behind the scenes as "references"). But the referred-to objects still belong to types that define specific characteristics, so for example they may or may not be usable as indices into dicts.
Alex Martelli
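
To make the 'references' point concrete, a quick sketch:

>>> l1 = [1, 2, 3]
>>> l2 = l1                 # just another name bound to the same list object
>>> l2 is l1
True
>>> l2.append(4)
>>> l1                      # the change is visible through either name
[1, 2, 3, 4]
>>> l3 = list(l1)           # an actual copy: equal, but a distinct object
>>> l3 == l1, l3 is l1
(True, False)
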
+3  A: 
l = [0,1,2,1,2]
def justonce(l):
    # 'once' holds items seen exactly once so far, 'more' items seen more than once
    once = set()
    more = set()
    for x in l:
        if x not in more:
            if x in once:
                # second sighting: promote from 'once' to 'more'
                more.add(x)
                once.remove(x)
            else:
                once.add(x)
    return once

print justonce(l)
THC4k
Converting back to a list would be nice.
nilamo
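
One possible variant (just a sketch): keep the same two-set bookkeeping, then rebuild a list in a second pass so the original order is preserved:

def justonce_list(l):
    once, more = set(), set()
    for x in l:
        if x in once:
            once.remove(x)          # second sighting: no longer unique
            more.add(x)
        elif x not in more:
            once.add(x)             # first sighting
    return [x for x in l if x in once]

print justonce_list([0, 1, 2, 1, 2])   # prints [0]
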
+3  A: 

Here's another dictionary oriented way:

l = [0, 1, 1, 2, 2]
d = {}
for i in l: d[i] = d.has_key(i)   # d[i] ends up True iff i occurs more than once

[k for k in d.keys() if not d[k]]
mhawke
+1 Really nice, it collects no more information than you need to solve the task. You can get rid of the `.keys()` though.
THC4k
Yes, .keys() is entirely optional, but slightly more readable IMO.
mhawke
nice thinking 'cause it doesn't have unnecessary info like Alex's does, though it's essentially the same concept, and it won't work on items which aren't hashable.
Terence Honles
+1  A: 

In the same spirit as Alex's solution, you can use a Counter/multiset (built into the collections module in 2.7, where it's `from collections import Counter`; a compatible recipe works on 2.5 and above) to do the same thing:

In [1]: from counter import Counter

In [2]: L = [0, 1, 1, 2, 2]

In [3]: multiset = Counter(L)

In [4]: [x for x in L if multiset[x] == 1]
Out[4]: [0]
Ryan
+1  A: 

I think the actual timings are kind of interesting:

Alex's answer:

python -m timeit -s "l = range(1,1000,2) + range(1,1000,3); import collections" "d = collections.defaultdict(int)" "for x in l: d[x] += 1" "l[:] = [x for x in l if d[x] == 1]"
1000 loops, best of 3: 370 usec per loop

Mine:

python -m timeit -s "l = range(1,1000,2) + range(1,1000,3)" "once = set()" "more = set()" "for x in l:" " if x not in more:" "  if x in once:" "   more.add(x)" "   once.remove( x )" "  else:" "   once.add( x )"
1000 loops, best of 3: 275 usec per loop

sepp2k's O(N**2) version, to demonstrate why complexity matters ;-)

python -m timeit -s "l = range(1,1000,2) + range(1,1000,3)" "[x for x in l if l.count(x)==1]"
100 loops, best of 3: 16 msec per loop

Roberto's + sorted:

python -m timeit -s "l = range(1,1000,2) + range(1,1000,3); import itertools" "[elem[0] for elem in itertools.groupby(sorted(l)) if elem[1].next()== 0]"
1000 loops, best of 3: 316 usec per loop

mhawke's:

python -m timeit -s "l = range(1,1000,2) + range(1,1000,3)" "d = {}" "for i in l: d[i] = d.has_key(i)" "[k for k in d.keys() if not d[k]]"
1000 loops, best of 3: 251 usec per loop

I like the last, clever and fast ;-)

THC4k
A: 
>>> l = [0,1,1,2,2]
>>> [x for x in l if l.count(x) == 1]
[0]
Juanjo Conti