The most popular answers are O(N) (good!-) but, as @Paul and @Mark point out, they require the list's items to be hashable. The approaches @Paul and @Mark propose for unhashable items are general but take O(N squared) -- i.e., a lot.
If your list's items are not hashable but are comparable, you can do better... here's an approach that always works as fast as feasible given the nature of the list's items:
import itertools
def allunique(L):
    # first try sets -- fastest, if all items are hashable
    try:
        return len(L) == len(set(L))
    except TypeError:
        pass
    # next, try sort -- second fastest, if items are comparable
    try:
        L1 = sorted(L)
    except TypeError:
        pass
    else:
        # in a sorted list duplicates end up adjacent, so every
        # group that groupby finds must have exactly one member
        return all(len(list(g)) == 1 for k, g in itertools.groupby(L1))
    # fall back to the slowest but most general approach
    return all(v not in L[i+1:] for i, v in enumerate(L))
This is O(N) where feasible (all items hashable), O(N log N) as the most frequent fallback (some items unhashable, but all comparable), O(N squared) where inevitable (some items unhashable, e.g. dicts, and some non-comparable, e.g. complex numbers).
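For example, each of these calls exercises a different branch (an illustrative session, not part of the original recipe; the expected results are in the comments):

print(allunique(['a', 'b', 'c']))           # all hashable: set path, O(N) -> True
print(allunique([[1], [2], [1]]))           # lists are unhashable but comparable: sort path, O(N log N) -> False
print(allunique([{'x': 1}, 1j, {'x': 1}]))  # dicts aren't hashable, and dicts and complex numbers aren't orderable: fallback, O(N squared) -> False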
Inspiration for this code comes from an old recipe by the great Tim Peters, which differed by actually producing a list of unique items (and also dates from so long ago that set was not around -- it had to use a dict...!-), but basically faced identical issues.
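In case you're curious what that dict trick looks like, here's a minimal sketch of the idea (not Tim's actual recipe, and it covers only the all-hashable case):

def unique(L):
    # a dict's keys are necessarily unique, so a dict can stand in
    # for a set -- works only when all of L's items are hashable
    seen = {}
    for item in L:
        seen[item] = 1
    return list(seen)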