There is a list L containing elements of arbitrary type. How can all duplicate elements in such a list be deleted efficiently? ORDER must be preserved

Just an algorithm is required, so importing any external library is not allowed.

+22  A: 

Assuming order matters:

  • Create an empty set S and an empty list M.
  • Scan the list L one element at a time.
  • If the element is in the set S, skip it.
  • Otherwise, add it to M and to S.
  • Repeat for all elements in L.
  • Return M.

In Python:

>>> L = [2, 1, 4, 3, 5, 1, 2, 1, 1, 6, 5]
>>> S = set()
>>> M = []
>>> for e in L:
...     if e in S:
...         continue
...     S.add(e)
...     M.append(e)
... 
>>> M
[2, 1, 4, 3, 5, 6]

If order does not matter:

M = list(set(L))
FogleBird
In your first solution, the set S is not necessary. You should be able to append elements from L to M if they are not already in M. That does the same thing without requiring another data structure.
inspectorG4dget
The set S is necessary to make this algorithm O(n*log(n)), and not O(n^2). Searching for an element in a list is O(n), but it is O(1) in a Set.
David Crawshaw
this requires that the elements are all hashable, which is not true of all types in Python, so does not satisfy the requirement
newacct
what if some elements are not hashable?
psihodelia
If the elements are not hashable then you can implement your set using a search tree (as in the STL) and the algorithm will be O(n log n).
Mike Ottum
Technically it's "near O(1)", which isn't quite the same thing. See my answer.
cletus
For the tree solution to work the elements must be mutually comparable. Only the "naive" n^2 algorithm requires only equality testing, which is the minimum assumption for any problem about uniqueness. (By the way, does the phrasing of the question suggest a homework problem?)
Randall Schulz
In-place removal is faster http://stackoverflow.com/questions/89178/in-python-what-is-the-fastest-algorithm-for-removing-duplicates-from-a-list-so-t/282589#282589
J.F. Sebastian
@David Crawshaw: searching a set is not O(1). Unless of course you design your own set such that all elements are known ahead of time; in this case you can use a perfect hash-function. In C++, by the way, searching a set is guaranteed to be O(log n).
wilhelmtell
+12  A: 

Special Case: Hashing and Equality

Firstly, we need to determine something about the assumptions, namely the existence of an equals and hash function relationship. What do I mean by this? I mean that for the set of source objects S, given any two objects x1 and x2 that are elements of S, there exists a (hash) function F such that:

if (x1.equals(x2)) then F(x1) == F(x2)

Java has such a relationship. That allows you to check for duplicates as a near-O(1) operation and thus reduces the algorithm to a simple O(n) problem. If order is unimportant, it's a simple one-liner:

List result = new ArrayList(new HashSet(inputList));

If order is important:

List outputList = new ArrayList();
Set set = new HashSet();
for (Object item : inputList) {
  if (!set.contains(item)) {
    outputList.add(item);
    set.add(item);
  }
}

You will note that I said "near O(1)". That's because such data structures (as a Java HashMap or HashSet) rely on a method where a portion of the hash code is used to find an element (often called a bucket) in the backing storage. The number of buckets is a power-of-2. That way the index into that list is easy to calculate. hashCode() returns an int. If you have 16 buckets you can find which one to use by ANDing the hashCode with 15, giving you a number from 0 to 15.
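In Python terms, a minimal sketch of that index calculation (`num_buckets` is a made-up name for illustration):

num_buckets = 16                               # always a power of two
index = hash("some key") & (num_buckets - 1)   # same result as % num_buckets
assert 0 <= index < num_buckets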

When you try and put something in that bucket it may already be occupied. If so, a linear comparison of all entries in that bucket will occur. If the collision rate gets too high, or you try to put too many elements in, the structure will be grown, typically doubled (but always remaining a power-of-2), and all the items will be placed in their new buckets (based on the new mask). Thus resizing such structures is relatively expensive.
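A sketch of that grow-and-rehash step in Python; the list-of-lists bucket representation here is an assumption for illustration, not how any particular implementation stores its table:

def resize(buckets):
    # Double the bucket count (so it stays a power of two) and re-place
    # every item using the new, wider mask.
    new_buckets = [[] for _ in range(2 * len(buckets))]
    mask = len(new_buckets) - 1
    for bucket in buckets:
        for item in bucket:
            new_buckets[hash(item) & mask].append(item)
    return new_buckets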

Lookup may also be expensive. Consider this class:

public class A {
  private final int a;

  A(int a) { this.a = a; }

  public boolean equals(Object ob) {
    if (ob == null || ob.getClass() != getClass()) return false;
    A other = (A)ob;
    return other.a == a;
  }

  public int hashCode() { return 7; }
}

This code is perfectly legal and it fulfills the equals-hashCode contract.

Assuming your set contains nothing but A instances, your insertion/search now turns into an O(n) operation, turning the entire insertion into O(n^2).

Obviously this is an extreme example but it's useful to point out that such mechanisms also rely on a relatively good distribution of hashes within the value space the map or set uses.
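A hypothetical Python analogue of the class above makes this easy to try out: the constant __hash__ is legal (equal objects still hash equal) but funnels every instance into the same bucket, so each insert degrades to a linear scan:

class A(object):
    def __init__(self, a):
        self.a = a

    def __eq__(self, other):
        return isinstance(other, A) and other.a == self.a

    def __hash__(self):
        return 7  # legal, but every instance collides

s = set()
for i in range(1000):
    s.add(A(i))  # each add compares against the whole chain: O(n^2) overall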

Finally, it must be said that this is a special case. If you're using a language without this kind of "hashing shortcut" then it's a different story.

General Case: No Ordering

If no ordering function exists for the list then you're stuck with an O(n^2) brute-force comparison of every object to every other object. So in Java:

List result = new ArrayList();
for (Object item : inputList) {
  boolean duplicate = false;
  for (Object ob : result) {
    if (ob.equals(item)) {
      duplicate = true;
      break;
    }
  }
  if (!duplicate) {
    result.add(item);
  }
}

General Case: Ordering

If an ordering function exists (as it does with, say, a list of integers or strings) then you sort the list (which is O(n log n)) and then compare each element in the list to the next (O(n)) so the total algorithm is O(n log n). In Java:

Collections.sort(inputList);
List result = new ArrayList();
Object prev = null;
for (Object item : inputList) {
  if (!item.equals(prev)) {
    result.add(item);
  }
  prev = item;
}

Note: the above examples assume no nulls are in the list.

cletus
The method given by FogleBird is O(n), since `e in S`, `S.add` and `M.append` are all O(1)
gnibbler
Two downvotes? Would love to know why...
cletus
And FYI, I mention that O(1) case (for Java) but, like in Python, it's based on the assumption of there existing an equals-hashcode relationship, which is fine, but it's not the general case.
cletus
I was about to downvote based on your first sentence "if no ordering you're stuck with O(n^2)" b/c you can solve it with a hashtable. Then I saw your last section about the ArrayList of a HashSet and, well, there ya go. Maybe downvoters didn't read your whole response...?
Moishe
Your solution for *'General Case: Ordering'* doesn't preserve original order (OP requirement). btw, `prev = item` can be lifted to the `if` suite.
J.F. Sebastian
+7  A: 

If the order does not matter, you might want to try this algorithm written in Python:

>>> array = [1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6]
>>> unique = set(array)
>>> list(unique)
[1, 2, 3, 4, 5, 6]
Noctis Skytower
order does matter
psihodelia
+2  A: 

In Java, it's a one-liner.

Set set = new LinkedHashSet(list);

will give you a collection with duplicate items removed.

Kundan Singh
not what was asked for though... you do not wind up with the same List object minus the duplicates.
TofuBeer
@TofuBeer: It has the hint though.
Adeel Ansari
Not really... since it also loses the original order of the list...
TofuBeer
In case anyone else is confused the same way I was: TofuBeer made that comment before Peter edited the answer to use LinkedHashSet instead of the original HashSet.
Steve Jessop
and it still isn't "right" given that the list still contains the duplicates... :-P
TofuBeer
Sure, but since the questioner also asks for Haskell, in which mutable data is incredibly poor form, I'm not sure how seriously that "requirement" should be taken. You can take "delete some members" to mean "mutate the original", or you can take it to mean "create a new container excluding some elements". Even in the latter case, though, you should end up with a List, and this code doesn't do that. So it fails if this is a school assignment, but passes if the meat of the question is, "how do I uniqueify sequential data in Java without destroying the order?"
Steve Jessop
+2  A: 

For Java you could go with this:

private static <T> void removeDuplicates(final List<T> list)
{
    final LinkedHashSet<T> set;

    set = new LinkedHashSet<T>(list); 
    list.clear(); 
    list.addAll(set);
}
TofuBeer
+6  A: 

In Haskell this would be covered by the `nub` and `nubBy` functions:

nub :: Eq a => [a] -> [a]
nub [] = []
nub (x:xs) = x : nub (filter (/= x) xs)

nubBy :: (a -> a -> Bool) -> [a] -> [a]
nubBy f [] = []
nubBy f (x:xs) = x : nubBy f (filter (not . f x) xs)

nubBy relaxes the dependence on the Eq typeclass, instead allowing you to define your own equality function to filter duplicates.

These functions work over a list of consistent arbitrary types (e.g. [1,2,"three"] is not allowed in Haskell), and they are both order preserving.

To make this more efficient, Data.Map (or an implementation of a balanced tree) could be used to gather the data into a set (the key being the element, and the value being the index into the original list, so that the original ordering can be recovered), then gathering the results back into a list and sorting by index. I will try to implement this later.


import qualified Data.Map as Map

undup x = go x Map.empty
    where
        go [] _ = []
        go (x:xs) m = case Map.lookup x m of
                          Just _  -> go xs m
                          Nothing -> x : go xs (Map.insert x True m)

This is a direct translation of @FogleBird's solution. Unfortunately it doesn't work without the import.


A very basic attempt at replacing the Data.Map import would be to implement a tree, something like this:

data Tree a = Empty
            | Node a (Tree a) (Tree a)
            deriving (Eq, Show, Read)

insert x Empty = Node x Empty Empty
insert x (Node a left right)
    | x < a = Node a (insert x left) right
    | otherwise = Node a left (insert x right)

lookup x Empty = Nothing --returning maybe type to maintain compatibility with Data.Map
lookup x (Node a left right)
    | x == a = Just x
    | x < a = lookup x left
    | otherwise = lookup x right

An improvement would be to make it auto-balancing on insert by maintaining a depth attribute (this keeps the tree from degrading into a linked list). The nice thing about this over a hash table is that it only requires your type to be in the typeclass Ord, which is easily derivable for most types.
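For comparison, the same seen-tree idea sketched in Python (the function name is made up; the tree is unbalanced like the Haskell one above, so the worst case is O(n^2)). It needs only ordering comparisons, not hashing, and it preserves the input order:

def dedup_with_tree(lst):
    # Each node is [value, left, right]; None is the empty tree.
    def insert_if_absent(node, x):
        # Returns (possibly updated node, True if x was not already present).
        if node is None:
            return [x, None, None], True
        if x == node[0]:
            return node, False
        if x < node[0]:
            node[1], absent = insert_if_absent(node[1], x)
        else:
            node[2], absent = insert_if_absent(node[2], x)
        return node, absent

    seen = None
    out = []
    for item in lst:
        seen, absent = insert_if_absent(seen, item)
        if absent:
            out.append(item)
    return out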


I take requests, it seems. In response to @Jonno_FTW's inquiry, here is a solution which completely removes duplicates from the result. It's not entirely dissimilar to the original, simply adding an extra case. However, the runtime performance will be much slower, since you are going through each sub-list twice: once for the `elem`, and a second time for the recursion. Also note that now it will not work on infinite lists.

nub [] = []
nub (x:xs) | elem x xs = nub (filter (/=x) xs)
           | otherwise = x : nub xs

Interestingly enough you don't need to filter on the second recursive case because elem has already detected that there are no duplicates.

barkmadley
+1 comprehensive and haskell
Jonno_FTW
cheers Jonno_FTW
barkmadley
On a side note, how can you modify `nub` to remove both elements if they are repeated, i.e. `[1,2,2,3] -> [1,3]`?
Jonno_FTW
Thanks for the help
Jonno_FTW
no problem, anytime
barkmadley
+2  A: 

In Python:

>>> L = [2, 1, 4, 3, 5, 1, 2, 1, 1, 6, 5]
>>> a=[]
>>> for i in L:
...   if not i in a:
...     a.append(i)
...
>>> print a
[2, 1, 4, 3, 5, 6]
>>>
it's a copy-paste of @FogleBird's, isn't it?
psihodelia
Only the data L. Can't you see? I am not using sets, just normal list appending.
A: 

One-line solution in Python, using a list comprehension:

>>> L = [2, 1, 4, 3, 5, 1, 2, 1, 1, 6, 5]
>>> M = []
>>> zip(*[(e,M.append(e)) for e in L if not e in M])[0]
(2, 1, 4, 3, 5, 6)
psihodelia
It's better if you put it in your original post to say that you found the solution, since the question was asked by you in the first place.
`[(M.append(e) or e) for e in L if e not in M]` is less ugly and has the same efficiency (`O(n**2)`) as the `zip` variant. It is applicable when you can't use `set` or `sort`, i.e., almost never.
J.F. Sebastian
Actually `M` contains the result, therefore if you must do it in one line: `collections.deque((M.append(e) for e in L if e not in M), maxlen=0)`. Here I've used an itertools recipe: `consume = lambda it: deque(it, maxlen=0)`. It performs iterations until the iterator is exhausted. The final result is in the `M` list. It uses half the memory but time efficiency is the same, `O(n**2)`.
J.F. Sebastian
A: 
  • go through the list and assign a sequential index to each item
  • sort the list based on some comparison function for the elements
  • remove duplicates
  • sort the list based on the assigned indices

For simplicity, indices for items may be stored in something like std::map.

Looks like O(n log n) if I haven't missed anything.
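A sketch of these steps in Python (assuming the elements are mutually comparable; the function name is made up):

def dedup_by_double_sort(lst):
    # Steps 1-2: tag items with their index, then sort by value.
    # sorted() is stable, so equal values keep their original relative order.
    indexed = sorted(enumerate(lst), key=lambda pair: pair[1])
    # Step 3: keep only the first of each run of equal values.
    kept = []
    prev = object()  # sentinel that compares unequal to everything
    for index, value in indexed:
        if value != prev:
            kept.append((index, value))
            prev = value
    # Step 4: restore the original order by sorting on the saved indices.
    kept.sort()
    return [value for index, value in kept]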

maxim1000
A: 

Maybe you should look into using associative arrays (aka dict in Python) to avoid having duplicate elements in the first place.
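For hashable items, and assuming a Python version where dicts preserve insertion order (CPython 3.7+), that idea collapses to a one-liner:

>>> L = [2, 1, 4, 3, 5, 1, 2, 1, 1, 6, 5]
>>> list(dict.fromkeys(L))  # keys keep first-seen order, duplicates collapse
[2, 1, 4, 3, 5, 6]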

prime_number
A: 

It depends on what you mean by "efficiently". The naive algorithm is O(n^2), and I assume what you actually mean is that you want something of lower order than that.

As maxim1000 says, you can preserve the order by pairing the list with a series of numbers, using any algorithm you like to remove duplicates, and then re-sorting the remainder back into their original order. In Haskell it would look like this:

superNub :: (Ord a) => [a] -> [a]
superNub xs = map snd 
              . sortBy (comparing fst) 
              . map head . groupBy ((==) `on` snd) 
              . sortBy (comparing snd) 
              . zip [1..] $ xs

Of course you need to import Data.List (sortBy, groupBy), Data.Function (on) and Data.Ord (comparing). I could just recite the definitions of those functions, but what would be the point?

Paul Johnson
A: 

Delete duplicates in a list in place in Python

Case: Items in the list are not hashable or comparable

That is, we can't use a set (dict) or sort.

from itertools import islice

def del_dups2(lst):
    """O(n**2) algorithm, O(1) in memory"""
    pos = 0
    for item in lst:
        if all(item != e for e in islice(lst, pos)):
            # we haven't seen `item` yet
            lst[pos] = item
            pos += 1
    del lst[pos:]
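For example, with unhashable elements (made-up data; a set-based approach would raise TypeError here):

>>> L = [[1], [2], [1], [3], [2]]
>>> del_dups2(L)
>>> L
[[1], [2], [3]]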

Case: Items are hashable

Solution is taken from here:

def del_dups(seq):
    """O(n) algorithm, O(log(n)) in memory (in theory)."""
    seen = {}
    pos = 0
    for item in seq:
        if item not in seen:
            seen[item] = True
            seq[pos] = item
            pos += 1
    del seq[pos:]

Case: Items are comparable, but not hashable

That is, we can use sort. This solution doesn't preserve the original order.

def del_dups3(lst):
    """O(n*log(n)) algorithm, O(1) memory"""
    lst.sort()
    it = iter(lst)
    for prev in it: # get the first element 
        break
    pos = 1 # start from the second element
    for item in it: 
        if item != prev: # we haven't seen `item` yet
            lst[pos] = prev = item
            pos += 1
    del lst[pos:]
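For example (note the result is sorted, not in the original order):

>>> L = [2, 1, 4, 3, 5, 1, 2, 1, 1, 6, 5]
>>> del_dups3(L)
>>> L
[1, 2, 3, 4, 5, 6]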
J.F. Sebastian