views: 4958
answers: 9

I have an array of objects that needs the duplicates removed/filtered. I was going to just override equals & hashCode on the object elements, and then stick them in a Set... but I figured I should at least poll stackoverflow to see if there was another way, perhaps some clever method of some other API?

+1  A: 

That is indeed the best answer (put them in a Set).

james
+12  A: 

I would agree with your approach to override hashCode() and equals() and use something that implements Set.

Doing so also makes it absolutely clear to any other developers that the non-duplicate characteristic is required.

Another reason - you get to choose an implementation that meets your needs best now (HashSet, TreeSet, or LinkedHashSet, depending on whether you need sorting or insertion order), and you don't have to change your code to change the implementation in the future.
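
For illustration, here is a minimal sketch of that approach (the Person class and its single field are hypothetical, not from the original question):

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    // Element type with equals()/hashCode() overridden so a Set can
    // recognize duplicates by value rather than by reference.
    class Person {
        private final String name;

        Person(String name) {
            this.name = name;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Person)) return false;
            return name.equals(((Person) o).name);
        }

        @Override
        public int hashCode() {
            return name.hashCode();
        }
    }

    class DedupeExample {
        public static void main(String[] args) {
            Person[] people = { new Person("Ann"), new Person("Bob"), new Person("Ann") };
            // Dropping the array contents into a Set removes the duplicates.
            Set<Person> unique = new HashSet<Person>(Arrays.asList(people));
            System.out.println(unique.size()); // prints 2
        }
    }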

Brabster
A: 

A Set is definitely your best bet. The only way to remove things from an array (without creating a new one) is to null them out, and then you end up with a lot of null-checks later.

Michael Myers
+3  A: 

Overriding equals and hashCode and creating a set was my first thought too. It's good practice to have an overridden version of these methods somewhere in your inheritance hierarchy anyway.

I think that if you use a LinkedHashSet you'll even preserve order of unique elements...
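
For instance, a small sketch (assuming the usual java.util imports, with made-up values):

    // LinkedHashSet drops duplicates but keeps the first-insertion order.
    Set<String> unique = new LinkedHashSet<String>(Arrays.asList("b", "a", "b", "c"));
    System.out.println(unique); // prints [b, a, c]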

Dan Vinton
Yep, `LinkedHashSet` will maintain the insertion order.
Ken Gentle
It is not good practice to override equals and hashCode "anyway", especially in any class that will sit in an inheritance hierarchy. See Effective Java (Bloch) for more.
McDowell
McDowell, I wasn't really clear - I meant that there should be an overridden version *somewhere* in your inheritance hierarchy, and I've amended the answer to reflect that. I've not got a copy of Effective Java - is this what Bloch is getting at?
Dan Vinton
+3  A: 

I found this on the web.

Here are two methods that allow you to remove duplicates from an ArrayList. removeDuplicate does not maintain the order, whereas removeDuplicateWithOrder maintains the order with some performance overhead.

  1. The removeDuplicate Method:

    /** List order not maintained **/
    public static <T> void removeDuplicate(ArrayList<T> arlList)
    {
        // A HashSet silently drops duplicates as it is populated.
        Set<T> set = new HashSet<T>(arlList);
        arlList.clear();
        arlList.addAll(set);
    }

  2. The removeDuplicateWithOrder Method:

    /** List order maintained **/
    public static <T> void removeDuplicateWithOrder(ArrayList<T> arlList)
    {
        Set<T> set = new HashSet<T>();
        List<T> newList = new ArrayList<T>();
        // Set.add returns false for duplicates, so only first occurrences survive.
        for (T element : arlList) {
            if (set.add(element)) {
                newList.add(element);
            }
        }
        arlList.clear();
        arlList.addAll(newList);
    }
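
As a side note (not from the original post), if order matters, the second method can be collapsed by letting a LinkedHashSet do the work, since it preserves insertion order on its own:

    /** List order maintained - LinkedHashSet variant (sketch). */
    public static <T> void removeDuplicateWithOrder(List<T> arlList)
    {
        // LinkedHashSet keeps the first occurrence of each element, in order.
        Set<T> set = new LinkedHashSet<T>(arlList);
        arlList.clear();
        arlList.addAll(set);
    }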
    
Markus Lausberg
Good answer, but your 2nd example isn't in a code format block
TravisO
Thanks to Ken G... I tried it a couple of times but I could not fix the second code block.
Markus Lausberg
LinkedHashSet keeps it in order. That may optimize it further.
Daddy Warbox
A: 

Speaking from a general programming standpoint, you could always double-enumerate the collection and then compare the source and target entries.

And if your inner enumeration always starts one entry after the source, it's fairly efficient (pseudocode to follow):

place = -1;
foreach ( array as source )
{
    // keep track of where we are in the array
    place++;
    // loop the array starting at the entry AFTER the current one we are comparing to
    for ( i = place + 1; i < length(array); i++ )
    {
        if ( source === array[i] )
        {
            destroy(array[i]);
        }
    }
}

You could arguably add a break statement after the destroy, but then you only catch the first duplicate; if that's all you will ever have, though, it would be a nice small optimization.
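
A rough Java sketch of this idea (names are illustrative, not from the answer): since a Java array cannot shrink, later duplicates are nulled out in place, as mentioned in an earlier answer.

    // Nested-loop de-duplication: for each element, clear any later equal entries.
    public static void destroyDuplicates(Object[] array) {
        for (int place = 0; place < array.length; place++) {
            Object source = array[place];
            if (source == null) {
                continue; // already cleared as a duplicate of an earlier entry
            }
            // start one entry after the element we are comparing against
            for (int i = place + 1; i < array.length; i++) {
                if (source.equals(array[i])) {
                    array[i] = null;
                }
            }
        }
    }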

TravisO
+1  A: 

I'd like to reiterate the point made by Jason in the comments:

Why place yourself at that point at all?

Why use an array for a data structure that shouldn't hold duplicates at all?

Use a Set (or a SortedSet when the elements also have a natural order) at all times to hold the elements. If you need to keep the insertion order, then you can use a LinkedHashSet, as has been pointed out.

Having to post-process some data structure is often a hint that you should have chosen a different one to begin with.
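
For instance (Customer is a hypothetical element type; TreeSet assumes it implements Comparable):

    // Pick the Set implementation that matches the ordering you need,
    // instead of de-duplicating an array after the fact.
    Set<Customer> anyOrder       = new HashSet<Customer>();       // no ordering guarantees
    Set<Customer> sortedOrder    = new TreeSet<Customer>();       // natural (Comparable) order
    Set<Customer> insertionOrder = new LinkedHashSet<Customer>(); // keeps insertion order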

Joachim Sauer
I agree with all the comments about the concerns with the initial data structure being an array. I am trying to lobby the developer to refactor to a Set. Thanks all for your feedback and wisdom!
Liggy
+1  A: 

Of course, the original post raises the question, "How did you get that array (that might contain duplicated entries) in the first place?"

Do you need the array (with duplicates) for other purposes, or could you simply use a Set from the beginning?

Alternatively, if you need to know the number of occurrences of each value, you could use a Map<CustomObject, Integer> to track counts. The Multimap classes defined in Google Collections may also be of use.
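
A small sketch of the counting idea (CustomObject stands in for the actual element type and is assumed to have sensible equals()/hashCode()):

    // Count occurrences of each value; the key set alone is the de-duplicated view.
    Map<CustomObject, Integer> counts = new HashMap<CustomObject, Integer>();
    for (CustomObject item : array) {
        Integer current = counts.get(item);
        counts.put(item, current == null ? 1 : current + 1);
    }
    Set<CustomObject> unique = counts.keySet();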

joel.neely
+1  A: 

Basically, you want a LinkedHashSet<T> implementation that supports the List<T> interface for random access. Hence, this is what you need:

public class LinkedHashSetList<T> extends LinkedHashSet<T> implements List<T> {

// Implementations for List<T> methods here ...

}

The implementation of the List<T> methods would access and manipulate the underlying LinkedHashSet<T>. The trick is to have this class behave correctly when one attempts to add duplicates via the List<T> add methods (throwing an exception or re-adding the item at a different index are both options; you can pick one or make it configurable by users of the class).
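
As an illustration only, one possible shape for two of those methods, taking the throw-an-exception option (a fragment meant to sit inside the class skeleton above, not a complete List<T> implementation):

    @Override
    public void add(int index, T element) {
        // The "throw an exception on duplicates" option described above.
        if (contains(element)) {
            throw new IllegalArgumentException("Duplicate element: " + element);
        }
        // The underlying LinkedHashSet only appends in insertion order,
        // so this sketch does not support insertion at arbitrary positions.
        if (index != size()) {
            throw new UnsupportedOperationException("Insertion only supported at the end");
        }
        add(element);
    }

    @Override
    public T get(int index) {
        // Walk the set's insertion order - O(n), acceptable for a sketch.
        int i = 0;
        for (T element : this) {
            if (i++ == index) {
                return element;
            }
        }
        throw new IndexOutOfBoundsException("Index: " + index);
    }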

Ryan Delucchi
This is what I suggest, too.
Daddy Warbox