views: 355

answers: 6

Can anyone tell me the order of complexity of the algorithm below? The algorithm does the following:

Given an unsorted array of integers with duplicate numbers, write the most efficient code to print out the unique values in the array.

I would also like to know: what are some pros and cons of this implementation in terms of hardware usage?

// Requires: using System; using System.Collections; using System.Linq;
private static void IsArrayDuplicated(int[] a)
{
    int size = a.Length;

    // One bit per possible value, from 0 up to the largest element.
    BitArray b = new BitArray(a.Max() + 1);

    // Mark each value that occurs in the array.
    for (int i = 0; i < size; i++)
    {
        b.Set(a[i], true);
    }

    // Print every marked value once, in ascending order.
    for (int i = 0; i < b.Count; i++)
    {
        if (b.Get(i))
        {
            Console.WriteLine(i);
        }
    }

    Console.ReadLine();
}
+4  A: 

You have two for loops, one of length a.Length and one of length (if I understand the code correctly) a.Max() + 1. The a.Max() call itself is another linear scan of the array. So your algorithmic complexity is O(a.Length + a.Max()).
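
As a rough breakdown, here is the question's method restated with per-step cost annotations (the comments are mine, not part of the original):

// The question's method, annotated with per-step costs.
private static void IsArrayDuplicated(int[] a)
{
    int size = a.Length;                    // O(1)
    BitArray b = new BitArray(a.Max() + 1); // a.Max() scans the array: O(a.Length)

    for (int i = 0; i < size; i++)          // O(a.Length) iterations...
    {
        b.Set(a[i], true);                  // ...at O(1) each
    }

    for (int i = 0; i < b.Count; i++)       // O(a.Max() + 1) iterations...
    {
        if (b.Get(i))                       // ...at O(1) each
        {
            Console.WriteLine(i);
        }
    }
    // Total: O(a.Length) + O(a.Length) + O(a.Max() + 1) = O(a.Length + a.Max())
}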

pavpanchekha
It's funny how the correct answer didn't get the upvotes it deserved. :-(
phoku
A: 

You have two loops, each with a number of iterations on the order of n. I agree with whaley, but that should give you a good start on it.

Jeff Ober
+4  A: 

The complexity of the algorithm is linear.

Finding the maximum is linear. Setting the bits is linear.

However, the algorithm is also wrong, unless your integers can be assumed to be non-negative.

It also has a problem with large integers: do you really want to allocate int.MaxValue/8 bytes (roughly 256 MB) of memory?
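
For reference, a minimal sketch of an offset-based variant that addresses both problems: it shifts values by the minimum so negative integers work, and it allocates only max - min + 1 bits instead of bits for the whole int range. The class and method names here are made up for illustration:

using System;
using System.Collections;
using System.Linq;

static class OffsetBitArrayDemo
{
    // Handles negative values, and sizes the bit array by the actual
    // value range (which must still fit in memory) rather than by the
    // largest value.
    static void PrintUniqueValues(int[] a)
    {
        if (a.Length == 0) return;

        int min = a.Min(); // O(n)
        int max = a.Max(); // O(n)

        // One bit per value in [min, max].
        BitArray b = new BitArray(max - min + 1);
        foreach (int value in a)
        {
            b.Set(value - min, true); // shift into [0, max - min]
        }

        for (int i = 0; i < b.Count; i++)
        {
            if (b.Get(i))
            {
                Console.WriteLine(i + min); // shift back to the original value
            }
        }
    }
}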

The name, btw, makes me cringe. IsXYZ() should always return a bool.

I'd say, try again.

Correction - pavpanchekha has the correct answer.

phoku
A: 

O(n) on a.Length

Matt Joiner
A: 

The complexity of your algorithm is O(N), but the algorithm is not correct.

  1. If the numbers are negative, it will not work.
  2. With large numbers you will have problems with memory.

I suggest using this approach (in Java):

// Requires: import java.util.HashSet; import java.util.Set;
private static void IsArrayDuplicated(int[] a) {
    // A HashSet stores each value at most once, so duplicates collapse.
    Set<Integer> b = new HashSet<Integer>();
    for (int i = 0; i < a.length; i++) {
        b.add(a[i]);
    }

    // Print the distinct values (iteration order is unspecified).
    for (Integer value : b) {
        System.out.println(value);
    }
}
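
One note on this approach: a HashSet does not preserve insertion order, so the values print in an arbitrary order. If the output should follow the order in which values first appear in the input, java.util.LinkedHashSet preserves insertion order and is a drop-in replacement here.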
rachvela
+1  A: 

O(n) is probably only possible for a finite/small domain of integers; think of bucket sort. The HashSet approach is basically not O(n) but O(n^2) in the worst case, since a worst-case insertion into a hash table is O(n), NOT constant.

How about sorting the list in O(n*log(n)) and then walking through it, printing each value that differs from its predecessor? This results in O(n*log(n)), which is probably the true complexity of the problem.
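
A minimal sketch of that idea in C# (class and method names are made up for illustration):

using System;

static class SortScanDemo
{
    // Sort a copy, then print each value the first time it appears.
    // O(n log n) time, works for negative values, no bit array needed.
    static void PrintUniqueBySorting(int[] a)
    {
        int[] sorted = (int[])a.Clone(); // don't mutate the caller's array
        Array.Sort(sorted);              // O(n log n)

        for (int i = 0; i < sorted.Length; i++)
        {
            // A value is new if it differs from its predecessor.
            if (i == 0 || sorted[i] != sorted[i - 1])
            {
                Console.WriteLine(sorted[i]);
            }
        }
    }

    static void Main()
    {
        PrintUniqueBySorting(new[] { 3, -1, 3, 0, -1, 7 }); // prints -1 0 3 7
    }
}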

ziggystar