views:

2096

answers:

18

An interesting interview question that a colleague of mine uses:

Suppose that you are given a very long, unsorted list of unsigned 64-bit integers. How would you find the smallest non-negative integer that does not occur in the list?

FOLLOW-UP: Now that the obvious solution by sorting has been proposed, can you do it faster than O(n log n)?

FOLLOW-UP: Your algorithm has to run on a computer with, say, 1GB of memory

CLARIFICATION: The list is in RAM, though it might consume a large amount of it. You are given the size of the list, say N, in advance.

+2  A: 

Sort the list, look at the first and second elements, and start going up until there is a gap.

James Black
Depends on how you define, Not in the list.
James Black
Is there more than one way?
PeterAllenWebb
@PeterAllenWebb - There will be, but are the numbers in random order, or sorted?
James Black
+2  A: 

I'd just sort them then run through the sequence until I find a gap (including the gap at the start between zero and the first number).

In terms of an algorithm, something like this would do it:

def smallest_not_in_list(list):
    sort(list)
    if list[0] != 0:
        return 0
    for i = 1 to list.last:
        if list[i] != list[i-1] + 1:
            return list[i-1] + 1
    if list[list.last] == 2^64 - 1:
        assert ("No gaps")
    return list[list.last] + 1
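The same sort-and-scan idea as runnable Python (a sketch; like the pseudocode above, it assumes the values are non-negative):

```python
def smallest_not_in_list(values):
    # Sort, then walk up from 0 until we hit the first gap.
    expected = 0
    for v in sorted(values):
        if v > expected:      # gap found just before v
            return expected
        if v == expected:     # v matches; duplicates simply fall through
            expected += 1
    return expected           # no gap: the list covered 0..expected-1
```

Note that duplicates are handled for free: a repeated value is neither greater than nor equal to `expected`, so it is skipped.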

Of course, if you have a lot more memory than CPU grunt, you could create a bitmask of all possible 64-bit values and just set the bits for every number in the list. Then look for the first 0-bit in that bitmask. That turns it into an O(n) operation in terms of time but pretty damned expensive in terms of memory requirements :-)

I doubt you could improve on O(n) since I can't see a way of doing it that doesn't involve looking at each number at least once.

The algorithm for that one would be along the lines of:

def smallest_not_in_list(list):
    bitmask = mask_make(2^64) // might take a while :-)
    mask_clear_all (bitmask)
    for i = 0 to list.last:
        mask_set (bitmask, list[i])
    for i = 0 to 2^64 - 1:
        if mask_is_clear (bitmask, i):
            return i
    assert ("No gaps")
paxdiablo
From the description it seems to preclude 0 to the first element, as it is the smallest not in the list. But, that is an assumption I made, I could be wrong.
James Black
My thoughts were that if the sorted sequence was 4,5,6, then 0 would be the smallest not in the list.
paxdiablo
I expect that 2, 3, 5, the answer should be 4, but, I could be wrong.
James Black
A question that should be answered by the OP. Is the search space "all 64-bit unsigned integers" or "all numbers between the lowest and highest in the list"?
paxdiablo
I agree that in the worst case you have to look at least once, unless it was already sorted in a binary tree perhaps.
James Black
@paxdiablo The question should be taken completely literally. The smallest integer that does not occur in the list. Theoretically the list could contain all 2^64 integers, but let's just say that will never happen since it would require an imposing amount of memory.
PeterAllenWebb
@JamesBlack The answer for your example of [2,3,5] should be 0.
PeterAllenWebb
Then you just sort them. Then if n[0] != 0, the answer is 0. Otherwise the answer is n[i]+1 for the first i where n[i+1] != n[i]+1.
paxdiablo
+8  A: 

Since the numbers are all 64 bits long, we can use radix sort on them, which is O(n). Sort 'em, then scan 'em until you find what you're looking for.

if the smallest number is zero, scan forward until you find a gap. If the smallest number is not zero, the answer is zero.
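A Python sketch of this approach, using an LSD radix sort over one byte at a time (eight bucket passes for 64-bit keys), followed by the linear scan:

```python
def radix_sort_u64(nums):
    # LSD radix sort: 8 passes, one per byte of the 64-bit key.
    for shift in range(0, 64, 8):
        buckets = [[] for _ in range(256)]
        for x in nums:
            buckets[(x >> shift) & 0xFF].append(x)
        nums = [x for bucket in buckets for x in bucket]
    return nums

def smallest_missing(nums):
    expected = 0
    for x in radix_sort_u64(nums):
        if x > expected:
            return expected
        if x == expected:
            expected += 1
    return expected
```

Each pass is stable, so after all eight passes the list is fully sorted; total work is O(8n) = O(n), at the cost of O(n) extra space for the buckets.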

Barry Brown
True, but the memory requirements could get pretty intense for radix sort.
PeterAllenWebb
Radix sort won't work for very large data sets. But partition plus radix sort might work.
+6  A: 

As pointed out in other answers you can do a sort, and then simply scan up until you find a gap.

You can improve the algorithmic complexity to O(N) and keep O(N) space by using a modified QuickSort where you eliminate partitions which are not potential candidates for containing the gap.

  • On the first partition phase, remove duplicates.
  • Once the partitioning is complete look at the number of items in the lower partition
  • Is this value equal to the value used for creating the partition?
    • If so then it implies that the gap is in the higher partition.
      • Continue with the quicksort, ignoring the lower partition
    • Otherwise the gap is in the lower partition
      • Continue with the quicksort, ignoring the higher partition

This saves a large number of computations.
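A Python sketch of the pruning idea (not in place, for clarity; duplicates are dropped up front with a set here, where the in-place version would remove them during the first partition phase):

```python
def smallest_missing(nums):
    vals = list(set(nums))      # drop duplicates; the answer is unchanged
    lo = 0                      # smallest candidate answer so far
    while vals:
        pivot = vals[len(vals) // 2]
        lower = [x for x in vals if x < pivot]
        upper = [x for x in vals if x > pivot]
        if len(lower) == pivot - lo:
            # [lo, pivot) is fully occupied and pivot itself is present,
            # so the gap lies strictly above the pivot
            lo = pivot + 1
            vals = upper
        else:
            # some value in [lo, pivot) is missing; discard the upper side
            vals = lower
    return lo
```

Only one side of each partition is ever revisited, which is what brings the expected cost down from O(N log N) to O(N).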

cdiggins
That's pretty nifty. It would assume you can compute the length of the partition in less than linear time, which can be done if that's stored along with the partition array. It also assumes the original list is held in RAM.
Barry Brown
If you know the length of the list, you can also cull any values greater than len(list). By the pigeonhole principle, any 'holes' must be less than len(list).
swillden
I don't think that's O(n)... For one, I'm not sure you can remove duplicates until a list is fully sorted. Secondly, while you can guarantee the throwing away of half the search space each iteration (because you've divided into under and over the midpoint), you *still* have multiple passes (dependent on n) over data that's dependent on n.
paxdiablo
paxdiablo: You can build a new list with only unique values by using a bitmap method like what Stephen C proposed. This runs in O(n) time and space. I'm not sure if it can be done better than that.
Nic
A: 

I am not sure if I got the question. But if for list 1,2,3,5,6 and the missing number is 4, then the missing number can be found in O(n) by: (n+2)(n+1)/2-(n+1)n/2

EDIT: sorry, I guess I was thinking too fast last night. Anyway, The second part should actually be replaced by sum(list), which is where O(n) comes. The formula reveals the idea behind it: for n sequential integers, the sum should be (n+1)*n/2. If there is a missing number, the sum would be equal to the sum of (n+1) sequential integers minus the missing number.
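To illustrate the formula with a hypothetical helper (this only works when the list holds each of 0..n exactly once except for a single missing value):

```python
def single_missing(values):
    # values contains each of 0..n exactly once, minus one missing number
    n = len(values)               # n values present means the range is 0..n
    expected = n * (n + 1) // 2   # sum of 0..n
    return expected - sum(values)
```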

Thanks for pointing out the fact that I was putting some middle pieces in my mind.

Codism
I do not, at first glance, see how this would work. In your case n=5 and the formula will be fixed, no matter which number in it was missing.
Simon Svensson
Simon: could you now please remove the down vote according to my edit?
Codism
+38  A: 

Here's a simple O(N) solution that uses O(N) space. I'm assuming that we are restricting the input list to non-negative numbers and that we want to find the first non-negative number that is not in the list.

  1. Find the length of the list; let's say it is N.
  2. Allocate an array of N booleans, initialized to all false.
  3. For each number X in the list, if X is less than N, set the X'th element of the array to true.
  4. Scan the array starting from index 0, looking for the first element that is false. If you find the first false at index I, then I is the answer. Otherwise (i.e. when all elements are true) the answer is N.

In practice, the "array of N booleans" would probably be encoded as a "bitmap" or "bitset" represented as a byte or int array. This typically uses less space (depending on the programming language) and allows the scan for the first false to be done more quickly.
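A direct Python rendering of steps 1 through 4 (a sketch; a real implementation would likely pack the flags into a bitset as described):

```python
def smallest_missing(nums):
    n = len(nums)
    seen = [False] * n      # seen[i] will be True iff value i occurs in nums
    for x in nums:
        if x < n:           # values >= n can never be the answer (step 3)
            seen[x] = True
    for i, present in enumerate(seen):
        if not present:     # first False flag is the answer (step 4)
            return i
    return n                # nums is a permutation of 0..n-1
```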


This is how / why the algorithm works.

Suppose that the N numbers in the list are not distinct, or that one or more of them is greater than N. This means that there must be at least one number in the range 0 .. N - 1 that is not in the list. So the problem of finding the smallest missing number reduces to the problem of finding the smallest missing number less than N. This means that we don't need to keep track of numbers that are greater than or equal to N ... because they won't be the answer.

The alternative to the previous paragraph is that the list is a permutation of the numbers from 0 .. N - 1. In this case, step 3 sets all elements of the array to true, and step 4 tells us that the first "missing" number is N.


The computational complexity of the algorithm is O(N) with a relatively small constant of proportionality. It makes two linear passes through the list, or just one pass if the list length is known to start with. There is no need to hold the entire list in memory, so the algorithm's asymptotic memory usage is just what is needed to represent the array of booleans; i.e. O(N) bits.

(By contrast, algorithms that rely on in-memory sorting or partitioning assume that you can represent the entire list in memory. In the form the question was asked, this would require O(N) 64-bit words.)


@Jorn comments that steps 1 through 3 are a variation on counting sort. In a sense he is right, but the differences are significant:

  • A counting sort requires an array of (at least) Xmax - Xmin counters, where Xmax is the largest number in the list and Xmin is the smallest number in the list. Each counter has to be able to represent N states; i.e. assuming a binary representation, it has to be an integer type of (at least) ceiling(log2(N)) bits.
  • To determine the array size, a counting sort needs to make an initial pass through the list to determine Xmax and Xmin.
  • The minimum worst-case space requirement is therefore ceiling(log2(N)) * (Xmax - Xmin) bits.

By contrast, the algorithm presented above simply requires N bits in the worst and best cases.

However, this analysis leads to the intuition that if the algorithm made an initial pass through the list looking for a zero (and counting the list elements if required), it would give a quicker answer using no space at all if it found the zero. It is definitely worth doing this if there is a high probability of finding at least one zero in the list. And this extra pass doesn't change the overall complexity.


EDIT: I've changed the description of the algorithm to use "array of booleans" since people apparently found my original description using bits and bitmaps to be confusing.

Stephen C
This is the best answer I know of. I think you could probably cut down the memory consumption, but it would be ideal in terms of speed.
PeterAllenWebb
You beat me to it. This is what I was going to suggest. Here's why it works: By the pigeonhole principle, any gap must be less than N. So, we can discard any values greater than N. For the values less than N, we have a flag for each value. Scan the list and set the flags, then scan the flags to find a hole.
swillden
Stephen, please provide the "Why this works" bit. SO is supposed to be a place for answers, not puzzles.
paxdiablo
i don't understand.. if N is a really large number, step 3 might give me a bitmap with all bits set to 1.. because the list is unsorted, I don't understand how step 4 gives me the right answer
adi92
@adi92 If step 3 gives you a bitmap with all bits set to 1, then the list contains every value from 0 to N-1. That means the smallest non-negative integer in the list is N. If there's any value between 0 and N-1 that is NOT in the list, then the corresponding bit will not be set. The smallest such value is therefore the answer.
swillden
wait, where in the question specification does it mention that there cannot be duplicates?doesnt this logic work only if there are N distinct numbers?
adi92
i mean if the list is [1,2,3,1,2,3,1,2,3 .. 100 times], the answer is not 100
adi92
@adi92 In your example, the list would contain 300 elements. That means that if there is any "missing" value, it must be less than 300. Running the algorithm, we'd create a bitfield with 300 slots, then repeatedly set the bits in slots 1, 2, and 3, leaving all of the other slots -- 0 and 4 through 299 -- clear. When scanning the bitfield we'd find the flag in slot 0 clear, so we'd know 0 is the answer.
swillden
Note that this algorithm might be more simply understood without the bit twiddling: "Create a Boolean array of size N" etc. Once you understand it that way, moving to a bitwise version is conceptually easy.
Jon Skeet
Turning my -1 into a +1 now that you've explained it. Pretty clever, it's actually the same as my bitmap solution but without the 2.5 exabyte memory requirement :-) - I had to make a minor edit to allow the vote reversal.
paxdiablo
Conceptually boolean array and bitmap are the same thing.
starblue
When giving an abstract solution, use the conceptually simplest way that works, and don't overly specialize. Your solution screams for the use of an (abstract) boolean array, so call it that. That you might implement this array by `bool[]` or by a bitmap is irrelevant to the general solution.
Joren
I think this solution might be best described by "Use a counting sort that disregards elements above N, then find the first missing element by doing a linear search from the start."
Joren
To me, bitmap == 2d image. so this was a bit confusing... but it makes total sense now that I understand that is a 1d bool array :p
Svish
Your comments on Counting Sort are correct and valid, but I didn't say 'that disregards elements above N' for nothing. :) Xmax becomes N, and with Xmin set to 0 (which I forgot isn't standard counting sort) the issues are solved. Anyway, I like your elaboration and your answer had always been the best.
Joren
+8  A: 

Since the OP has now specified that the original list is held in RAM and that the computer has only, say, 1GB of memory, I'm going to go out on a limb and predict that the answer is zero.

1GB of RAM means the list can have at most 134,217,728 numbers in it. But there are 2^64 = 18,446,744,073,709,551,616 possible numbers. So the probability that zero is in the list is 1 in 137,438,953,472.
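The arithmetic can be checked directly (assuming 1 GiB of RAM and 8 bytes per unsigned 64-bit value, and uniformly random values):

```python
GIB = 2**30                  # 1 GiB in bytes
capacity = GIB // 8          # 8 bytes per unsigned 64-bit integer
odds = 2**64 // capacity     # reciprocal of the probability that any one
                             # fixed value appears, if values were uniform
print(capacity)              # 134217728
print(odds)                  # 137438953472
```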

In contrast, my odds of being struck by lightning this year are 1 in 700,000. And my odds of getting hit by a meteorite are about 1 in 10 trillion. So I'm about ten times more likely to be written up in a scientific journal due to my untimely death by a celestial object than the answer not being zero.

Barry Brown
Your calculation only holds if the values are uniformly distributed and selected at random. They could just as well have been generated sequentially.
swillden
You're correct, of course. But I'm all about optimizing for the common case. :)
Barry Brown
So, what are the odds of interviewee getting selected with this answer?
Amarghosh
The question does not say the numbers are selected uniformly at random. They are selected by the person setting this question. Given this, the probability of 0 being in the list is *much* larger than 1 in 137,438,953,472, probably even larger than 1 in 2. :-)
ShreevatsaR
@Amarghosh The answer to that question is also zero.
PeterAllenWebb
+4  A: 

To illustrate one of the pitfalls of O(N) thinking, here is an O(N) algorithm that uses O(1) space.

for i in [0..2^64):
  if i not in list: return i

print "no 64-bit integers are missing"
I. J. Kennedy
Breaking out of the loop early makes the runtime O(n^2).
Will Harris
Will is right. This is not O(n) because you actually have two loops here, but one is implicit. Determining whether a value is in a list is an O(n) operation, and you're doing that n times in your for loop. That makes it O(n^2).
Nic
Nic, Will, it's O(n * N) where n is the size of the list and N is the size of the domain (64bit integers). While N is a huge number, it is still a constant so formally the complexity for the problem as stated is O(n).
Ants Aasma
Ants, I agree that it's O(n*N), but N is not constant. Because the algorithm finishes when it finds the answer, the number of complete iterations through the outer loop is equal to the answer, which itself is bound by the size of the list. So, O(N*n) is O(n^2) in this case.
Will Harris
Looking for a number in a list of N elements is clearly O(N). We do this 2^64 times. While large, 2^64 is a CONSTANT. Therefore the algorithm is C*O(N), which is still O(N).
I. J. Kennedy
I must recant my previous statement; by the strictest definition, this operation is indeed O(n).
Nic
A: 

You can do it in O(n) time and O(1) additional space, although the hidden factor is quite large. This isn't a practical way to solve the problem, but it might be interesting nonetheless.

For every unsigned 64-bit integer (in ascending order) iterate over the list until you find the target integer or you reach the end of the list. If you reach the end of the list, the target integer is the smallest integer not in the list. If you reach the end of the 64-bit integers, every 64-bit integer is in the list.

Here it is as a Python function:

def smallest_missing_uint64(source_list):
    the_answer = None

    target = 0
    while target < 2**64:

        target_found = False
        for item in source_list:
            if item == target:
                target_found = True

        if not target_found and the_answer is None:
            the_answer = target

        target += 1

    return the_answer

This function is deliberately inefficient to keep it O(n). Note especially that the function keeps checking target integers even after the answer has been found. If the function returned as soon as the answer was found, the number of times the outer loop ran would be bound by the size of the answer, which is bound by n. That change would make the run time O(n^2), even though it would be a lot faster.

Will Harris
True. It's amusing how horribly some of the algorithms that are O(1) space and O(n) time fail in practice with this question.
PeterAllenWebb
+4  A: 

If all the values are distinct, there is a space-efficient method that runs in O( k ) space and O( k*log(N)*N ) time. There is no data movement, and all operations are elementary (addition and subtraction).

  1. set U = N; L=0
  2. First partition the number space in k regions. Like this:
    • 0->(1/k)*(U-L) + L, 0->(2/k)*(U-L) + L, 0->(3/k)*(U-L) + L ... 0->(U-L) + L
  3. Find how many numbers (count{i}) are in each region. (N*k steps)
  4. Find the first region (h) that isn't full. That means count{h} < upper_limit{h}. (k steps)
  5. if h - count{h-1} = 1 you've got your answer
  6. set U = count{h}; L = count{h-1}
  7. goto 2

This can be improved using hashing (thanks to Nic for this idea).

  1. same
  2. First partition the number space in k regions. Like this:
    • L + (i/k)*(U-L) -> L + ((i+1)/k)*(U-L)
  3. inc count{j} using j = (number - L)/k (if L < number < U)
  4. find first region (h) that doesn't have k elements in it
  5. if count{h} = 1 h is your answer
  6. set U = maximum value in region h; L = minimum value in region h

This will run in O(log(N)*N).
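A Python sketch of the idea (assuming distinct values; the bucket arithmetic below is my own rendering of the steps above, with the pigeonhole bound folded in so the search space starts at [0, N+1)):

```python
def smallest_missing(nums, k=16):
    # N distinct values cannot cover all of 0..N, so the answer is in [0, N].
    lo, hi = 0, len(nums) + 1           # candidate half-open range [lo, hi)
    while hi - lo > 1:
        width = -(-(hi - lo) // k)      # ceil((hi - lo) / k): bucket width
        counts = [0] * k
        for x in nums:                  # count occupancy of each bucket
            if lo <= x < hi:
                counts[(x - lo) // width] += 1
        for j in range(k):
            if j * width >= hi - lo:
                break
            cap = min(width, hi - lo - j * width)  # integers the bucket spans
            if counts[j] < cap:         # not full: a missing value is in here
                lo, hi = lo + j * width, min(lo + (j + 1) * width, hi)
                break
    return lo
```

Each round shrinks the range by a factor of k at the cost of one pass over the list, giving the O(N log N) total with only O(k) working space.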

egon
I really like this answer. It was a bit hard to read, but it's very similar to what I had in my head when I read the question.
Nic
also at some point it would be smart to switch to that bitmap solution by Stephen C. probably when `U-L < k`
egon
A: 

Thanks to egon, swillden, and Stephen C for my inspiration. First, we know the bounds of the goal value because it cannot be greater than the size of the list. Also, a 1GB list could contain at most 134217728 (128 * 2^20) 64-bit integers.

Hashing part
I propose using hashing to dramatically reduce our search space. First, take the square root of the size of the list. For a 1GB list, that's N ≈ 11,586. Set up an integer array of size N. Iterate through the list, and take the square root* of each number you find as your hash. In your hash table, increment the counter for that hash. Next, iterate through your hash table. The first bucket you find that is not equal to its max size defines your new search space.

Bitmap part
Now set up a regular bit map equal to the size of your new search space, and again iterate through the source list, filling out the bitmap as you find each number in your search space. When you're done, the first unset bit in your bitmap will give you your answer.

This will be completed in O(n) time and O(sqrt(n)) space.

(*You could use something like bit shifting to do this a lot more efficiently, and just vary the number and size of buckets accordingly.)

Nic
I like the idea of dividing the search space into Root-N buckets to reduce memory footprint, but duplicates in the list would break this method. I do wonder if it can be fixed.
PeterAllenWebb
You're right, I neglected to consider duplicate entries. I'm not sure that can be worked around.
Nic
A: 

How would you find the smallest non-negative integer that does not occur in the list?

That should be easy, and in O(1): the answer is 0, or the question cannot be answered.

If 0 is included in the list, no smaller non-negative integer exists. If 0 is not included in the list, it must be the smallest non-negative integer :-)

rsp
Hah! I was thinking along those lines in the beginning too, but I don't think that's what was intended in the question :). Besides, to find the smallest non-negative integer (if not 0), the algorithm would have to iterate over the list a maximum of n times, which makes the algorithm O(n) (unless I'm missing something)
Chinmay Kanchi
In that case the question should be improved so that it makes clear what is wanted, for instance: "the smallest integer not in the list while delimited by values in the list"; the "non-negative" part is superfluous in this case.
rsp
No. The question is looking for the first non-negative integer that doesn't make an appearance in a given array. If 0 is in the array, but 1 is nowhere in the array then the answer is 1.
Paul Hsieh
A: 

Well if there is only one missing number in a list of numbers, the easiest way to find the missing number is to sum the series and subtract each value in the list. The final value is the missing number.

Jeff Lundstrom
Yeah. That is another classic interview question.
PeterAllenWebb
A: 

As Stephen C smartly pointed out, the answer must be a number smaller than the length of the array. I would then find the answer by binary search. This optimizes the worst case (so the interviewer can't catch you in a 'what if' pathological scenario). In an interview, do point out you are doing this to optimize for the worst case.

The way to use binary search is to subtract the number you are looking for from each element of the array, and check for negative results.
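One way to read this as a concrete procedure (my interpretation, and it assumes distinct values): binary-search on the answer, using the count of elements below a candidate to decide which half contains the gap.

```python
def smallest_missing(nums):
    # Assumes all values are distinct; the answer is in [0, len(nums)].
    lo, hi = 0, len(nums) + 1          # half-open candidate range [lo, hi)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        below = sum(1 for x in nums if lo <= x < mid)
        if below == mid - lo:          # [lo, mid) is fully occupied
            lo = mid                   # the gap must be in [mid, hi)
        else:
            hi = mid                   # the gap is in [lo, mid)
    return lo
```

This takes O(N log N) time but only O(1) extra space, and every iteration is a full scan, so there is no pathological input that makes it degrade.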

Emilio M Bumachar
A: 

I like the "guess zero" approach. If the numbers were random, zero is highly probable. If the "examiner" set a non-random list, then add one and guess again:

LowNum=0
i=0
do forever {
  if i == N then leave /* Processed entire array */
  if array[i] == LowNum {
     LowNum++
     i=0
     }
   else {
     i++
   }
}
display LowNum

The worst case is n*N with n=N, but in practice n is highly likely to be a small number (eg. 1)

NealB
+18  A: 

If the data structure can be mutated in place and supports random access, then you can do it in O(N) time and O(1) additional space. Just go through the array sequentially, and for every index, write the value at that index to the index specified by its value, recursively placing whatever value was at that location into its own place and throwing away values > N. Then go through the array again looking for the spot where the value doesn't match the index - that's the smallest value not in the array. This results in at most 3N comparisons and only uses a few values' worth of temporary space.

# Pass 1, move every value to the position of its value
for cursor in range(N):
    target = array[cursor]
    while target < N and target != array[target]:
        new_target = array[target]
        array[target] = target
        target = new_target

# Pass 2, find first location where the index doesn't match the value
for cursor in range(N):
    if array[cursor] != cursor:
        return cursor
return N
Ants Aasma
That's what I call an answer.
PeterAllenWebb
Small nitpick. You've missed a trivial case: when the list is {0, ..., N-1}. In that case pass 1 does nothing and in pass 2 array[cursor] == cursor for all the entries in the list, so the algorithm does not return. So you need a 'return N' statement at the end.
Alex
Alex: good catch, thanks.
Ants Aasma
A: 

Well done Ants Aasma! I thought about the answer for about 15 minutes and independently came up with an answer in a similar vein of thinking to yours:

#define SWAP(x,y) { numerictype_t tmp = x; x = y; y = tmp; }
int minNonNegativeNotInArr (numerictype_t * a, size_t n) {
    int m = n;
    for (int i = 0; i < m;) {
        if (a[i] >= m || a[i] < i || a[i] == a[a[i]]) {
            m--;
            SWAP (a[i], a[m]);
            continue;
        }
        if (a[i] > i) {
            SWAP (a[i], a[a[i]]);
            continue;
        }
        i++;
    }
    return m;
}

m represents "the current maximum possible output given what I know about the first i inputs and assuming nothing else about the values until the entry at m-1".

This value of m will be returned only if (a[i], ..., a[m-1]) is a permutation of the values (i, ..., m-1). Thus if a[i] >= m or if a[i] < i or if a[i] == a[a[i]] we know that m is the wrong output and must be at least one element lower. So decrementing m and swapping a[i] with a[m] we can recurse.

If this is not true but a[i] > i then knowing that a[i] != a[a[i]] we know that swapping a[i] with a[a[i]] will increase the number of elements in their own place.

Otherwise a[i] must be equal to i, in which case we can increment i, knowing that all the values up to and including this index are equal to their index.

The proof that this cannot enter an infinite loop is left as an exercise to the reader. :)

Paul Hsieh
A: 

What about using a hash table to hold the numbers? Once all numbers are done, run a counter from 0 till we find the lowest... A reasonably good hash will hash and store in constant time, and retrieves in constant time.

for every i in X                    // one scan over the list
    hashtable.put(i, i);            // O(1) expected per insert

low = 0;

while (hashtable.get(low) <> null)  // at most n+1 probes
    low ++;

print low;

The worst case is when there are n elements in the array and they are {0, 1, ..., n-1}, in which case the answer is obtained at n, still keeping O(n).
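In Python, the same idea with a built-in set (a sketch):

```python
def smallest_missing(nums):
    seen = set(nums)        # one pass over the list, O(1) expected per insert
    low = 0
    while low in seen:      # at most len(nums) + 1 membership tests
        low += 1
    return low
```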

Chandra
Please read as: A reasonably good hash will hash in O(1).
Chandra