views: 2601
answers: 13

I have been noticing some very strange usage of O(1) in discussions of algorithms involving hashing and types of search, often in the context of using a dictionary type provided by the language system, or using dictionary or hash-array types accessed with array-index notation.

Basically, O(1) means bounded by a constant time and (typically) fixed space. Some pretty fundamental operations are O(1), although using intermediate languages and special VMs tends to distort one's thinking here (e.g., how does one amortize the garbage collector and other dynamic processes over what would otherwise be O(1) activities?).

But ignoring amortization of latencies, garbage collection, and so on, I still don't understand the leap to assuming that certain techniques that involve some kind of searching can be O(1), except under very special conditions.

Although I have noticed this before, an example just showed up in the Pandincus question, "'Proper' collection to use to obtain items in O(1) time in C# .NET?".

As I remarked there, the only collection I know of that provides O(1) access as a guaranteed bound is a fixed-bound array with an integer index value. The presumption is that the array is implemented by some mapping to random access memory that uses O(1) operations to locate the cell having that index.

For collections that involve some sort of searching to determine the location of a matching cell for a different kind of index (or for a sparse array with an integer index), life is not so easy. In particular, if collisions and congestion are possible, access is not exactly O(1). And if the collection is flexible, one must recognize and amortize the cost of expanding the underlying structure (such as a tree or a hash table) to relieve congestion (e.g., a high collision incidence or tree imbalance).

I would never have thought to speak of these flexible and dynamic structures as O(1). Yet I see them offered up as O(1) solutions without any identification of the conditions that must be maintained to actually have O(1) access be assured (as well as have that constant be negligibly small).

THE QUESTION: All of this preparation is really for a question. Why is O(1) used so casually, and why is it accepted so blindly? Is it recognized that even O(1) can be undesirably large, even though near-constant? Or is O(1) simply the appropriation of a computational-complexity notion to informal use? I'm puzzled.

UPDATE: The Answers and comments point out where I was casual about defining O(1) myself, and I have repaired that. I am still looking for good answers, and some of the comment threads are rather more interesting than their answers, in a few cases.

+2  A: 

HashTable look-ups are O(1) with respect to the number of items in the table, because no matter how many items you add to the table, the cost of hashing a single item is pretty much the same, and the hash tells you the address of the item.
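
For instance, a small Python sketch (sizes chosen arbitrarily; absolute timings depend on the machine and the runtime's dictionary implementation) shows the cost of a single look-up staying roughly flat as the table grows:

    import timeit

    # Time one million look-ups of a key that is present, at two very different table sizes.
    for size in (1_000, 1_000_000):
        table = {i: i for i in range(size)}
        seconds = timeit.timeit(lambda: table[size // 2], number=1_000_000)
        print(f"{size:>9} items: {seconds:.3f} s for 1,000,000 look-ups")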


To answer why this is relevant: the OP asked about why O(1) seemed to be thrown around so casually when in his mind it obviously could not apply in many circumstances. This answer explains that O(1) time really is possible in those circumstances.

Joel Coehoorn
Only if it's a perfect hash and no two items generate the same hash - otherwise it becomes O(some function of n).
paxdiablo
Also, that's only if your hash key maps to some key in some index in memory. Otherwise you have to have a lookup table of some sort to figure out where the hashed element resides.
Kibbee
HashTable look-ups approach O(1) but are not guaranteed to reach it.
JaredPar
@Kibbee, that's still O(1) if the hash function itself is O(1), since the table lookup is also not dependent on the data size - what I mean is it's O(1) for a slightly bigger "1" :-)
paxdiablo
Close enough ... +1 :)
Timothy Khouri
@Tim, did you read the questions at all? This answer appears to be for a totally different question :-)
paxdiablo
@Pax - a perfect hash will increase the cost of checking that an item is *not* in a hash table. Hash tables that increase bucket count to maintain a constant fill ratio will amortize to O(1) on average, not O(some function of n).
Barry Kelly
@Barry, I may have been unclear - a perfect hash will guarantee no collisions - if you have a 3-character unique 'key' containing 'A' - 'Z' characters, a perfect hash will turn this into an int from 1 thru 26^3 and use that as an index; that's definitely O(1).
paxdiablo
And, if you increase the bucket count, the hash function has to change to take that into account. Re-calc-ing all the hashes in that case is O(n). If you don't change the hash function, then you have to resort to chaining which again becomes O(n).
paxdiablo
Basically, hash tables approach O(1) if you can control the input data and optimize the hashing function for it.
paxdiablo
Resizing is done on insertion, not look-up, so recalculating the hashes doesn't count against the amortization of the look-up time.
Bill the Lizard
Nice discussion. I would object to confusing the possibility of (small) constant access time with actuality but the comments are so useful that I almost want to up-vote the response!
orcmid
A: 

I think when many people throw around the term "O(1)" they implicitly have in mind a "small" constant, whatever "small" means in their context.

You have to take all this big-O analysis with context and common sense. It can be an extremely useful tool or it can be ridiculous, depending on how you use it.

John D. Cook
+27  A: 

My understanding is that O(1) is not necessarily constant; rather, it is not dependent on the variables under consideration. Thus a hash lookup can be said to be O(1) with respect to the number of elements in the hash, but not with respect to the length of the data being hashed or ratio of elements to buckets in the hash.

The other element of confusion is that big O notation describes limiting behavior. Thus, a function f(N) for small values of N may indeed show great variation, but you would still be correct to say it is O(1) if the limit as N approaches infinity is constant with respect to N.

ysth
"O(1) is not necessarily constant; rather, it is not dependent on the variables under consideration" that's probably the best truism I've heard in a while.
Timothy Khouri
Yet it is, strictly speaking, wrong. O(1) means *bounded*, not constant. The dependency is still there, and you are only assured that T(n) < M for some (maybe big) constant M. T(n) may also be strictly increasing with n and still O(1).
Federico Ramponi
@Federico Ramponi: that's what I was trying to say in less technical terms. "strictly increasing" is practically speaking irrelevant if it's bounded.
ysth
@Timothy Khouri: The point ysth is making is to be careful about specifying which variables are "under consideration". One common interpretation of "constant" is "constant w.r.t. all possible variables" which would include, among other things, clock speed. Clearly that's something we would want to *exclude* here.
j_random_hacker
+11  A: 

I can see what you're saying, but I think there are a couple of basic assumptions underlying the claim that look-ups in a Hash table have a complexity of O(1).

  • The hash function is reasonably designed to avoid a large number of collisions.
  • The set of keys is pretty much randomly distributed, or at least not purposely designed to make the hash function perform poorly.

The worst case complexity of a Hash table look-up is O(n), but that's extremely unlikely given the above 2 assumptions.
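
To see how badly things degrade when those assumptions are violated, here is an illustrative Python sketch (the class name and sizes are invented for the example) in which every key hashes to the same value, so every look-up has to walk one long collision sequence and the cost grows with n rather than staying constant:

    class BadKey:
        """A key with a constant hash: every instance collides with every other."""
        def __init__(self, value):
            self.value = value
        def __hash__(self):
            return 1                                   # degenerate hash, as in the comment below
        def __eq__(self, other):
            return isinstance(other, BadKey) and self.value == other.value

    table = {BadKey(i): i for i in range(2_000)}
    # This look-up must compare against many colliding entries: roughly O(n) instead of O(1).
    print(table[BadKey(1_999)])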

Bill the Lizard
That would only occur in the case where you had a *really* awful hash function (i.e., function hash( $i ){ return 1; }) and all keys resolved to the same hash, or you set the 'grow' threshold at "when there's only 1 free space" to grow by only 1 free space. Nobody does this.
Kent Fredric
The hash function you describe is the worst case. The best case is a perfect hash function that never collides. Reality is somewhere in between, but closer to perfection than the worst case, which is why we can say O(1).
Bill the Lizard
I hear the claim, but I don't hear the justification for it. It is *not* why you can say O(1) if you are talking complexity. It may be why people say O(1) as an off-hand claim. How often does anyone test the claim to make sure the performance is truly acceptable?
orcmid
It's pretty easy to test. You only have to test your hash functions to make sure there are very few collisions in your expected range of inputs. I do it for every object I'm going to store in a hash.
Bill the Lizard
Not "usually" being worst case O(n) doesn't make it O(1). The algorithm should be measured by it's performance's upper bound, i.e. worst case. Unless you want to define a few extra rules for the hash implementation, but even then i say it's only O(n) for small n.
Stroboskop
Ech, the last one should read O(1) of course.
Stroboskop
There are hashing algorithms that guarantee better-than-linear worst case. Cuckoo hashing is O(1) for lookups. Heck, even using a balanced binary tree instead of a linked list for collisions would guarantee O(log N), but I don't know if anyone actually does that.
Tom
If you're only allowed to use the upper bound, then Quicksort is O(n^2). Technically true, perhaps, but not nearly as useful as saying it's O(n log n) with a worst case of O(n^2). Hash tables encounter their worst case even less frequently than quicksort, so it's even less of an issue.
Nick Johnson
+2  A: 

In general, I think people use them comparatively without regard to exactness. For example, hash-based data structures are O(1) (average) look-up if designed well and you have a good hash. If everything hashes to a single bucket, then it's O(n). Generally, though, one uses a good algorithm and the keys are reasonably distributed, so it's convenient to talk about it as O(1) without all the qualifications. Likewise with lists, trees, etc. We have in mind certain implementations and it's simply more convenient to talk about them, when discussing generalities, without the qualifications. If, on the other hand, we're discussing specific implementations, then it probably pays to be more precise.

tvanfosson
A: 
Kent Fredric
Kent, an average of 1.5 lookups *is* O(1), because the number of lookups isn't varying with the n in question (table size). There is no O(2).
Barry Kelly
I clarified that Barry, sorry, trying to use "laypersonthink" too much
Kent Fredric
Basically, all O(1) signifies is any member of the set f(n)=k (or the set of all functions f_k(n)=k). k may be neither small nor very constant. With all the talk about perfect hashes, I don't see people constructing them or even determining if the situation allows one.
orcmid
@orcmid, I think people are realists, realise they are implausible, and don't try. The most perfect hash is the data itself, and that's hardly optimal.
Kent Fredric
@Kent - perfect hashes can be quite useful. There are many cases where data is static, but must be looked up by an arbitrary key. In these cases, you can rehash/resize after loading all data until you have few/no collisions.
Tom
+6  A: 

A hashtable is a data structure that supports O(1) search and insertion.

A hashtable usually holds key and value pairs, where the key is used as the parameter to a function (a hash function) which will determine the location of the value in its internal data structure, usually an array.

As insertion and search only depend upon the result of the hash function and not on the size of the hashtable or the number of elements stored, a hashtable has O(1) insertion and search.

There is one caveat, however. As the hashtable becomes more and more full, there will be hash collisions, where the hash function returns the location of an array element which is already occupied. This will necessitate a collision resolution in order to find another empty element.

When a hash collision occurs, a search or insertion cannot be performed in O(1) time. However, good collision resolution algorithms can reduce the number of tries needed to find another suitable empty spot, or increasing the hashtable size can reduce the number of collisions in the first place.

So, in theory, only a hashtable backed by an array with an infinite number of elements and a perfect hash function would be able to achieve O(1) performance, as that is the only way to avoid hash collisions that drive up the number of required operations. Therefore, any finite-sized array will at one time or another perform worse than O(1) due to hash collisions.


Let's take a look at an example. Let's use a hashtable to store the following (key, value) pairs:

  • (Name, Bob)
  • (Occupation, Student)
  • (Location, Earth)

We will implement the hashtable back-end with an array of 100 elements.

The key will be used to determine an element of the array to store the (key, value) pair. In order to determine the element, the hash_function will be used:

  • hash_function("Name") returns 18
  • hash_function("Occupation") returns 32
  • hash_function("Location") returns 74.

From the above result, we'll assign the (key, value) pairs into the elements of the array.

array[18] = ("Name", "Bob")
array[32] = ("Occupation", "Student")
array[74] = ("Location", "Earth")

The insertion only requires the use of a hash function, and does not depend on the size of the hashtable nor its elements, so it can be performed in O(1) time.

Similarly, searching for an element uses the hash function.

If we want to look up the key "Name", we'll perform a hash_function("Name") to find out which element in the array the desired value resides.

Also, searching does not depend on the size of the hashtable nor the number of elements stored, therefore an O(1) operation.

All is well. Let's try to add an additional entry of ("Pet", "Dog"). However, there is a problem, as hash_function("Pet") returns 18, which is the same hash for the "Name" key.

Therefore, we'll need to resolve this hash collision. Let's suppose that the hash collision resolving function we used found that the new empty element is 29:

array[29] = ("Pet", "Dog")

Since there was a hash collision in this insertion, our performance was not quite O(1).

This problem will also crop up when we try to search for the "Pet" key, as trying to find the element containing the "Pet" key by performing hash_function("Pet") will always return 18 initially.

Once we look up element 18, we'll find the key "Name" rather than "Pet". When we find this inconsistency, we'll need to resolve the collision in order to retrieve the correct element which contains the actual "Pet" key. Resolving a hash collision is an additional operation which makes the hashtable not perform at O(1) time.
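
A minimal Python sketch of the scheme just described (keys stored next to their values so collisions can be detected, with simple linear probing as the collision-resolution strategy; resizing and deletion are omitted, and the specific hash values from the narrative, such as 18, are not reproduced):

    class ToyHashTable:
        """Fixed-capacity table with linear probing, for illustration only."""
        def __init__(self, capacity=100):
            self.slots = [None] * capacity             # each slot holds a (key, value) pair or None

        def _index(self, key):
            return hash(key) % len(self.slots)

        def put(self, key, value):
            i = self._index(key)
            while self.slots[i] is not None and self.slots[i][0] != key:
                i = (i + 1) % len(self.slots)          # collision: probe the next slot
            self.slots[i] = (key, value)

        def get(self, key):
            i = self._index(key)
            while self.slots[i] is not None:
                if self.slots[i][0] == key:            # the stored key confirms this is the right entry
                    return self.slots[i][1]
                i = (i + 1) % len(self.slots)          # collision: keep probing
            raise KeyError(key)

    table = ToyHashTable()
    for k, v in [("Name", "Bob"), ("Occupation", "Student"), ("Location", "Earth"), ("Pet", "Dog")]:
        table.put(k, v)
    print(table.get("Pet"))                            # may take more than one probe if "Pet" collided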

coobird
On first look, I liked this as an explanation, except that the caveat was at the end. Before you start making those bold-faced claims, I wish the caveat was up front. On second look, the example using hash_function is bogus: no stored keys to detect collision, inadequate treatment of name+value.
orcmid
The example strikes me as so misleading that I will vote this answer down unless you repair it or it provokes enough comments to be a useful object of discussion. I don't want to down-vote it.
orcmid
I made some changes to the answer on your comments about the caveats. Hopefully this is better than the previous version.
coobird
Progress. You're not showing how the keys are stored as well so that you can tell you have a collision. Also, you might as well assume linear overflow to keep things simple. Otherwise we don't know where the second hash came from for the collision.
orcmid
Ah, thank you for pointing out the error. That was a grave oversight!
coobird
OK, looking good. You might say something about how you overflowed from 18 to 29, or simply use linear overflow and use 19. That makes it easier to describe and you could point out that there are more-complex overflow techniques. Thanks for playing along. The narrative is much improved.
orcmid
Of course, if you are using a standard library implementation of hash table, it will auto-expand such that collisions are very rare unless you go out of your way to pick data that has the same hash code.
Kip
+16  A: 

O(1) means constant time and (typically) fixed space

Just to clarify these are two separate statements. You can have O(1) in time but O(n) in space or whatever.

Is it recognized that even O(1) can be undesirably large, even though near-constant?

O(1) can be impractically HUGE and it's still O(1). It is often neglected that if you know you'll have a very small data set, the constant is more important than the complexity, and for reasonably small data sets, it's a balance of the two. An O(n!) algorithm can out-perform an O(1) one if the constants and sizes of the data sets are of the appropriate scale.

O() notation is a measure of the complexity - not the time an algorithm will take, or a pure measure of how "good" a given algorithm is for a given purpose.
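
A toy arithmetic check of the point above about constants (the step counts are invented purely for illustration): an "O(1)" algorithm with a fixed cost of 1,000,000 steps loses to an O(n!) one until n reaches about 10.

    from math import factorial

    CONSTANT_COST = 1_000_000                # an "O(1)" algorithm with a huge constant
    for n in (8, 9, 10):
        print(n, factorial(n), factorial(n) < CONSTANT_COST)
    # 8! = 40,320 and 9! = 362,880 are still cheaper than the constant; 10! = 3,628,800 is not.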

Draemon
Hmm, I suppose that is true about the space, if we assume n is the number of items in the collection. Actually, it is probably not useful to mention space in this case. Is it worth adjusting the question to clean that up?
orcmid
You put your finger on another aspect, and that is knowing what the relevant parameter variable is. We say O(1) which suggests f(n)=k, but we don't say what n is. I wonder what the tacit assumption is, if any.
orcmid
Well, actually, I did say (typically) space. I'll leave it at that.
orcmid
Re your 1st comment - I just wanted to clarify that there's no reason for space and time complexity to be the same. No need to change the Q unless you want to. It's worth noting that it's often a trade-off between time and space.
Draemon
More important is to realise that each operation on an ADS can have different time and space complexities (insertion vs lookup), and there's the space taken up by the ADS itself. Usually we talk about the time complexity of an algorithm/operation and the space complexity of the ADS.
Draemon
RE your second comment - n is usually obvious from the context. Most data structures hold a number of identically typed entries, and n is the number of entries. You can even have more than one variable, though that's rare.
Draemon
Pedantic comment: You can have O(N) time with O(1) space. You can't have O(1) time with O(N) space (well.. unless you allocate a spare buffer just for kicks). Thus, if an algorithm takes O(1) time, it must take O(1) space as well.
Tom
@Tom - I take your point, and it's probably a bad example, but it's not impossible. A simple memory allocation algorithm could well be O(1) in time and O(N) in space, and I'm sure there are other (possibly contrived) examples. My point was just that they aren't necessarily the same.
Draemon
Even the best memory allocation algorithm is likely to be O(n) or at least O(log n) - there's overhead in updating the page tables.
Nick Johnson
Sure, but this is theoretical. As I say - bad example. I suspect some lightweight/embedded systems do have O(1) though.
Draemon
+2  A: 

I can't speak to the other discussions you've seen, but there is at least one hashing algorithm that is guaranteed to be O(1).

Cuckoo hashing maintains an invariant so that there is no chaining in the hash table. Insertion is amortized O(1), retrieval is always O(1). I've never seen an implementation of it, it's something that was newly discovered when I was in college. For relatively static data sets, it should be a very good O(1), since it calculates two hash functions, performs two lookups, and immediately knows the answer.

Mind you, this is assuming the hash calculation is O(1) as well. You could argue that for length-K strings, any hash is minimally O(K). In reality, you can bound K pretty easily, say K < 1000. O(K) ~= O(1) for K < 1000.
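
A rough Python sketch of the idea (the two hash functions here are crude stand-ins derived from Python's built-in hash; a real cuckoo table needs genuinely independent hash functions and rehashes when a displacement cycle occurs):

    class CuckooTable:
        """Illustration only: a look-up probes exactly two slots, so it is O(1) worst case."""
        def __init__(self, capacity=101):
            self.t1 = [None] * capacity
            self.t2 = [None] * capacity

        def _h1(self, key):
            return hash(key) % len(self.t1)

        def _h2(self, key):
            return (hash(key) // len(self.t1)) % len(self.t2)

        def get(self, key):
            for table, i in ((self.t1, self._h1(key)), (self.t2, self._h2(key))):
                entry = table[i]
                if entry is not None and entry[0] == key:
                    return entry[1]
            raise KeyError(key)                        # the invariant says it can be nowhere else

        def put(self, key, value, max_kicks=32):
            entry = (key, value)
            for _ in range(max_kicks):
                i = self._h1(entry[0])
                entry, self.t1[i] = self.t1[i], entry  # claim the slot, evicting any occupant
                if entry is None:
                    return
                j = self._h2(entry[0])
                entry, self.t2[j] = self.t2[j], entry  # move the evicted entry to its other slot
                if entry is None:
                    return
            raise RuntimeError("displacement cycle: a real implementation rehashes here")

    t = CuckooTable()
    for word in ["alpha", "beta", "gamma"]:
        t.put(word, len(word))
    print(t.get("gamma"))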

Tom
Thanks, that's interesting. I've not seen anyone recommend a Cuckoo hashing function, or suggest that it is built into a dictionary framework. Do you know where it was published? It would be interesting to see what it takes to get reasonable O(1) and what the ideal conditions are.
orcmid
http://www.itu.dk/people/pagh/papers/cuckoo-jour.pdf is the original. Wikipedia has a good description as well at http://en.wikipedia.org/wiki/Cuckoo_hashing
Tom
That's a beautiful paper. I just scanned it quickly and looked over the careful analysis and the comparisons. The tradeoff seems to be around the cost of running two hashes, including for the displacement problem. The wikipedia article is useful too. I must find a way to try this someday.
orcmid
+2  A: 

There may be a conceptual error as to how you're understanding Big-Oh notation. What it means is that, given an algorithm and an input data set, the upper bound for the algorithm's run time depends on the value of the O-function when the size of the data set tends to infinity.

When one says that an algorithm takes O(n) time, it means that the runtime for an algorithm's worst case depends linearly on the size of the input set.

When an algorithm takes O(1) time, the only thing it means is that, given a function T(n) which gives the runtime of the algorithm on an input of size n, there exists a positive constant k such that T(n) < k for any input n. Essentially, it means that the upper bound for the run time of an algorithm is not dependent on the input size, and has a fixed, finite limit.
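
Writing the same definitions out symbolically (an equivalent restatement, with n the input size and T(n) the worst-case runtime):

    T(n) ∈ O(1)  ⇔  ∃ k > 0 such that T(n) ≤ k for all n
    T(n) ∈ O(n)  ⇔  ∃ k > 0 such that T(n) ≤ k·n for all sufficiently large n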

Now, that does not mean in any way that the limit is small, just that it's independent of the size of the input set. So if I artificially define a bound k for the size of a data set, then its complexity will be O(k) == O(1).

For example, searching for an instance of a value on a linked list is an O(n) operation. But if I say that a list has at most 8 elements, then O(n) becomes O(8) becomes O(1).

In this case, if we used a trie data structure as a dictionary (a tree of characters, where the leaf node contains the value for the string used as key), and if the key is bounded, then its lookup time can be considered O(1) (if I define a character field as having at most k characters in length, which can be a reasonable assumption for many cases).
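
A sketch of that trie idea in Python (names are illustrative; the point is that a look-up costs one step per character of the key, so once the key length is bounded by k, the look-up is bounded by a constant no matter how many entries are stored):

    class TrieNode:
        def __init__(self):
            self.children = {}                          # character -> TrieNode
            self.value = None
            self.has_value = False

    class Trie:
        def __init__(self):
            self.root = TrieNode()

        def insert(self, key, value):
            node = self.root
            for ch in key:                              # one step per character: O(len(key))
                node = node.children.setdefault(ch, TrieNode())
            node.value, node.has_value = value, True

        def lookup(self, key):
            node = self.root
            for ch in key:                              # again O(len(key)), independent of entry count
                node = node.children.get(ch)
                if node is None:
                    raise KeyError(key)
            if not node.has_value:
                raise KeyError(key)
            return node.value

    t = Trie()
    t.insert("Name", "Bob")
    print(t.lookup("Name"))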

For a hash table, as long as you assume that the hashing function is good (randomly distributed) and sufficiently sparse so as to minimize collisions, and rehashing is performed when the data structure is sufficiently dense, you can indeed consider it an O(1) access-time structure.

In conclusion, O(1) time may be overrated for a lot of things. For large data structures the complexity of an adequate hash function may not be trivial, and sufficient corner cases exist where the number of collisions leads it to behave like an O(n) data structure, and rehashing may become prohibitively expensive. In such cases, an O(log n) structure like an AVL tree or a B-tree may be a superior alternative.

Daishiman
No fair fixing n to a constant and then claiming O(1). You can do that to all computations of any complexity. Sorry, take a flag on that play. On the other hand, your discussion points out how much it matters what the controlling variable is. I agree that the size of the dataset appeals as the simplistic n.
orcmid
+40  A: 

The problem is that people are really sloppy with terminology. There are 3 important but distinct classes here:

O(1) worst-case

This is simple - all operations take no more than a constant amount of time in the worst case, and therefore in all cases. Accessing an element of an array is O(1) worst-case.

O(1) amortized worst-case

Amortized means that not every operation is O(1) in the worst case, but for any sequence of N operations, the total cost of the sequence is O(N) in the worst case. This means that even though we can't bound the cost of any single operation by a constant, there will always be enough "quick" operations to make up for the "slow" operations such that the running time of the sequence of operations is linear in the number of operations.

For example, the standard Dynamic Array which doubles its capacity when it fills up requires O(1) amortized time to insert an element at the end, even though some insertions require O(N) time - there are always enough O(1) insertions that inserting N items always takes O(N) time total.
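
A minimal sketch of that doubling scheme (Python's built-in list already over-allocates to get the same amortized effect; the class below only makes the occasional O(N) copy explicit):

    class DynamicArray:
        def __init__(self):
            self.capacity = 1
            self.size = 0
            self.data = [None] * self.capacity

        def append(self, item):
            if self.size == self.capacity:             # the rare, expensive step: an O(N) copy
                self.capacity *= 2
                new_data = [None] * self.capacity
                new_data[:self.size] = self.data
                self.data = new_data
            self.data[self.size] = item                # the common step: O(1)
            self.size += 1

    # Appending N items does N cheap writes plus copies of sizes 1 + 2 + 4 + ... < 2N elements,
    # so the whole sequence is O(N) and each append is O(1) amortized.
    arr = DynamicArray()
    for i in range(1000):
        arr.append(i)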

O(1) average-case

This one is the trickiest. There are two possible definitions of average-case: one for randomized algorithms with fixed inputs, and one for deterministic algorithms with randomized inputs.

For randomized algorithms with fixed inputs, we can calculate the average-case running time for any given input by analyzing the algorithm and determining the probability distribution of all possible running times and taking the average over that distribution (depending on the algorithm, this may or may not be possible due to the Halting Problem).

In the other case, we need a probability distribution over the inputs. For example, if we were to measure a sorting algorithm, one such probability distribution would be the distribution that has all N! possible permutations of the input equally likely. Then, the average-case running time is the average running time over all possible inputs, weighted by the probability of each input.

Since the subject of this question is hash tables, which are deterministic, I'm going to focus on the second definition of average-case. Now, we can't always determine the probability distribution of the inputs because, well, we could be hashing just about anything, and those items could be coming from a user typing them in or from a file system. Therefore, when talking about hash tables, most people just assume that the inputs are well-behaved and the hash function is well behaved such that the hash value of any input is essentially randomly distributed uniformly over the range of possible hash values.

Take a moment and let that last point sink in - the O(1) average-case performance for hash tables comes from assuming all hash values are uniformly distributed. If this assumption is violated (which it usually isn't, but it certainly can and does happen), the running time is no longer O(1) on average.

See also Denial of Service by Algorithmic Complexity. In this paper, the authors discuss how they exploited some weaknesses in the default hash functions used by two versions of Perl to generate large numbers of strings with hash collisions. Armed with this list of strings, they generated a denial-of-service attack on some webservers by feeding them these strings that resulted in the worst-case O(N) behavior in the hash tables used by the webservers.

Adam Rosenfield
Interesting look at the question. Not sure why you have this very large blue/link area. I think there is something off in your presentation of the amortized case. You mean it is *not* O(n) in the worst case? How about O(log n)? Maybe we mean amortized cost is still bounded above by a constant?
orcmid
I am nervous about the mixing of O(1) and a statistical effort to identify average behavior. We might be able to talk about the asymptotic behavior of the average in big-O terms. I've seen situations where neither keys nor hashes were uniformly distributed and "usually" is no help if you have one.
orcmid
So the moral of the story is: use large bold text if you want upvotes. I kid, I kid. :)
Kip
No, the moral of the story is have the top-ranking answer on a post linked to from the SO blog =)
Adam Rosenfield
Nice explanation, +1. One nitpick though, for the "O(1) amortized worst-case" I think there is an additional constraint on N, namely that it be sufficiently large. If we are free to choose any value for N, we could just choose N=1 and this would imply "O(1) amortized" = "O(1)".
j_random_hacker
A: 

O(1) means, exactly, that the algorithm's time complexity is bounded by a fixed value. This doesn't mean it's constant, only that it is bounded regardless of input values. Strictly speaking, many allegedly O(1) time algorithms are not actually O(1) and just go so slowly that they are bounded for all practical input values.

Wedge
+1  A: 

Yes, garbage collection does affect the asymptotic complexity of algorithms running in the garbage collected arena. It is not without cost, but it is very hard to analyze without empirical methods, because the interaction costs are not compositional.

The time spent garbage collecting depends on the algorithm being used. Typically modern garbage collectors toggle modes as memory fills up to keep these costs under control. For instance, a common approach is to use a Cheney style copy collector when memory pressure is low, because it pays cost proportional to the size of the live set in exchange for using more space, and to switch to a mark and sweep collector when memory pressure becomes greater, because even though it pays cost proportional to the live set for marking and to the whole heap or dead set for sweeping, it avoids the space overhead of copying. By the time you add card-marking and other optimizations, etc., the worst case costs for a practical garbage collector may actually be a fair bit worse, picking up an extra logarithmic factor for some usage patterns.

So, if you allocate a big hash table, even if you access it using O(1) searches for all time during its lifetime, if you do so in a garbage collected environment, occasionally the garbage collector will traverse the entire array, because it is size O(n) and you will pay that cost periodically during collection.

The reason we usually leave it off of the complexity analysis of algorithms is that garbage collection interacts with your algorithm in non-trivial ways. How bad of a cost it is depends a lot on what else you are doing in the same process, so the analysis is not compositional.

Moreover, above and beyond the copy vs. compact vs. mark and sweep issue, the implementation details can drastically affect the resulting complexities:

  1. Incremental garbage collectors that track dirty bits, etc. can all but make those larger re-traversals disappear.
  2. It depends on whether your GC works periodically based on wall-clock time or runs proportional to the number of allocations.
  3. Whether a mark and sweep style algorithm is concurrent or stop-the-world
  4. Whether it marks fresh allocations black or leaves them white until it drops them into a black container.
  5. Whether your language admits modification of pointers, which can let some garbage collectors work in a single pass.

Finally, when discussing an algorithm, we are discussing a straw man. The asymptotics will never fully incorporate all of the variables of your environment. Rarely do you ever implement every detail of a data structure as designed. You borrow a feature here and there, you drop a hash table in because you need fast unordered key access, you use a union-find over disjoint sets with path compression and union by rank to merge memory regions over there because you can't afford to pay a cost proportional to the size of the regions when you merge them, or what have you. These structures are thought of as primitives, and the asymptotics help you when planning overall performance characteristics for the structure 'in-the-large', but knowledge of what the constants are matters too.

You can implement that hash table with perfectly O(1) asymptotic characteristics, just don't use garbage collection; map it into memory from a file and manage it yourself. You probably won't like the constants involved though.

Edward Kmett