views: 268
answers: 3

It seems to be common knowledge that hash tables can achieve O(1) lookup, but that has never made sense to me. Can someone please explain it?

A. The value is an int smaller than the size of the hash table, so the value is its own hash and there is no real hash table. But if there were one, it would be O(1) and would still be inefficient.

B. You have to calculate the hash, so the order is O(n) for the size of the data being looked up. The lookup might be O(1) after you do O(n) work, but that still comes out to O(n) in my eyes.

And unless you have a perfect hash or a large hash table, there are probably several items per bucket, so it devolves into a small linear search at some point anyway.

I think hash tables are awesome, but I do not get the O(1) designation unless it is just supposed to be theoretical.

Here is the wiki for hashing, and throughout the article it references constant lookup time and totally ignores the cost of the hash function. Is that really a fair measure?

Edit:

To summarize what I learned:

It is technically true because the hash function is not required to use all the information in the key and so could be constant time, and because a large enough table can bring collisions down to near constant time.

It is true in practice because over time it just works out as long as the hash function and table size are chosen to minimize collisions, even though that often means not using a constant time hash function.

+1  A: 

The hash is a fixed size - looking up the appropriate hash bucket is a fixed-cost operation. This means that it is O(1).

Calculating the hash does not have to be a particularly expensive operation - we're not talking cryptographic hash functions here. But that's by the by. The hash function calculation itself does not depend on the number n of elements; while it might depend on the size of the data in an element, this is not what n refers to. So the calculation of the hash does not depend on n and is also O(1).
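As a minimal sketch (the helper name here is hypothetical, not any particular library's API), the bucket step looks something like this - note that nothing in it depends on how many entries the table holds:

```java
// A minimal sketch: reducing a key's hash to a bucket index is a
// fixed-cost operation, independent of the number of entries n.
static int bucketIndex(Object key, int numBuckets) {
    int h = key.hashCode();               // cost depends on the key, never on n
    return (h & 0x7fffffff) % numBuckets; // clear the sign bit, then reduce
}
```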

David M
Looking up the hash bucket is O(1), but locating the right key within the bucket is an O(n) procedure, where n depends on the number of hash collisions.
Nick D
So of the 3 steps (calculate the hash, find the bucket, search the bucket), the middle step is constant? Searching the bucket is usually constant. Calculating the hash is usually several orders of magnitude cheaper than other means of finding the bucket. But does that really add up to constant time? In a naive substring search, you would say O(n*m) for the two lengths, so why is the length of the key disregarded here?
drawnonward
Finding a fixed-length key is O(n) only if the table is list-backed; a balanced-tree-backed hash table will be O(log(n)).
jk
+1  A: 

You have two variables here, m and n, where m is the length of the input and n is the number of items in the hash.

The O(1) lookup performance claim makes at least two assumptions:

  • Your objects can be equality compared in O(1) time.
  • There will be few hash collisions.

If your objects are variable size and an equality check requires looking at all bits, then performance will become O(m). The hash function, however, does not have to be O(m) - it can be O(1). Unlike a cryptographic hash, a hash function for use in a dictionary does not have to look at every bit in the input in order to calculate the hash. Implementations are free to look at only a fixed number of bits.
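As a hypothetical sketch (invented here, not any library's real function), such a hash might sample only a bounded number of characters, staying O(1) even for very long keys at the cost of more collisions between similar inputs:

```java
// A hypothetical O(1) string hash: it samples at most SAMPLES characters
// plus the length, so its cost is bounded no matter how long the key is.
static int sampledHash(String s) {
    final int SAMPLES = 8;                           // fixed bound on work done
    int h = s.length();                              // mix in the length for free
    int step = (s.length() + SAMPLES - 1) / SAMPLES; // ceiling division
    for (int i = 0; i < s.length(); i += step) {
        h = 31 * h + s.charAt(i);                    // classic multiplicative mix
    }
    return h;
}
```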

For sufficiently many items, the number of items becomes greater than the number of possible hashes, and then you will get collisions, causing performance to rise above O(1) - for example, O(n) for a simple linked-list traversal (or O(n*m) if both assumptions are false).

In practice, though, the O(1) claim, while technically false, is approximately true for many real-world situations - in particular, those situations where the above assumptions hold.
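To make the collision behaviour concrete, here is a toy separate-chaining sketch (all names hypothetical): a lookup computes one hash and then scans a single chain, so it is O(1) on average but degrades to O(n) if every key lands in the same bucket.

```java
import java.util.LinkedList;

// A toy separate-chaining hash table (hypothetical, for illustration only).
class ToyHashMap<K, V> {
    private static final class Entry<K, V> {
        final K key;
        V value;
        Entry(K key, V value) { this.key = key; this.value = value; }
    }

    private static final int BUCKETS = 16;

    @SuppressWarnings("unchecked")
    private final LinkedList<Entry<K, V>>[] table =
            (LinkedList<Entry<K, V>>[]) new LinkedList[BUCKETS];

    ToyHashMap() {
        for (int i = 0; i < BUCKETS; i++) table[i] = new LinkedList<>();
    }

    private LinkedList<Entry<K, V>> bucket(K key) {
        return table[(key.hashCode() & 0x7fffffff) % BUCKETS]; // O(1) step
    }

    void put(K key, V value) {
        for (Entry<K, V> e : bucket(key)) {
            if (e.key.equals(key)) { e.value = value; return; } // overwrite
        }
        bucket(key).add(new Entry<>(key, value));
    }

    V get(K key) {
        // One hash computation, then a scan of a single chain; the other
        // buckets are never touched. With a good hash the chain is O(1)
        // long on average; if every key collides, this is an O(n) scan.
        for (Entry<K, V> e : bucket(key)) {
            if (e.key.equals(key)) return e.value;
        }
        return null;
    }
}
```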

Mark Byers
As well as the above, if you are using immutable objects as your keys, e.g. Java Strings, then having calculated the hash once, you can remember it and not have to calculate it again. On the other hand, you can't usually rely on the hash to tell whether two keys are equal once you have found the right bucket, so for strings you need an O(m) traversal to find out whether they are equal.
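A sketch of that caching, using a hypothetical immutable key class - java.lang.String does essentially the same thing internally:

```java
// A hypothetical immutable key that remembers its hash, much as
// java.lang.String does (0 doubles as "not yet computed", as in String).
final class CachedKey {
    private final String data;
    private int hash; // cached hash code; 0 means not computed yet

    CachedKey(String data) { this.data = data; }

    @Override public int hashCode() {
        int h = hash;
        if (h == 0) {                       // pay the O(m) cost only once
            for (int i = 0; i < data.length(); i++) {
                h = 31 * h + data.charAt(i);
            }
            hash = h;
        }
        return h;
    }

    @Override public boolean equals(Object o) {
        // Equal hashes don't imply equal keys, so this O(m) comparison is
        // still needed once the right bucket has been found.
        return o instanceof CachedKey && data.equals(((CachedKey) o).data);
    }
}
```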
JeremyP
@JeremyP: Good point on the O(m) equality comparison. I missed that - updated post. Thanks!
Mark Byers
+2  A: 

> You have to calculate the hash, so the order is O(n) for the size of the data being looked up. The lookup might be O(1) after you do O(n) work, but that still comes out to O(n) in my eyes.

What? Hashing a single element takes constant time. Why would it be anything else? If you're inserting n elements, then yes, you have to compute n hashes, and that takes linear time... but to look an element up, you compute a single hash of what you're looking for and then find the appropriate bucket with it. You don't re-compute the hashes of everything that's already in the hash table.
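A small sketch to make this concrete, using a hypothetical CountingKey wrapper whose only job is to count hashCode calls: inserting n keys computes about n hashes, but one lookup computes exactly one more.

```java
import java.util.HashMap;
import java.util.Map;

// A hypothetical key that counts how often its hash is computed.
final class CountingKey {
    static int hashCalls = 0;
    final int id;

    CountingKey(int id) { this.id = id; }

    @Override public int hashCode() { hashCalls++; return id; }

    @Override public boolean equals(Object o) {
        return o instanceof CountingKey && id == ((CountingKey) o).id;
    }

    public static void main(String[] args) {
        Map<CountingKey, String> map = new HashMap<>();
        for (int i = 0; i < 1000; i++) {
            map.put(new CountingKey(i), "value" + i);    // one hash per insert
        }
        int afterInserts = hashCalls;                    // about 1000
        map.get(new CountingKey(42));                    // a single lookup...
        System.out.println(hashCalls - afterInserts);    // ...prints 1
    }
}
```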

> And unless you have a perfect hash or a large hash table, there are probably several items per bucket, so it devolves into a small linear search at some point anyway.

Not necessarily. The buckets don't have to be lists or arrays; they can be any container type, such as a balanced BST. That means O(log n) in the worst case. But this is why it's important to choose a good hashing function to avoid putting too many elements into one bucket. As KennyTM pointed out, on average you will still get O(1) time, even if occasionally you have to dig through a bucket.
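A sketch of tree-backed buckets (hypothetical names; since Java 8, java.util.HashMap does something similar, converting long chains into red-black trees):

```java
import java.util.TreeMap;

// A sketch of tree-backed buckets: even a bucket holding every key is
// searched in O(log n) rather than O(n). Assumes keys are Comparable.
class TreeBucketTable<K extends Comparable<K>, V> {
    private static final int BUCKETS = 16;

    @SuppressWarnings("unchecked")
    private final TreeMap<K, V>[] table = (TreeMap<K, V>[]) new TreeMap[BUCKETS];

    TreeBucketTable() {
        for (int i = 0; i < BUCKETS; i++) table[i] = new TreeMap<>();
    }

    void put(K key, V value) {
        table[(key.hashCode() & 0x7fffffff) % BUCKETS].put(key, value);
    }

    V get(K key) {
        // O(1) to find the bucket, then O(log n) within it at worst.
        return table[(key.hashCode() & 0x7fffffff) % BUCKETS].get(key);
    }
}
```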

The trade-off of hash tables is, of course, space complexity. You're trading space for time, which seems to be the usual case in computing science.


You mention using strings as keys in one of your other comments. You're concerned about the amount of time it takes to compute the hash of a string, because it consists of several chars? As someone else pointed out, you don't necessarily need to look at all the chars to compute the hash, although it might produce a better hash if you did. In that case, if there are on average m chars in your key, and you used all of them to compute your hash, then I suppose you're right that lookups would take O(m). If m >> n, then you might have a problem. You'd probably be better off with a BST in that case. Or choose a cheaper hashing function.

Mark
hash tables don't use BSTs. BSTs don't require hash values. Maps and Sets can be implemented as BSTs though.
Nick D
@Nick: Eh? No...BSTs don't require hash values... that's the point. We're assuming that at this point we already have a collision (same hash... or at least same bucket), so we need to look at something else to find the right element, i.e., the actual value.
Mark
Oh, I see your point. But I'm not sure that mixing BSTs and hashes is worth the trouble. Why not just use BSTs?
Nick D
I'm just saying that you *could*, to get rid of that `O(n)` for collisions. If you *are* expecting lots of collisions, then you're right, probably better off going with a BST in the first place.
Mark