views: 369
answers: 4
I've got a huge (>>10m) list of entries. Each entry offers two hash functions:

  • Cheap: computes the hash quickly, but its distribution is terrible (it may put 99% of the items into 1% of the hash space)
  • Expensive: takes a lot of time to compute, but its distribution is much better

An ordinary Dictionary lets me use only one of these hash functions. I'd like a Dictionary that uses the cheap hash function first, and checks the expensive one on collisions.

It seems like a good idea to use a dictionary inside a dictionary for this. I currently basically use this monstrosity:

Dictionary<int, Dictionary<int, List<Foo>>>;

I improved this design so the expensive hash gets called only if there are actually two items of the same cheap hash.
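
To make the idea concrete, here is a simplified sketch of what I mean (not my exact code; I'm assuming for illustration that Foo exposes its two hash functions as CheapHash() and ExpensiveHash()):

using System.Collections.Generic;

// Simplified sketch: defer the expensive hash until a cheap-hash bucket
// actually holds more than one item. CheapHash/ExpensiveHash are stand-ins
// for the two hash functions each entry offers.
class TwoLevelIndex
{
    // Outer key: cheap hash. Inner key: expensive hash.
    private readonly Dictionary<int, Dictionary<int, List<Foo>>> buckets =
        new Dictionary<int, Dictionary<int, List<Foo>>>();

    // Entries that are alone under their cheap hash; their expensive hash
    // has not been computed yet.
    private readonly Dictionary<int, Foo> singletons = new Dictionary<int, Foo>();

    public void Add(Foo item)
    {
        int cheap = item.CheapHash();

        Dictionary<int, List<Foo>> byExpensive;
        if (buckets.TryGetValue(cheap, out byExpensive))
        {
            // This bucket already has collisions, so the expensive hash is unavoidable.
            Insert(byExpensive, item.ExpensiveHash(), item);
            return;
        }

        Foo existing;
        if (singletons.TryGetValue(cheap, out existing))
        {
            // Second entry with this cheap hash: promote the singleton and pay
            // the expensive hash for both entries now.
            singletons.Remove(cheap);
            byExpensive = new Dictionary<int, List<Foo>>();
            buckets[cheap] = byExpensive;
            Insert(byExpensive, existing.ExpensiveHash(), existing);
            Insert(byExpensive, item.ExpensiveHash(), item);
            return;
        }

        // First entry with this cheap hash: no expensive hash needed yet.
        singletons[cheap] = item;
    }

    private static void Insert(Dictionary<int, List<Foo>> byExpensive, int key, Foo item)
    {
        List<Foo> list;
        if (!byExpensive.TryGetValue(key, out list))
        {
            list = new List<Foo>();
            byExpensive[key] = list;
        }
        list.Add(item);
    }
}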

It fits perfectly and does a flawless job for me, but it looks like something that should have died 65 million years ago.

To my knowledge, this functionality is not included in the basic framework. I am about to write a DoubleHashedDictionary class, but I wanted to know your opinion first.

As for my specific case:
First hash function = number of files in a file system directory (fast)
Second hash function = sum of the sizes of those files (slow)

Edits:

  • Changed the title and added more information.
  • Added a quite important missing detail.
+1  A: 

Have you taken a look at the Power Collections or C5 Collections libraries? The Power Collections library hasn't had much action recently, but the C5 stuff seems to be fairly up to date.

I'm not sure whether either library has exactly what you need, but they're both useful and open source, so one of them may provide a decent base implementation for you to extend with your desired functionality.

Brian Hasden
I checked both of them, but it does not seem like they implement anything I could use here.
mafutrct
+1  A: 

In your case you are technically using a modified (A|B) function, not true double hashing. However, depending on how huge your "huge" list of entries is and on the characteristics of your data, consider the following:

  • A 20% full hash table with a not-so-good distribution can have more than an 80% chance of collision. That puts your expected hash cost at roughly (0.8 × expensive + 0.2 × cheap) + (cost of lookups); a rough worked example follows this list. So if your table is more than about 20% full, the (A|B) scheme may not be worth it.

  • Coming up with a perfect hash function is possible, but it is O(n^3), which makes it impractical.

  • If performance is supremely important, you can build a hash table tuned to your data by benchmarking several hash functions against your actual key set.
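
To put rough numbers on the first point (the cost ratio and collision rate below are made-up assumptions, not measurements; plug in your own timings):

using System;

class CostSketch
{
    static void Main()
    {
        // Illustrative, assumed numbers.
        double cheap = 1.0;       // relative cost of the cheap hash
        double expensive = 50.0;  // relative cost of the expensive hash
        double collision = 0.8;   // chance the cheap hash alone is not enough

        // Expected cost per item of the (A|B) scheme, before the extra lookup cost.
        double twoLevel = collision * expensive + (1 - collision) * cheap;
        double alwaysExpensive = expensive;

        // Prints "40.2 vs 50": at an 80% collision rate the scheme saves only ~20%.
        Console.WriteLine(twoLevel + " vs " + alwaysExpensive);
    }
}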
MandoMando
The list may contain about 100k-100m items. The 20/80 case you describe may very well happen. Still, in my tests the two-hash scheme performed considerably better than always using the expensive hash.
mafutrct
Just saw your hash functions; have you tried a mixed scheme such as the number of files plus the sizes of the first n files? I'd imagine there is a small n that gives you the best bang for the buck. Another function that tends to perform well is multiplying the cheap hash by the sizes of the first n entries. Essentially you're adding a little salt to the function output. You'd be surprised how quickly the distribution goes your way.
MandoMando
Yes, I thought about using only the first n sizes. But sadly this approach requires the same order of files in all directories, and that can't be guaranteed. Something like "only the n biggest files" obviously fails as well.
mafutrct
Yes, it would. Could you mix in other attributes such as datetime, name, id, etc.? That would avoid disk operations (I'm assuming disk access costs far more than in-memory computation).
MandoMando
@fyjham has a good point. Nested dictionaries don't necessarily outperform a single flat one.
MandoMando
No, datetime etc. is not possible in my case. I'm comparing directories by comparing the content of all contained files.
mafutrct
+1  A: 

You're basically talking about a hash table of hash tables, each using a different GetHashCode implementation... while that's possible, I think you'd want to seriously consider whether you'll actually get a performance improvement over just using one or the other...

Will there actually be a substantial number of objects that can be located via the quick-hash mechanism alone, without having to resort to the more expensive hash to narrow things down further? If you can't locate a significant number purely off the first calculation, you really save nothing by doing it in two steps (not knowing the data, it's hard to predict whether this is the case).

If a significant number can be located in one step, you'll probably have to do a fair bit of tuning to work out how many records to store at each hash location of the outer table before resorting to the inner "expensive" hashtable lookup rather than the normal treatment of hashed data. Under certain circumstances I can see how you'd get a performance gain from this (the circumstances would be few and far between, but they aren't inconceivable).

Edit

I just saw your amendment to the question - you plan to do both lookups regardless... I doubt you'll get any performance benefit from this that you couldn't get just by configuring the main hash table a bit better. Have you tried a single dictionary with an appropriate capacity passed to the constructor and perhaps an XOR of the two hash codes as your hash code?
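
Something along these lines is what I mean (a sketch only; Foo is the entry type from your question, CheapHash/ExpensiveHash stand in for your two hash functions, and Bar/expectedCount are placeholders):

using System.Collections.Generic;

// Sketch of the single-table alternative: one Dictionary sized up front,
// with a comparer that folds both hash codes into one.
class CombinedHashComparer : IEqualityComparer<Foo>
{
    public bool Equals(Foo x, Foo y)
    {
        // Replace with whatever equality actually means for your entries.
        return ReferenceEquals(x, y);
    }

    public int GetHashCode(Foo item)
    {
        // Both hashes are computed on every insert and lookup here; this only
        // pays off if the flat table beats the nested one overall.
        return item.CheapHash() ^ item.ExpensiveHash();
    }
}

// Usage, with a capacity hint so the table doesn't rehash while filling:
// var map = new Dictionary<Foo, Bar>(expectedCount, new CombinedHashComparer());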

Tim Schneider
The second hash completely overrides the first, so I just used the dictionary with a single expensive function (instead of XOR). But it turned out to be slower.
mafutrct
+1  A: 

First off, I think you're on the right path in implementing your own hash table, if what you are describing is truly what you need. But to play critic, I'd like to ask a few questions:

Have you considered using something more unique for each entry?

I am assuming that each entry holds information about a file system directory; have you considered using its full path as the key, perhaps prefixed with the computer name/IP address?

On the other hand, if you're using the number of files as the hash key, are those directories never going to change? If the hash key/result changes, you will never be able to find the entry again.

While on this topic, if the directory content/size is never going to change, can you store that value somewhere to save the time of actually calculating it?
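
The kind of caching I have in mind is just memoizing the expensive value per directory (a sketch; directoryId and computeExpensiveHash are placeholders for however you identify a directory and compute its expensive hash):

using System;
using System.Collections.Generic;

// Remember the expensive value per directory so it is computed at most once.
class ExpensiveHashCache
{
    private readonly Dictionary<string, int> cache = new Dictionary<string, int>();

    public int GetOrCompute(string directoryId, Func<string, int> computeExpensiveHash)
    {
        int value;
        if (!cache.TryGetValue(directoryId, out value))
        {
            value = computeExpensiveHash(directoryId);  // slow: touches the disk
            cache[directoryId] = value;
        }
        return value;
    }
}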

Just my few cents.

Bill Yang
In my case, using the path or similar cheap parts of the DirectoryInfo is impossible, sadly. --- I more or less took care of handling changes in the file system, so that is not much of an issue. --- Yes, I am already caching heavily. I think the calculations would otherwise take years. :)
mafutrct
Accepting this answer as it is closest to what I did - implementing a new, fully featured dictionary class with two hash functions.
mafutrct