views:

1116

answers:

6

If I have, say, 100 items that'll be stored in a dictionary, should I initialise it thus?

var myDictionary = new Dictionary<Key, Value>(100);

My understanding is that the .NET dictionary internally resizes itself when it reaches a given load, and that the load threshold is defined as a ratio of the capacity.

That would suggest that if 100 items were added to the above dictionary, then it would resize itself at some point while those items were being added. Resizing a dictionary is something I'd like to avoid, as it has a performance cost and is wasteful of memory.

The probability of hashing collisions is proportional to the load on a dictionary. Therefore, even if the dictionary does not resize itself (and uses all of its slots), performance must still degrade due to these collisions.

How should one best decide what capacity to initialise the dictionary to, assuming you know how many items will be inside the dictionary?

+4  A: 

I think you're over-complicating matters. If you know how many items will be in your dictionary, then by all means specify that on construction. This will help the dictionary to allocate the necessary space in its internal data structures to avoid reallocating and reshuffling data.

HTH, Kent

Kent Boogaart
By "expand" you mean double, correct?
StingyJack
@StingyJack: not necessarily. For implementation reasons, the dictionary class does not simply double its storage. Rather, space is allocated to accommodate a prime number of elements, because this makes collisions through the modulus operation much rarer.
Konrad Rudolph
I agree Kent. I should have tagged this question as 'academic'. Dictionaries are key (pun intentional) programming constructs and I like nutting out the trivia on such everyday things as this. My primary question is: does allocating *extra* space reduce collisions and increase performance?
Drew Noakes
A: 

Yes. Unlike Hashtable, which uses rehashing to resolve collisions, Dictionary uses chaining. So yes, it's fine to use the count. For a Hashtable you probably want count * (1/fillFactor).
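A sketch of what that sizing looks like in code (the 0.72f load factor here is just an illustrative value, not a recommendation):

```csharp
using System.Collections;
using System.Collections.Generic;

class SizingSketch
{
    static void Main()
    {
        int count = 100;

        // Dictionary resolves collisions by chaining, so sizing it
        // to the exact expected count is reasonable.
        var dict = new Dictionary<string, int>(count);

        // Hashtable resolves collisions within its bucket array, so it
        // takes a load factor at construction; it scales its internal
        // size up from the requested capacity accordingly.
        var table = new Hashtable(count, 0.72f);
    }
}
```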

Mehrdad Afshari
The distinction between rehashing and chaining is an interesting one to note. Thanks. In either case though, there's still some kind of collision resolution taking place that's going to have *some* impact on performance. Are you saying that this is less when chaining?
Drew Noakes
It's related to the average length of a chain which in turn is related to number of collisions.
Mitch Wheat
Nope, I'm not saying it's less; it depends. But when you use chaining, the storage space used by the links is not counted in the hash table itself, which reduces the need for more space when a collision takes place.
Mehrdad Afshari
+1  A: 

Specifying the initial capacity in the Dictionary constructor increases performance because there will be fewer resizes of the internal structures that store the dictionary values during ADD operations.

Considering that you specify an initial capacity of k in the Dictionary constructor, then:

  1. The Dictionary will reserve the amount of memory necessary to store k elements;
  2. QUERY performance against the dictionary is not affected, so it will be neither faster nor slower;
  3. ADD operations will not require more memory allocations (which can be expensive) and thus will be faster.

From MSDN:

The capacity of a Dictionary<TKey, TValue> is the number of elements that can be added to the Dictionary<TKey, TValue> before resizing is necessary. As elements are added to a Dictionary<TKey, TValue>, the capacity is automatically increased as required by reallocating the internal array.

If the size of the collection can be estimated, specifying the initial capacity eliminates the need to perform a number of resizing operations while adding elements to the Dictionary<TKey, TValue>.
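For example, a minimal sketch of pre-sizing (the element count here is arbitrary):

```csharp
using System;
using System.Collections.Generic;

class CapacitySketch
{
    static void Main()
    {
        const int expected = 100;

        // Passing the expected count up front means the internal
        // array is allocated once; no resizes occur during the Adds.
        var map = new Dictionary<int, string>(expected);

        for (int i = 0; i < expected; i++)
            map.Add(i, i.ToString());

        Console.WriteLine(map.Count); // 100
    }
}
```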

smink
I agree with the documentation :) Still, what I want to know is whether giving *extra* size will reduce the number of collision resolutions and hence improve performance at the cost of some additional memory wastage.
Drew Noakes
If you are talking about the performance of QUERIES against the dictionary: no, it will not be faster. The initial capacity k reserves the amount of memory necessary to store k elements. ADD operations will not require more memory allocations (which can be expensive) and thus will be faster.
smink
@smink, I don't quite agree with you here. The dictionary's lookup process looks in a 'bucket' based upon the hashcode. Multiple entries might prefer that bucket, but the first to be added gets it. Others are chained, meaning that lookup for those others is not as efficient as for the first.
Drew Noakes
@smink, furthermore, having a larger initial dictionary size would reduce the number of hashing collisions and therefore reduce the average chain length, improving lookup speeds (though potentially marginally).
Drew Noakes
+4  A: 

I did a quick test, probably not scientific, but if I set the size it took 1.2207780 seconds to add one million items, and 1.5024960 seconds if I didn't give the Dictionary a size... the difference seems negligible to me.

Here is my test code, maybe someone can do a more rigorous test but I doubt it matters.

static void Main(string[] args)
{
    DateTime start1 = DateTime.Now;
    var dict1 = new Dictionary<string, string>(1000000);

    for (int i = 0; i < 1000000; i++)
        dict1.Add(i.ToString(), i.ToString());

    DateTime stop1 = DateTime.Now;

    DateTime start2 = DateTime.Now;
    var dict2 = new Dictionary<string, string>();

    for (int i = 0; i < 1000000; i++)
        dict2.Add(i.ToString(), i.ToString());

    DateTime stop2 = DateTime.Now;

    Console.WriteLine("Time with size initialized: " + stop1.Subtract(start1) +
                      "\nTime without size initialized: " + stop2.Subtract(start2));
    Console.ReadLine();
}
jhunter
Interesting. For future reference, you should use the System.Diagnostics.Stopwatch class when measuring times such as these. DateTime.Now will only give you 15ms resolution, but Stopwatch gives something like 0.01ms resolution.
Drew Noakes
What I want to know is whether specifying a size of, say 2,000,000 and adding 1,000,000 is faster due to the reduced loading and therefore reduced chaining.
Drew Noakes
Ditto on using System.Diagnostics.Stopwatch as opposed to DateTime.Now
Mitch Wheat
A: 

The initial size is just a suggestion. For example, most hash tables like their size to be a prime number or a power of 2.

Jonathan Allen
A hashtable with a power of 2 size? Does it perform well?
Mehrdad Afshari
Primes sound better than powers of 2 to me. The .NET framework (mscorlib.dll v2.0.0.0) calls the internal method HashHelpers.GetPrime to find the next largest prime number after 'capacity'. It searches a cache of primes and performs a brute force search if the capacity is larger than 7,199,369 :)
Drew Noakes
+1  A: 

What you should initialize the dictionary capacity to depends on two factors: (1) the distribution of the GetHashCode function, and (2) how many items you have to insert.

Your hash function should either be randomly distributed, or it should be specially formulated for your set of inputs. Let's assume the first, but if you are interested in the second, look up perfect hash functions.

If you have 100 items to insert into the dictionary, a randomly distributed hash function, and a capacity of 100, then when you insert the ith item into the hash table you have an (i-1)/100 probability that it will collide with an existing item upon insertion. If you want to lower this probability of collision, increase the capacity. Doubling the expected capacity halves the chance of collision.
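To put a number on that, here is a hypothetical back-of-envelope calculation that simply sums the per-insert collision probabilities (i-1)/capacity under the uniform-hashing assumption above:

```csharp
using System;

class CollisionEstimate
{
    // Expected number of insert-time collisions when adding `items`
    // uniformly hashed keys into `capacity` buckets: sum of (i-1)/capacity.
    static double ExpectedCollisions(int items, int capacity)
    {
        double total = 0;
        for (int i = 1; i <= items; i++)
            total += (i - 1) / (double)capacity;
        return total;
    }

    static void Main()
    {
        Console.WriteLine(ExpectedCollisions(100, 100)); // 49.5
        Console.WriteLine(ExpectedCollisions(100, 200)); // 24.75 -- doubling capacity halves it
    }
}
```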

Furthermore, if you know how frequently you will access each item in the dictionary, you may want to insert the items in order of decreasing frequency, since the items you insert first will on average be faster to access.

hhawk