views: 303

answers: 2

+6  Q:

Fastest C++ map?

Correct me if I'm wrong, but std::map is an ordered map, and thus each time I insert a value the map sorts its items internally, which takes some time.

My application gets information regarding some items on a constant interval.

This app keeps a map which is defined like this:

::std::map<DWORD, myItem*>

At first, all items are considered "new" to the app. An "Item" object is allocated and added to this map, associating its id with a pointer to it.

When it's not a "new" item (just an update of an existing object), my app should find the object in the map using the given id and update it.

Most of the time I get updates.

My question is:
Is there any faster map implementation, or should I keep using this one?
Am I better off using unordered_map?

+17  A: 

Am I better off using unordered_map?

Possibly.

std::map provides consistent O(log n) performance because it needs to be implemented as a balanced tree. But std::unordered_map will be implemented as a hash table, which might give you O(1) performance (with a good hash function and a good distribution of keys across hash buckets), but it could be O(n) (everything lands in one hash bucket and the table devolves to a list). One would normally expect something in between these extremes.

So you can have reasonable performance (O(log n)) all the time, or you need to ensure everything lines up to get good performance with a hash.
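For the question's use case, trying the hash table is essentially a one-line change of container type. A minimal sketch, where the DWORD typedef stands in for the Windows one and the onInfo() handler is a hypothetical name for the app's update path:

#include <cstdint>
#include <unordered_map>

using DWORD = std::uint32_t;               // stand-in for the Windows typedef
struct myItem { /* id, payload, ... */ };

std::unordered_map<DWORD, myItem*> items;  // drop-in for ::std::map<DWORD, myItem*>

// Hypothetical handler for one incoming record:
// operator[] creates a null slot for an id seen for the first time.
void onInfo(DWORD id) {
    myItem *&slot = items[id];
    if (slot == nullptr)
        slot = new myItem();               // "new" item: allocate and register it
    // ... existing item: just update *slot ...
}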

As with any such question: you need to measure before committing to one approach. Unless your datasets are large, you might find there is no significant difference.
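Along those lines, a minimal measurement sketch; the key pattern, the element count, and the myItem stub are placeholders, so substitute your real classes and real data before drawing conclusions:

#include <chrono>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <map>
#include <unordered_map>

using DWORD = std::uint32_t;  // stand-in for the Windows typedef
struct myItem {};

template <class Map>
void run(const char *name, int n) {
    Map m;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i)
        m[static_cast<DWORD>(i)] = nullptr;                  // inserts ("new" items)
    std::size_t hits = 0;
    for (int i = 0; i < n; ++i)
        hits += (m.find(static_cast<DWORD>(i)) != m.end());  // lookups (updates)
    auto t1 = std::chrono::steady_clock::now();
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0);
    std::printf("%s: %lld us (%zu hits)\n", name, (long long)us.count(), hits);
}

int main() {
    const int n = 1000000;  // placeholder volume; use realistic numbers
    run<std::map<DWORD, myItem*>>("std::map", n);
    run<std::unordered_map<DWORD, myItem*>>("std::unordered_map", n);
}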

Richard
+1 for measuring.
Joe
So you're saying there's no guideline to go by, huh? Well, the ids associated with the items in the map really vary, yet if you take only one source that provides item information, the ids fall in a pretty close distribution.
Poni
"Large" in this context means really, truly, astronomically large. When comparing algorithms of O(1) and O(logn) the constant factors tend to dominate the runtime on any dataset that you can store in memory all at once. If performance really matters, always make real measurements for representative cases.
Alan
A (properly implemented) hash table has amortized constant time complexity guarantees, which is good enough 99% of the time.
Staffan
OK Staffan, let's be specific then: if my "map needs" are associating a number with a number (i.e. a DWORD, which is the id, with a pointer, which is ultimately a number), can you recommend a specific one?
Poni
@Poni: A hash table (e.g. boost::unordered_map) *should* be faster for that use case, yes. Generally speaking, it has better complexity. Let me clarify what I said above about amortized complexity a bit: if you do *one* operation on a hash table it has O(n) worst-case time complexity, but if you do infinitely many (for practical purposes, say 1000 or so) they will have O(1) time complexity on average. Usually, this is what you care about. But as others have said, a lot comes down to the constant factors if your data sets aren't really huge - profile!
Staffan
@Poni: Also, in practice, the key type (a DWORD in your case) affects what the best choice is. Computing a hash for a string of length m takes Θ(m) (Big Theta) time, whereas the string comparison used by the regular map has complexity O(m) (Big O). I.e., when you calculate the hash of "Hello" you have to look at all the characters, but when you, for example, compare "Hello" and "World" you only need to look at one character!
Staffan
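To make the asymmetry concrete, a toy sketch (illustrative only; real library hash functions and comparators differ):

#include <cstddef>
#include <string>

// Hashing must visit every character: Θ(m).
std::size_t toy_hash(const std::string &s) {
    std::size_t h = 0;
    for (char c : s)
        h = h * 31 + static_cast<unsigned char>(c);
    return h;
}

// Lexicographic comparison may stop at the first mismatch: O(m), often O(1).
bool toy_less(const std::string &a, const std::string &b) {
    return a < b;  // "Hello" < "World" is decided by 'H' vs 'W' alone
}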
@Staffan: +1 for underlining that the hash method affects the constants!
Matthieu M.
Depending on the strings used, it may be acceptable not to hash every character of a string. For instance, in C implementations identifiers were considered equal if their first 31 characters were. Obviously, in that context an identifier hash can hash just the first 31 characters.
MSalters
+4  A: 

Important warning: Unless you have measured (and your question suggests that you haven't) that map performance substantially influences your application's performance (i.e., a large percentage of time is spent searching and updating the map), don't bother making it faster. Stick to std::map (or std::unordered_map or any available hash_map implementation). Speeding up your application by 1% will probably not be worth the effort. Make it bug-free instead.

Echoing Richard's answer: measure performance with different map implementations using your real classes and real data.

Some additional notes:

  • Understand the difference between expected cost (hash maps usually have a lower one), worst-case cost (O(log n) for a balanced binary tree, but much higher for a hash map if an insert triggers reallocation of the hash array) and amortized cost (total cost divided by the number of operations or elements; it depends on things like the ratio of new to existing elements). You need to find out which is more constraining in your case. For example, reallocation of a hash map can be too much if you need to adhere to a very low latency limit.

  • Find out where the real bottleneck is. It might be that the cost of searching in the map is insignificant compared to, e.g., IO cost.

  • Try a more specialized map implementation. For example, a lot can be gained if you know something more about the map's key. Authors of generic map implementations do not have such knowledge.

In your example (32-bit unsigned integer keys which strongly cluster, e.g. are assigned sequentially) you can use a radix-based approach. A very simple example (treat it as an illustration, not a ready-to-use recipe):

Item *sentinel[65536];  // sentinel page of 65536 Item*, initialized to NULLs
Item **pages[65536];    // page table: each entry points to a page of 65536 Item*,
                        // initialized so every entry points to sentinel

Then search is as simple as:

Item *value = pages[index >> 16][index & 0xFFFF];

When you need to set new value:

if (pages[index >> 16] == sentinel) {
  pages[index >> 16] = allocate_new_null_filled_page();
}
pages[index >> 16][index & 0xFFFF] = value;

  • Tweak your map implementation.

    • E.g. every hash_map likes to know the approximate number of elements in advance. It helps avoid unnecessary reallocation of the hash table and (possibly) rehashing of all keys (see the sketch after this list).

    • With my specialized example above, you would certainly try different page sizes, or a three-level version.

    • A common optimization is providing a specialized memory allocator to avoid multiple allocations of small objects.
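A minimal sketch of the first point, assuming C++11's std::unordered_map (older hash_map implementations typically expose a similar resize() call); the element count here is a made-up figure:

#include <cstdint>
#include <unordered_map>

using DWORD = std::uint32_t;  // stand-in for the Windows typedef
struct myItem { /* ... */ };

int main() {
    std::unordered_map<DWORD, myItem*> items;
    // Telling the table to expect ~100000 elements up front sizes it once,
    // avoiding repeated reallocation and rehashing as the map grows.
    items.reserve(100000);
}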

Tomek Szpakowicz
I thought of having an array instead of the map, yet this approach limits me to a certain number of items and their ids, because the ids are not exactly assigned sequentially. Besides that, you did give some news; I didn't really think of allocating the pages on the fly, and this seems very correct. Thank you!
Poni
@tomek: Would you kindly show how you'd allocate the new page?
Poni
@Poni: Seriously? Just `new` it and `memset` it with zeroes. Maybe stay with `std::map` (or `std::unordered_map`) for now...
Tomek Szpakowicz
Yup, caught me on that one. I'm talking about the "new" statement, as in "new .....". It confused me, unfortunately.
Poni
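For the record, a minimal sketch of what the answer's allocate_new_null_filled_page() could look like with the declarations above; note that value-initialization (the trailing parentheses) zeroes the pointers, so an explicit memset is not needed:

Item **allocate_new_null_filled_page() {
    // new Item*[65536]() value-initializes all 65536 pointers to NULL -
    // the same effect as new-ing raw storage and memset-ing it to zero.
    return new Item*[65536]();
}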