views: 460
answers: 2

I have a simple requirement: I need a map of type map<int,int>. However, I need the fastest theoretically possible retrieval time.

I used both map and the newly proposed unordered_map from TR1, and I found that, at least while parsing a file and creating the map by inserting one element at a time, map took only 2 minutes while unordered_map took 5 minutes.

As it is going to be part of code executed on a Hadoop cluster and will contain ~100 million entries, I need the smallest possible retrieval time.

Another helpful piece of information: currently the data (keys) being inserted is a range of integers from 1, 2, ... to ~10 million.

I can also require the user to specify the max value and to insert in order as above. Will that significantly affect my implementation? (I heard map is based on red-black trees, and inserting in increasing order leads to better performance (or worse?).)

Here is the code:

map<int,int> Label;  // this is being changed to unordered_map
fstream LabelFile("Labels.txt");
string inputLine, curnode, nodelabel;
bool failed = false;

// Creating the map from Labels.txt
if (LabelFile.is_open())
{
    while (! LabelFile.eof() )
    {
        getline(LabelFile, inputLine);
        try
        {
            // split the line at the first tab: "node\tlabel"
            curnode = inputLine.substr(0, inputLine.find_first_of("\t"));
            nodelabel = inputLine.substr(inputLine.find_first_of("\t") + 1);
            Label[atoi(curnode.c_str())] = atoi(nodelabel.c_str());
        }
        catch (char* strerr)
        {
            failed = true;
            break;
        }
    }
    LabelFile.close();
}

Tentative Solution: After reviewing the comments and answers, I believe a dynamic C++ array would be the best option, since the implementation will use dense keys. Thanks.

+7  A: 

Insertion for unordered_map should be O(1) and retrieval should be roughly O(1), since it is essentially a hash table.

Your timings are therefore way OFF, or there is something WRONG with your implementation or usage of unordered_map.

You need to provide some more information, and possibly show how you are using the container.

The complexities for insertion and retrieval are given in section 6.3 of n1836.

One issue you should consider is that your implementation may need to continually rehash the structure, since you say you will have 100M+ items. In that case, if you have a rough idea of how many "unique" elements will be inserted into the container, you can pass that as a parameter to the constructor, and the container will be instantiated with a bucket table of appropriate size.
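A minimal sketch of that hint (shown with C++11's std::unordered_map here; the TR1 container accepts an initial bucket count the same way, and the 100M figure is just the estimate from the question):

#include <cstddef>
#include <unordered_map>

int main()
{
    // Rough count of unique keys expected (an assumption for illustration).
    const std::size_t expected = 100000000;

    // Passing the expected size as the initial bucket count avoids
    // repeated rehashing while the entries are inserted one at a time.
    std::unordered_map<int, int> Label(expected);

    Label[42] = 7;  // O(1) insert, no rehash triggered
}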

Beh Tou Cheh
Yes, from my experience with dicts in Python, a hash table should always be faster than a binary-tree-based map, yet at least for insertion I am finding map to be faster than unordered_map.
Yeah, it's possible that rehashing is leading to a significant increase in insertion time, since I am not providing any hint about the expected number of elements.
A: 

unordered_map (at least in most implementations) gives fast retrieval, but relatively poor insertion speed compared to map. A tree is generally at its best when the data is randomly ordered, and at its worst when the data is ordered (you constantly insert at one end of the tree, increasing the frequency of re-balancing).

Given that it's ~10 million total entries, you could just allocate a large enough array and get really fast lookups -- assuming enough physical memory that it doesn't cause thrashing, but that's not a huge amount of memory by modern standards.

Edit: yes, a vector is basically a dynamic array.
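For instance, a sketch of the array approach (assuming keys are dense integers up to a known maximum, as described in the question; maxKey and the -1 sentinel are illustrative):

#include <vector>

int main()
{
    const int maxKey = 10000000;       // ~10 million, per the question

    // One slot per possible key; -1 marks "no label yet" (an assumption).
    std::vector<int> Label(maxKey + 1, -1);

    Label[12345] = 42;                 // insert: plain array indexing
    int v = Label[12345];              // retrieve: O(1), no hashing
    (void)v;
}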

Edit2: The code you've added has some problems. Your while (! LabelFile.eof() ) loop is broken -- eof() only becomes true after a read has already failed, so the loop body runs one extra time on a failed read. You normally want to do something like while (LabelFile >> inputdata) instead. You're also reading the data somewhat inefficiently -- what you're apparently expecting is two numbers separated by a tab. That being the case, I'd write the loop something like:

int node, label;
while (LabelFile >> node >> label)
    Label[node] = label;
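Put together, a self-contained version of that reading loop might look like this (a sketch reusing the names from the question; operator>> skips whitespace, so tab-separated pairs parse fine):

#include <fstream>
#include <map>

int main()
{
    std::map<int, int> Label;           // or unordered_map / vector
    std::ifstream LabelFile("Labels.txt");

    int node, label;
    while (LabelFile >> node >> label)  // stops cleanly at EOF or bad input
        Label[node] = label;
}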
Jerry Coffin
The problem is that I am hoping to extend the implementation to handle possibly around a billion entries.
It is going to handle networks with a billion+ nodes. The map contains a label for each node in the network; the code will run on Hadoop in streaming mode.
@Mitch: yes, that's exactly what I said. @akshayubha: the question isn't really the number of entries, but the density of the keys. If it's a billion keys running from 1 to 1 billion, an array will be fine. If it's a billion keys that are (say) 128 bits apiece, an array won't work at all.
Jerry Coffin