I have some data structures:

- all_unordered_m is a big vector containing all the strings I need (all different);
- ordered_m is a small vector containing the indexes of a subset of the strings (all different) in the former vector;
- position_m maps the indexes of objects from the first vector to their position in the second one.
The string_after(index, reverse) method returns the string referenced by ordered_m after all_unordered_m[index]. ordered_m is considered circular, and is explored in natural or reverse order depending on the second parameter.
The code is something like the following:
struct ordered_subset {
    // [...]
    std::vector<std::string>& all_unordered_m; // size = n >> 1
    std::vector<size_t> ordered_m;             // size << n
    std::tr1::unordered_map<size_t, size_t> position_m;

    const std::string&
    string_after(size_t index, bool reverse) const
    {
        size_t pos = position_m.find(index)->second;
        if(reverse)
            pos = (pos == 0 ? ordered_m.size() - 1 : pos - 1);
        else
            pos = (pos == ordered_m.size() - 1 ? 0 : pos + 1);
        return all_unordered_m[ordered_m[pos]];
    }
};
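For concreteness, here is a minimal, self-contained sketch of the lookup with made-up sample data (std::unordered_map stands in for std::tr1::unordered_map; names and values are illustrative only), showing how the circular wrap-around behaves:

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical sample data: three of the five strings form the cycle
// "c" -> "a" -> "e" -> "c" -> ...
const std::vector<std::string> all_unordered = {"a", "b", "c", "d", "e"};
const std::vector<size_t> ordered = {2, 0, 4};  // indexes into all_unordered
const std::unordered_map<size_t, size_t> position = {{2, 0}, {0, 1}, {4, 2}};

// Same logic as string_after in the question.
const std::string& string_after(size_t index, bool reverse)
{
    size_t pos = position.find(index)->second;
    if (reverse)
        pos = (pos == 0 ? ordered.size() - 1 : pos - 1);
    else
        pos = (pos == ordered.size() - 1 ? 0 : pos + 1);
    return all_unordered[ordered[pos]];
}
```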
Given that:

- I do need all of the data structures for other purposes;
- I cannot change them, because I need to access the strings:
  - by their id in all_unordered_m;
  - by their index inside the various ordered_m;
  - and I need to know the position of a string (identified by its position in the first vector) inside the ordered_m vector;
- I cannot change the string_after interface without changing most of the program.
How can I speed up the string_after method, which is called billions of times and is eating up about 10% of the execution time?
EDIT:
I've tried making position_m a vector instead of an unordered_map and using the following method to avoid jumps:
const std::string&
string_after(size_t index, int direction) const
{
    return all_unordered_m[ordered_m[
        (ordered_m.size() + position_m[index] + direction) % ordered_m.size()]];
}
The change in position_m seems to be the most effective (I'm not sure that eliminating the branches made any difference; I'm tempted to say the code is more compact but equally efficient in that regard).
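The vector-based, branchless variant can be sketched as follows (again with made-up sample data; position must be sized to cover every index that can be queried, and adding ordered.size() before the modulo keeps the unsigned operand from wrapping when direction is -1):

```cpp
#include <string>
#include <vector>

const std::vector<std::string> all_unordered = {"a", "b", "c", "d", "e"};
const std::vector<size_t> ordered = {2, 0, 4};  // cycle "c" -> "a" -> "e"
// position[i] = position of all_unordered[i] in ordered; slots for strings
// not in the subset (here set to 0) are never queried, so their value is moot.
const std::vector<size_t> position = {1, 0, 0, 0, 2};

// direction is +1 for natural order, -1 for reverse.
const std::string& string_after(size_t index, int direction)
{
    return all_unordered[ordered[
        (ordered.size() + position[index] + direction) % ordered.size()]];
}
```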