When a C++ function accepts an std::vector argument, the usual pattern is to pass it by const reference, such as:
int sum2(const std::vector<int> &v)
{
    int s = 0;
    for(size_t i = 0; i < v.size(); i++) s += fn(v[i]);
    return s;
}
I believe that this code results in double dereferencing when the vector elements are accessed: the CPU must first dereference v to read the pointer to the first element, and then dereference that pointer again to read the element itself. I would expect it to be more efficient to pass a shallow copy of the vector object on the stack. Such a shallow copy would encapsulate a pointer to the first element and the size, with the pointer referencing the same memory area as the original vector does.
int sum2(vector_ref<int> v)
{
    int s = 0;
    for(size_t i = 0; i < v.size(); i++) s += fn(v[i]);
    return s;
}
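For reference, the vector_ref wrapper I have in mind would look roughly like this. It is only a sketch: the name and interface are my own, and fn above is assumed to be some per-element function declared elsewhere.

#include <cstddef>
#include <vector>

template<typename T>
class vector_ref
{
public:
    // Shallow copy of the vector: only the data pointer and the size are stored.
    vector_ref(const std::vector<T> &v) : data_(v.data()), size_(v.size()) {}

    const T &operator[](size_t i) const { return data_[i]; }
    size_t size() const { return size_; }

private:
    const T *data_;
    size_t size_;
};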
Similar performance, but with much less convenience, could be achieved by passing a random-access iterator pair (a sketch follows below). My question is: what is the flaw in this idea? I expect there is some good reason why smart people accept paying the performance cost of passing the vector by reference, or put up with the inconvenience of iterators.
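The iterator-pair alternative I mean would be something like the following, where fn again stands for whatever per-element function is being applied:

template<typename It>
int sum2(It first, It last)
{
    int s = 0;
    for(; first != last; ++first) s += fn(*first);
    return s;
}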
Edit: Based on the comments below, please consider the situation where I simply rename the suggested vector_ref class to slice or range. The intention is to use random-access iterator pairs with more natural syntax.
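A rough sketch of that slice idea, wrapping an iterator pair (again, the name and interface are purely illustrative):

#include <cstddef>
#include <iterator>

template<typename It>
class slice
{
public:
    // Stores the start iterator and the element count of the range [first, last).
    slice(It first, It last) : first_(first), size_(static_cast<size_t>(last - first)) {}

    typename std::iterator_traits<It>::reference operator[](size_t i) const { return first_[i]; }
    size_t size() const { return size_; }

private:
    It first_;
    size_t size_;
};

// Example usage with the sum2 above:
//   std::vector<int> v = {1, 2, 3};
//   int s = sum2(slice<std::vector<int>::const_iterator>(v.begin(), v.end()));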