views: 349
answers: 9

In order to give functions the option to modify the vector, I can't do:

curr = myvec.at( i ); // copies the element, so the calls below only modify the copy
doThis( curr );
doThat( curr );
doStuffWith( curr );

But I have to do:

doThis( myvec.at( i ) );
doThat( myvec.at( i ) );
doStuffWith( myvec.at( i ) );

(as the answers to my other question pointed out)

  • I'm going to make a hell of a lot of calls to myvec.at() then. How fast is it, compared to the first example, which stores the result in a variable?

  • Is there a different option for me? Can I somehow use pointers?

When it gets serious there will be thousands of calls to myvec.at() per second, so every little performance-eater matters.

+2  A: 

Operator[] might be faster than at, because it isn't required to do bounds checking.

You can make curr a reference to do what you want.

MyClass & curr = myvec.at(i);

You might also do some benchmarking before getting worried. Modern processors can handle thousands of operations per second quite easily.

Mark Ransom
+13  A: 

You can use a reference:

int &curr = myvec.at(i);
// do stuff with curr

The at member function does bounds checking to make sure the argument is within the size of the vector. Profiling is the only way to know exactly how much slower it is compared to operator[]. Using a reference here allows you to do the lookup once and then use the result in other places. And you can make it a reference-to-const if you want to protect yourself from accidentally changing the value.
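
For instance, a minimal sketch (doSomething and the values are made up for illustration):

#include <cstddef>
#include <iostream>
#include <vector>

void doSomething(int &x) { x += 10; }  // takes its argument by reference

int main() {
    std::vector<int> myvec(5, 1);
    std::size_t i = 2;

    int &curr = myvec.at(i);        // bounds-checked lookup, done once
    doSomething(curr);              // modifies myvec[2] directly
    curr = 42;                      // so does this

    const int &view = myvec.at(i);  // read-only alias; writing through it won't compile
    std::cout << view << '\n';      // prints 42
}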

Kristo
Depends on what "to give functions the option to modify the vector" means. Some vector modifications invalidate references - anything that causes reallocation.
Steve Jessop
@Steve, that's a good point. I'm assuming the OP is modifying *values* within the vector based on the given sample code. If the functions might modify the vector itself, we'll need a different approach.
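A tiny sketch of the hazard Steve describes:

#include <vector>

int main() {
    std::vector<int> v(1, 0);
    int &r = v.at(0);
    v.push_back(1); // may reallocate the storage...
    r = 5;          // ...in which case r dangles and this is undefined behavior
}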
Kristo
+1  A: 

When performance is an issue, there is no substitute for profiling. The optimization capabilities of compilers change from version to version, and tiny, insignificant alterations to source code can dramatically change the resulting performance.

No one can answer this question but you: create a test harness, throw several algorithms at it, and see what you get.

P.S. If performance really is an issue, well, I got a factor-10 speed increase out of a PNG decoder by removing the vectors and replacing them with raw arrays. Again, this was for Visual Studio 6. I am not claiming that a raw array substitution will give you a factor-10 improvement, but it is something to try.
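
As a starting point, a test harness might look something like this (a rough sketch assuming a C++11 compiler; the vector size and the checksum trick are arbitrary):

#include <chrono>
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v(1000000, 1);
    long long sum = 0;

    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < v.size(); ++i)
        sum += v.at(i);   // bounds-checked access
    auto t1 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < v.size(); ++i)
        sum += v[i];      // unchecked access
    auto t2 = std::chrono::steady_clock::now();

    using std::chrono::duration_cast;
    using std::chrono::microseconds;
    std::cout << "at():       " << duration_cast<microseconds>(t1 - t0).count() << " us\n"
              << "operator[]: " << duration_cast<microseconds>(t2 - t1).count() << " us\n"
              << "checksum:   " << sum << '\n';  // keeps the loops from being optimized away
}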

Chris Becke
I don't know about VS6 (luckily I was able to use gcc at the time), but a tenfold speed increase seems like an awful lot of difference. Were you using the same algorithms in both cases? Was the vector size hinted to the implementation? The current implementation of a vector in g++ contains three pointers, to `begin()`, `end()` and `begin()+capacity`. Unless you enable checked iterators, operations on a vector are as fast as operations on raw data.
David Rodríguez - dribeas
The algorithm was the same. We just changed vectors of bytes into raw arrays of bytes. Functions were changed to take pointers and, where necessary, integers giving the size of the array. Some loops were rolled into memcpy calls.
Chris Becke
A: 

The complexity of at() is constant; in practice, this means that it must be designed not to have any significant performance penalty.

You can use [], which is also constant complexity, but does not check bounds. This would be equivalent to using pointer arithmetic and, thus, potentially a bit faster than the former.

In any case, a vector is specifically designed for constant-time access to any of its elements, so this should be the least of your worries.

Daniel Daranas
+2  A: 

The reason the first doesn't work is that you're not setting a pointer or iterator to the address of the ith element. Instead you're setting curr equal to the value of the ith element and then modifying curr. I'm assuming that doThis and doThat take their arguments by reference.

Do this:

MyObject& curr = myvec.at( i );
wheaties
A: 

Options that I see, in roughly inverse order of preference:

  1. Store pointers in your container instead of the actual objects (see the sketch after this list). This may be advisable anyway, if the objects are complex enough that copying them around is problematic.
  2. Use the indexing operator [] instead of at().
  3. Just call at() once, and save it into a reference (see Kristo's answer above).
  4. Forget about it until you actually have a problem with excessive runtime. If that happens, profile your code first to make sure the bottleneck is here, and only then worry about doing one of the above to speed things up.
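
For option 1, a minimal sketch (MyObject is a placeholder type; note that you become responsible for deleting the objects):

#include <cstddef>
#include <vector>

struct MyObject { int value; };

int main() {
    std::vector<MyObject*> myvec;    // store pointers, not objects
    myvec.push_back(new MyObject());

    MyObject &curr = *myvec[0];      // no copy; changes hit the stored object
    curr.value = 42;

    for (std::size_t i = 0; i < myvec.size(); ++i)
        delete myvec[i];             // the vector won't free them for you
}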

Honestly, what you should do is play with the four different approaches, and just use the one that produces the easiest to understand code. In most cases we are happy to sacrifice a few machine cycles for code that is easier for human beings to maintain.

T.E.D.
+3  A: 

From my own tests with similar code (compiled under gcc and Linux), operator[] can be noticeably faster than at, not because of the bounds checking, but because of the overhead of exception handling. Replacing at (which throws an exception on out-of-bounds) with my own bounds checking that raised an assert on out-of-bounds gave a measurable improvement.
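
Something along these lines (checked_at is a hypothetical stand-in for what I used, not a standard facility):

#include <cassert>
#include <cstddef>
#include <vector>

// Bounds check via assert: aborts in debug builds, and compiles away
// entirely when NDEBUG is defined, so there's no exception machinery.
template <typename T>
T &checked_at(std::vector<T> &v, std::size_t i) {
    assert(i < v.size());
    return v[i];
}

int main() {
    std::vector<int> v(10, 0);
    checked_at(v, 3) = 7;   // checked in debug builds, raw speed in release
}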

Using a reference, as Kristo said, lets you incur the bounds-checking overhead only once.

Ignoring bounds checking and exception handling overhead, both operator[] and at should optimize down to the equivalent of direct array access or direct access via pointer.

As Chris Becke said, though, there's no substitute for profiling.

Josh Kelley
Actually triggering exceptions is generally slow; checking for them isn't necessarily slow.
Martin Beckett
@Martin: Depends a lot on the implementation. On quite a few, it's the stack unwinding which is expensive, not the "triggering" (throwing)
MSalters
@MSalters - yes, but the test isn't necessarily expensive if the exception never fires.
Martin Beckett
A: 

Vectors are the container best suited for access speed. Access to a random element of a vector is O(1), compared with O(n) for general linked lists and O(log n) for balanced trees.

However, this question is misplaced, as the answer to your other question led you astray by not explaining how to fix your original problem by using a reference.

Richard Harrison
A: 

If you load up a vector and then process it without adding or deleting any elements, consider getting a pointer to the underlying array and using array operations on it to 'avoid the vector overhead'.

If you are adding or deleting elements as part of your processing, then this is not safe to do, as the underlying array may be moved at any point by the vector itself.
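
A sketch of that approach (process is a made-up stand-in for your per-element work):

#include <cstddef>
#include <vector>

void process(int *data, std::size_t n) {  // plain pointer-and-length interface
    for (std::size_t i = 0; i < n; ++i)
        data[i] *= 2;
}

int main() {
    std::vector<int> myvec(1000, 1);
    // &myvec[0] points at the contiguous underlying array (myvec.data() in
    // C++11); it stays valid only while no reallocation occurs.
    process(&myvec[0], myvec.size());
}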

EvilTeach