When I use a C++ vector, the time spent is 718 milliseconds, while when I use a plain array, the time is almost 0 milliseconds.

Why is there such a large performance difference?

#include <tchar.h>
#include <vector>
#include <iostream>
#include <ctime>
using namespace std;

int _tmain(int argc, _TCHAR* argv[])
{
    const int size = 10000;
    clock_t start, end;

    start = clock();
    vector<int> v(size*size);
    for (int i = 0; i < size; i++)
    {
        for (int j = 0; j < size; j++)
        {
            v[i*size+j] = 1;
        }
    }
    end = clock();
    cout << (end - start) << " milliseconds." << endl; // 718 milliseconds

    start = clock();
    int arr[size*size];
    for (int i = 0; i < size; i++)
    {
        for (int j = 0; j < size; j++)
        {
            arr[i*size+j] = 1;
        }
    }
    end = clock();
    cout << (end - start) << " milliseconds." << endl; // 0 milliseconds
    return 0;
}
+17  A: 

Your array arr is allocated on the stack, i.e., the compiler has calculated the necessary space at compile time. At the beginning of the function, the compiler inserts an assembler instruction like

sub esp, 10000*10000*sizeof(int)

which decreases the stack pointer (esp) by 10000 * 10000 * sizeof(int) bytes to make room for an array of 10000 × 10000 integers. This operation is almost instantaneous.

The vector is heap allocated and heap allocation is much more expensive. When the vector allocates the required memory, it has to ask the operating system for a contiguous chunk of memory and the operating system will have to perform significant work to find this chunk of memory.

As Andreas says in the comments, all your time is spent in this line:

vector<int> v(size*size);

Accessing the vector inside the loop is just as fast as for the array.
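A minimal sketch of how to verify this, timing the construction and the fill loop separately (the timing scheme is illustrative, not taken from the question):

#include <vector>
#include <iostream>
#include <ctime>

int main()
{
    const int size = 10000;

    std::clock_t t0 = std::clock();
    std::vector<int> v(size * size);        // allocation + zero-initialization
    std::clock_t t1 = std::clock();
    for (int i = 0; i < size; i++)
        for (int j = 0; j < size; j++)
            v[i * size + j] = 1;            // element access only
    std::clock_t t2 = std::clock();

    std::cout << "construction: " << (t1 - t0) * 1000 / CLOCKS_PER_SEC << " ms, "
              << "fill loop: " << (t2 - t1) * 1000 / CLOCKS_PER_SEC << " ms\n";
    return 0;
}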


Edit:

After all the comments about performance optimizations and compiler settings, I did some measurements this morning. I had to set size = 3000, so my measurements use roughly a tenth of the original number of entries. All measurements were performed on a 2.66 GHz Xeon:

  1. With debug settings in Visual Studio 2008 (no optimization, runtime checks, and debug runtime) the vector test took 920 ms compared to 0 ms for the array test.

    98.48% of the total time was spent in vector::operator[], i.e., the time was indeed spent on the runtime checks.

  2. With full optimization, the vector test needed 56 ms (with a tenth of the original number of entries) compared to 0 ms for the array.

    The vector ctor required 61.72% of the total application running time.

So I guess everybody is right depending on the compiler settings used. The OP's timing suggests an optimized build or an STL without runtime checks.

As always, the moral is: profile first, optimize second.

Sebastian
+1 Yes, move `vector<int> v(size*size);` out of the timing and there shouldn't be any difference.
Andreas Brinck
you may also need to allow the compiler to inline stuff to get the same speeds of course, i.e. don't compare the speeds with optimizations off
jk
Switching optimisations on is indeed the key. C++ is designed to take advantage of compiler optimisations - if you don't use them, performance will definitely suffer.
anon
Or you could make the array `int* arr = new int[size*size];`, which would use a heap allocation. However, don't include these setup costs in your timing unless that is relevant to what you want to measure.
Daemin
@Daemin that's what I was thinking. It's really the only fair comparison. Once the memory is allocated, it shouldn't matter whether it came from the stack or the heap, but yes: making a system call to allocate memory is going to be expensive.
San Jacinto
I highly doubt it takes 700 ms to perform a single heap allocation.
jalf
@jalf: In a debug build? With a debug heap and checking iterators?
Sebastian
Yes. Checked iterators do not affect the time taken for heap allocations, which was what your post claimed. Of course *other* aspects of `std::vector` cause the slowdown in a debug build, but it is certainly not the single call to `new`.
jalf
Checked iterators can slow down access. C++ doesn't forbid `[]` from doing bounds checks (it requires `.at()` to perform them), and it's perfectly reasonable for a debug build to check.
David Thornley
The difference between stack and heap allocation should not be able to account for 718 milliseconds of time.
Omnifarious
Sounds great, too bad it's not true. 718 ms for a single allocation on a clean heap? The real answer is that operator[] is much slower in a vector.
Charles Eli Cheese
Edited the answer to include performance measurements.
Sebastian
+8  A: 

If you are compiling this with a Microsoft compiler, to make it a fair comparison you need to switch off iterator security checks and iterator debugging by defining _SECURE_SCL=0 and _HAS_ITERATOR_DEBUGGING=0.
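For example (a sketch; these macros apply to the pre-VS2010 standard library and must be defined before any standard header is included):

#define _SECURE_SCL 0               // turn off checked iterators
#define _HAS_ITERATOR_DEBUGGING 0   // turn off iterator debugging
#include <vector>                   // standard headers only after the defines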

Secondly, the constructor you are using initialises each element of the vector to zero, while you do not memset the array to zero before filling it. So you are traversing the vector twice.

Try:

vector<int> v; 
v.reserve(size*size);
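Since reserve leaves the vector empty, a rough sketch of the fill would then append the elements rather than index them:

for (int i = 0; i < size * size; i++)
    v.push_back(1);  // grows the size; no reallocation thanks to reserve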
xcut
After `vector::reserve` you have to call `vector::push_back` to increase the vector's size. Using an unchecked `operator[]` would work, but it'd be evil. `vector::resize` would also initialize with 0.
Sebastian
+2  A: 

You are probably using VC++, in which case the standard library components by default perform many checks at run time (e.g., whether an index is in range). These checks can be turned off by defining certain macros as 0 (I think _SECURE_SCL).

Another thing is that I can't even run your code as is: the automatic array is far too large for the stack. When I make it global, then with MinGW 3.5 the times I get are 627 ms for the vector and 26875 ms (!!) for the array, which indicates there are really big problems with an array of this size.
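(A sketch of that workaround, assuming the array is simply moved to namespace scope so it gets static storage instead of stack space:)

#include <iostream>
#include <ctime>

const int size = 10000;
int arr[size * size];  // static storage: no stack overflow, zero-initialized

int main()
{
    std::clock_t start = std::clock();
    for (int i = 0; i < size * size; i++)
        arr[i] = 1;
    std::cout << (std::clock() - start) * 1000 / CLOCKS_PER_SEC << " ms\n";
    return 0;
}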

As to this particular operation (filling with value 1), you could use the vector's constructor:

std::vector<int> v(size * size, 1);

and the fill algorithm for the array:

std::fill(arr, arr + size * size, 1);
visitor
A: 

When you declare the array, it lives on the stack (or in a static memory zone), which is very fast, but its size cannot grow.

When you declare the vector, it allocates dynamic memory, which is not as fast, but is more flexible in its memory allocation, so you can change the size rather than dimensioning it to the maximum up front.

Khelben
+2  A: 

Change the assignment to e.g. arr[i*size+j] = i*j, or some other non-constant expression. I think the compiler optimizes away the whole loop, since the assigned values are never used, or replaces the array with some precalculated values, so that the loop isn't even executed and you get 0 milliseconds.

Having changed 1 to i*j, I get the same timings for both vector and array, unless I pass the -O1 flag to gcc; then in both cases I get 0 milliseconds.

So, first of all, double-check whether your loops are actually executed.
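One way to rule that out (a sketch, with an illustrative smaller size) is to actually use the computed values, e.g. by printing their sum, so the optimizer cannot discard the loops:

#include <vector>
#include <iostream>

int main()
{
    const int size = 1000;
    std::vector<int> v(size * size);
    for (int i = 0; i < size; i++)
        for (int j = 0; j < size; j++)
            v[i * size + j] = i * j;  // non-constant values

    long long sum = 0;
    for (int i = 0; i < size * size; i++)
        sum += v[i];
    std::cout << sum << '\n';  // using the result keeps the loops alive
    return 0;
}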

el.pescado
A: 

When profiling code, make sure you are comparing similar things.

vector<int> v(size*size);

initializes each element in the vector,

int arr[size*size];

doesn't. Try

int arr[size * size];
memset( arr, 0, size * size * sizeof(int) );  // size is in bytes, not elements

and measure again...

DevSolar
I disagree - it is a flaw of `vector` that even with POD types, there is no way to avoid initialization in the case where you're going to manually set every element immediately afterwards. It is absolutely right that a benchmark of vector vs. array should show that array is faster in cases where you don't need zero-initialization. That said, in this case he's manually initializing all the values to 1, so it might be more fair to compare the array code as it is, against `vector<int> v(size*size,1);`
Steve Jessop
Have you tried `vector<int> v(0); v.resize( DESIRED_SIZE );`? It should result in an empty, zero-sized vector being assigned, which is then re-sized to DESIRED_SIZE, without any constructors / initialisation.
DevSolar
No, `resize` is really `void resize(size_type sz, T c = T())`. Same deal as the constructor, it initializes all the new values.
Steve Jessop
Are you absolutely positive about that? `resize()` changes `capacity()`, not `size()`...?!?
DevSolar
+2  A: 

To get a fair comparison I think something like the following should be suitable:

#include <sys/time.h>
#include <vector>
#include <iostream>
#include <algorithm>
#include <numeric>

int main()
{
  static size_t const size = 7e6;

  timeval start, end;
  int sum;

  {
    gettimeofday(&start, 0);
    std::vector<int> v(size, 1);
    sum = std::accumulate(v.begin(), v.end(), 0);
    gettimeofday(&end, 0);

    std::cout << "= vector =" << std::endl
          << "(" << end.tv_sec - start.tv_sec
          << " s, " << end.tv_usec - start.tv_usec
          << " us)" << std::endl
          << "sum = " << sum << std::endl << std::endl;
  }


  {
    gettimeofday(&start, 0);
    int * const arr =  new int[size];
    std::fill(arr, arr + size, 1);
    sum = std::accumulate(arr, arr + size, 0);
    delete [] arr;
    gettimeofday(&end, 0);

    std::cout << "= Simple array =" << std::endl
          << "(" << end.tv_sec - start.tv_sec
          << " s, " << end.tv_usec - start.tv_usec
          << " us)" << std::endl
          << "sum = " << sum << std::endl << std::endl;
  }

}

In both cases, dynamic allocation and deallocation are performed, as well as accesses to the elements.

On my Linux box:

$ g++ -O2 foo.cpp 
$ ./a.out 
= vector =
(0 s, 62820 us)
sum = 7000000

= Simple array =
(0 s, 70012 us)
sum = 7000000

The std::vector<> case is consistently faster, albeit not by much. The point is that std::vector<> can be just as fast as a simple array if your code is structured appropriately.


On a related note switching off optimization makes a huge difference in this case:

$ g++ foo.cpp 
$ ./a.out 
= vector =
(0 s, 167749 us)
sum = 7000000

= Simple array =
(0 s, 83701 us)
sum = 7000000

Many of the optimization assertions made by folks like Neil and jalf are entirely correct.

HTH!

Void
A: 

Two things. One, operator[] is much slower for vector. Two, in most implementations vector will behave strangely at times when you add elements one at a time. I don't mean just that it allocates more memory; it does some genuinely bizarre things at times.

The first one is the main issue. For a mere million bytes, even reallocating the memory a dozen times should not take long (it won't do it on every added element).

In my experiments, preallocating doesn't change its slowness much. When the contents are actual objects, it basically grinds to a halt if you try to do something simple like sort it.

Conclusion: don't use STL or MFC vectors for anything large or computation-heavy. They are implemented poorly/slowly and cause a lot of memory fragmentation.

Charles Eli Cheese