views: 105
answers: 3

Is there significant CPU or memory overhead associated with using automatic (variable-length) arrays with g++ or the Intel compiler on a 64-bit x86 Linux platform?

int function(int N) {
    double array[N];  // automatic (variable-length) array
    // ... use array ...
}

  • overhead compared to allocating the array beforehand (assuming the function is called multiple times)

  • overhead compared to using new

  • overhead compared to using malloc

The array size ranges from roughly 1 KB to 16 KB; stack overrun is not a problem.
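For reference, the three alternatives being compared might be sketched like this (function names are illustrative; the VLA form is a C99 feature that g++ and the Intel compiler accept as an extension in C++):

```cpp
#include <cstdlib>

double sum_vla(int n) {
    double array[n];            // stack: one run-time stack-pointer adjustment
    for (int i = 0; i < n; ++i) array[i] = i;
    double s = 0;
    for (int i = 0; i < n; ++i) s += array[i];
    return s;
}

double sum_new(int n) {
    double *array = new double[n];   // heap: allocator call per invocation
    for (int i = 0; i < n; ++i) array[i] = i;
    double s = 0;
    for (int i = 0; i < n; ++i) s += array[i];
    delete[] array;
    return s;
}

double sum_malloc(int n) {
    double *array = (double *)malloc(n * sizeof *array);  // heap, C style
    for (int i = 0; i < n; ++i) array[i] = i;
    double s = 0;
    for (int i = 0; i < n; ++i) s += array[i];
    free(array);
    return s;
}
```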

+5  A: 

The difference in performance between a VLA and a statically sized array should be negligible. You may need a few extra instructions to calculate how much to grow the stack, but that should be noise in any real program.

Hmm, on further thought, there could also be some overhead depending on how the local variables are laid out in memory and whether there are multiple VLAs.

Consider the case where you have the following locals (and assume they are laid out in memory in the order they are declared).

int x;
int arr1[n];
int arr2[n];

Now, whenever you need to access arr2, the code must calculate its location relative to the base pointer at run time, because that offset depends on n.

R Samuel Klatchko
thank you. that was my gut feeling, just wanted to be doubly sure. luckily, I only have to worry about a single VLA
aaa
A: 
  • Review the assembly output
  • Profile it for your application
  • Check your memory usage
Paul Nathan