I have a function which grows an array when trying to add an element if it is full. Which of the execution blocks is better or faster?

I think my second block (commented out) may be wrong, because after doubling my array I then go back and point to the original.

When creating arrays, does the compiler look for a contiguous block of memory that the array fits into entirely? (On the stack or the heap? I don't fully understand which, and though it's important for me to learn, it's irrelevant to the actual question.)

If so, would this mean using the second block could potentially overwrite other information by overwriting adjacent memory? (Since the original would use 20 adjacent blocks of memory, and the latter 40.)

Or would it just mean the location of elements in my array would be split, causing poor performance?

void Grow()
{
  length *= 2; // double the size of our stack

  // create temp pointer to this double sized array
  int* tempStack = new int[length];

  // loop the same number of times as original size
  for(int i = 0; i < (length / 2); i++) 
  {
    // copy the elements from the original array to the temp one
    tempStack[i] = myStack[i]; 
  }

  delete[] myStack; //delete the original pointer and free the memory
  myStack = tempStack; //make the original point to the new stack

  //Could do the following - but may not get contiguous memory block, causing
  // overwritten data
#if 0
  int* tempStack = myStack; //create temp pointer to our current stack
  delete[] myStack; //delete the original pointer and free memory
  myStack = new int[length *= 2]; //delete not required due to new?
  myStack = tempStack;
#endif
}
+3  A: 

The second block wouldn't accomplish what you want at all.

When you do

myStack = new int[length *= 2];

then the system will return a pointer to wherever it happens to allocate the new, larger array.

You then reassign myStack to the old location (which you've already de-allocated!), which means you're pointing at memory that's not allocated (bad!) and you've lost the pointer to the new memory you just allocated (also bad!).
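Spelled out, the sequence in your second block does this (comments added for clarity):

int* tempStack = myStack;       // tempStack and myStack now point to the same 20-int block
delete[] myStack;               // that block is freed; both pointers are now dangling
myStack = new int[length *= 2]; // myStack points to a fresh 40-int block (and length becomes 40)
myStack = tempStack;            // the fresh block is leaked; myStack dangles again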

Edit: To clarify, your array will be allocated on the heap. Additionally, the (new) pointer returned by your larger array allocation (new int[foo]) will be a contiguous block of memory, like the old one, just probably in a different location. Unless you go out of bounds, don't worry about "overwriting" memory.

Sapph
I believe the first example is the way you want to do it. :)
Sapph
Well that's odd, because when I run it and use getSize() (just stack size, not num elements) it tells me I have a stack size of 40 (the original was 20). Perhaps I just misinterpret what you are saying. I'll stick with the first, thanks.
That's because your "length" variable is indeed updated to 40 (note the `length *= 2`... that will set length for you). Your pointer will be wrong, though, and I'm pretty sure that if you try to access data, you'll segfault or get undefined results. :(
Sapph
The second block doesn't segfault because that memory is still allocated to the process, but reading from it is indeed undefined and leaks the allocation that was just stored in myStack.
Roger Pate
+1  A: 

Your second block is incorrect because of this sequence:

int* tempStack = myStack; //create temp pointer to our current stack
delete[] myStack; //delete the original pointer and free memory

tempStack and myStack are both simply pointers to the same block of memory. When you delete[] the pointer in the second line, you no longer have access to that memory via either pointer.

Using C++ memory management, if you want to grow an array, you need to create a new array before you delete the old one and copy over the values yourself.

That said, since you are working with POD, you could use C-style memory management, which supports directly growing an array via realloc. That can be a bit more efficient if the memory manager realizes it can grow the buffer without moving it (although if it can't grow the buffer in place, it will fall back on the same copy-to-a-new-block approach you use in your first block).
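For illustration, here is a minimal sketch of a realloc-based Grow. Note this assumes the buffer was originally obtained with malloc (realloc may only be used on malloc-family allocations, not on memory from new[]); the member names just follow your question:

#include <cstdlib>   // std::realloc
#include <new>       // std::bad_alloc

void Grow()
{
  int newLength = length * 2;
  // try to grow the existing buffer in place; realloc copies to a new block if it can't
  int* grown = static_cast<int*>(std::realloc(myStack, newLength * sizeof(int)));
  if (grown == NULL)
    throw std::bad_alloc(); // on failure the original buffer is untouched and still owned by us
  myStack = grown;
  length = newLength;
}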

C style memory management is only okay for arrays of POD. For non-POD, you must do the create new array/copy/delete old array technique.

R Samuel Klatchko
I've never done anything in C so I guess I'll stick with the copying. I was just wondering if it could be avoided the way I did it, but obviously not. Thanks for the info.
+2  A: 

This doesn't exactly answer your question, but you shouldn't be doing either one. Generally, new[] and delete[] should be avoided in favor of std::vector. new[] is hard to use because it requires explicit memory management (if an exception is thrown while the elements are being copied, you need to catch it and delete the array to avoid a memory leak). std::vector takes care of this for you, grows itself automatically, and is likely to have an efficient implementation tuned by the vendor.
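As a rough sketch, the whole Grow machinery collapses to something like this (class and member names here are just illustrative):

#include <vector>

class IntStack
{
  std::vector<int> data;
public:
  void Push(int value) { data.push_back(value); } // reallocation and copying handled internally
  void Pop()           { data.pop_back(); }
  int  Top() const     { return data.back(); }
  int  Size() const    { return static_cast<int>(data.size()); }
};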

One argument for using explicit arrays is to have a contiguous block of memory that can be passed to C functions, but that also can be done with std::vector for any non-pathological implementation (and the next version of the C++ standard will require all conforming implementations to support that). (For reference, see http://www.gotw.ca/publications/mill10.htm by Herb Sutter, former convener of the ISO C++ standards committee.)
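For example (c_sum here is a hypothetical C function taking a pointer and a count):

// hypothetical C API expecting a contiguous int buffer
extern "C" int c_sum(const int* data, size_t count);

std::vector<int> v(20, 1);
int total = c_sum(&v[0], v.size()); // &v[0] points at the vector's contiguous buffer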

Another argument against std::vector is the weirdness with std::vector<bool>, but if you need that you can simply use std::vector<char> or std::vector<int> instead. (See: http://www.gotw.ca/publications/mill09.htm)

jamesdlin
Or use std::deque<bool> instead of std::vector<bool>.
David Thornley