Why is it a problem to have a huge piece of code between the new and delete of a char array?

Example

void this_is_bad() /* You wouldn't believe how often this kind of code can be found */
{
  char *p = new char[5];    /* spend some cycles in the memory manager */
  /* do some stuff with p */
  delete[] p;      /* spend some more cycles, and create an opportunity for a leak */
}
A: 

I would argue that it's not a problem to have huge amounts of code between a new and delete of any variable. The idea of using new is to place a value on the heap and hence keep it alive for long periods of time. Having code execute on these values is an expected operation.

What can get you into trouble when you have a huge amount of code between a new and delete in the same method is the chance of accidentally leaking the allocation due to ...

  • An exception being thrown
  • Methods that get so long you can't see the beginning or end, so people start arbitrarily returning from the middle without realizing they skipped a delete call

Both of these can be fixed by using an RAII type such as std::vector<char> instead of a new / delete pair.
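For instance, a minimal sketch of the difference (the function names are made up for illustration):

#include <stdexcept>
#include <vector>

void leaky(bool fail)
{
   char *p = new char[5];
   if (fail)
      throw std::runtime_error("oops");   /* the delete[] below never runs: leak */
   delete[] p;
}

void safe(bool fail)
{
   std::vector<char> buf(5);              /* RAII type owns the storage */
   if (fail)
      throw std::runtime_error("oops");   /* buf's destructor still runs: no leak */
}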

JaredPar
Usually, I'd rather use an std::string than an std::vector<char>.
Brian
@Brian, When treating it as an array type I prefer vector for consistency, but I can't argue against any RAII type over manual new/delete.
JaredPar
@Brian: Have to agree with Jared. It may not be a string.
Martin York
+4  A: 

The article you reference is making the point that

 char p[5];

would be just as effective in this case and have no danger of a leak.

In general, you avoid leaks by making the life cycle of allocated memory very clear, so that the new and the delete can be seen to be related.

A large separation between the two is harder to check, and requires careful consideration of whether there are any ways out of the code that might dodge the delete.

djna
+1  A: 

The link (and source) of that code is lamenting the unnecessary use of the heap there. For a constant, and small, amount of memory there's no reason not to allocate it on the stack.

Instead:

void this_is_good()
{
   /* Avoid allocation of small temporary objects on the heap */
   char p[5];    /* Use the stack instead */
   /* do some stuff */
}

There's nothing inherently wrong with the original code, though; it's just less than optimal.

Kevin Montrose
I like this. Then I can use a pointer to "p" on the stack to avoid duplicating the array when passing it as a reference.
yelinna
Pointers to stack variables give me the heebie-jeebies. You have to make sure they don't find their way into any heap-allocated objects that live past the function returning, which gets hard once you start passing them as parameters.
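A rough sketch of that hazard (the names are invented for the example):

#include <vector>

std::vector<char*> saved_pointers;   /* outlives any single function call */

void danger()
{
   char p[5] = "abcd";
   saved_pointers.push_back(p);      /* stores a pointer into this stack frame */
}                                    /* frame is gone: the stored pointer now dangles */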
Kevin Montrose
+7  A: 

Because somebody may throw an exception.
Because somebody may add a return.

If you have lots of code between the new and delete, you may not spot that you need to deallocate the memory before the throw/return.

Why do you have a RAW pointer in your code?
Use a std::vector.
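As a rough sketch of that replacement (this_is_better is a made-up name), the function from the question becomes:

#include <vector>

void this_is_better()
{
   std::vector<char> p(5);    /* freed automatically, even on an early return or throw */
   /* do some stuff with &p[0] */
}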

Martin York
A: 

It isn't, assuming that the "huge piece of code":

  1. Always runs "delete [] p;" before calling "p = new char[size];" or "throw exception;"
  2. Always runs "p = 0;" after calling "delete [] p;"

Failure to meet the first condition will cause the memory pointed to by p to be leaked. Failure to meet the second condition may result in a double delete. In general, it is best to use std::vector, so as to avoid any problems.
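A small sketch of what following both conditions looks like (the function and its size parameter are just for illustration):

#include <cstddef>

void reuse_buffer(std::size_t size)
{
   char *p = new char[5];
   /* ... huge piece of code ... */

   delete[] p;            /* condition 1: release the old block before re-pointing p */
   p = 0;                 /* condition 2: a stray "delete[] p;" is now harmless */

   p = new char[size];
   /* ... more code ... */
   delete[] p;
   p = 0;
}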

Michael Aaron Safyan
Point 2 is worthless. It is a hangover from C. With C++, scope, and smart pointers this is rarely needed.
Martin York
Point 2 isn't worthless. Try running "delete [] p;", then "delete [] p;" again. The delete operator does not set p to 0.
Michael Aaron Safyan
Also, as I said, it is best to use std::vector for this kind of thing. (For single objects, boost::shared_ptr is the best way to go, but scoped_ptr and shared_ptr don't apply to arrays.)
Michael Aaron Safyan
With std::vector<>, or smart pointers, you don't have to worry about either point. With new[] and delete[], you have to worry about both. I don't see why point 2 would be invalid but not point 1.
David Thornley
A: 

Are you asking if this would be better?

void this_is_great()
{
   char* p = new char[5];
   delete[] p;
   return;
}

It's not.

AShelly
A: 

That particular example isn't stating that having a bunch of code in between a new and delete is necessarily bad; it's stating that if there are ways to write code that don't use the heap, you might want to prefer them to avoid heap corruption.

It's decent enough advice; if you reduce how much you use the heap, you know where to look when the heap is corrupted.

MSN
+1  A: 

Next to all the interesting answers about the heap, and about keeping new and delete close together, I might add that the sheer fact of having a huge amount of code in one function is to be avoided. If the huge amount of code separates two related lines of code, it's even worse.

I would differentiate between 'amount of work' and 'amount of code':

#include <objbase.h>   /* CoInitialize / CoUninitialize (Windows COM) */

void do_stuff( char* const p );   /* the actual work, factored out */

void formerly_huge_function() {
   char* p = new char[5];
   CoInitialize( NULL );          /* symmetrical with CoUninitialize below */
   do_stuff( p );
   CoUninitialize();
   delete[] p;                    /* pairs with the new above */
}

Now do_stuff can do a lot of work without interfering with the allocation concern. The other symmetrical operations stay together this way, too.

It's all about the guy who's going to maintain your code. It might be you, in a month.

xtofl
Except that initialization and finalization routines are bad, having low cohesion and high coupling. Your point about function length is right on, though.
David Thornley
@David: indeed, symmetrical routines can, in C++, be better solved using RAII. Some languages/libraries/platforms don't have that, though.
xtofl