First of all, using delete for anything allocated with new[] is undefined behaviour according to the C++ standard.

In Visual C++ 7 such pairing can lead to one of two consequences.

If the type being new[]'ed has a trivial constructor and destructor, VC++ simply uses new instead of new[], and using delete for that block works fine: new just calls "allocate memory" and delete just calls "free memory".

If the type being new[]'ed has a non-trivial constructor or destructor, the above trick can't be done: VC++7 has to invoke exactly the right number of destructors. So it prepends the array with a size_t storing the number of elements. Now the address returned by new[] points at the first element, not at the beginning of the block. If delete is used, it calls the destructor for the first element only and then calls "free memory" with an address different from the one returned by "allocate memory"; this leads to an error indication inside HeapFree(), which I suspect refers to heap corruption.
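For illustration, here is a sketch of roughly what a compiler using that scheme might emit for new T[n] and delete[] p. The helper names are invented, and this is not the actual VC++ runtime code; it only mirrors the mechanism described above:

    #include <cstddef>
    #include <new>

    struct T { ~T() {} };  // a non-trivial destructor forces the count prefix

    // Roughly what "new T[n]" expands to under this scheme:
    T* array_new(std::size_t n) {
        void* block = operator new(sizeof(std::size_t) + n * sizeof(T));
        *static_cast<std::size_t*>(block) = n;                 // store element count
        T* first = reinterpret_cast<T*>(
            static_cast<char*>(block) + sizeof(std::size_t));  // skip the prefix
        for (std::size_t i = 0; i < n; ++i)
            new (first + i) T;                                 // construct each element
        return first;  // not the address that "allocate memory" returned
    }

    // Roughly what "delete[] p" expands to:
    void array_delete(T* first) {
        char* block = reinterpret_cast<char*>(first) - sizeof(std::size_t);
        std::size_t n = *reinterpret_cast<std::size_t*>(block);
        while (n-- > 0)
            first[n].~T();       // destroy in reverse order
        operator delete(block);  // free the address originally allocated
    }

A plain delete first would instead run a single destructor and pass first, not block, to "free memory"; that mismatch is what HeapFree() complains about.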

Yet here and there one can read false statements that using delete after new[] leads to a memory leak. I suspect that anything on the scale of heap corruption is much more important than the fact that the destructor is called for the first element only (and that the destructors that were not called possibly didn't free heap-allocated sub-objects).

How could using delete after new[] possibly lead only to a memory leak on some C++ implementation?

+3  A: 

If the non-trivial destructors, which are not called for any element but the first, were supposed to free some memory, you get a memory leak, as those objects are not cleaned up properly.

Martin Liversage
+2  A: 

Apart from resulting in undefined behavior, the most straightforward cause of leaks lies in the implementation not calling the destructor for any but the first object in the array. This will obviously result in leaks if the objects have allocated resources.

This is the simplest possible class I could think of resulting in this behaviour:

    struct A {
        char* ch;
        A() : ch(new char) {}
        ~A() { delete ch; }
    };

    A* as = new A[10]; // ten times the A::ch pointer is allocated

    delete as; // only one of the A::ch pointers is freed.

PS: note that destructors fail to get called as a result of lots of other programming mistakes, too: non-virtual base class destructors, false reliance on smart pointers, ...

xtofl
@Suma: the problem I tried to show here is how only the destructor of the first object is called, resulting in 9 leaked blocks containing 1 `char`. You are right about the array of `A` elements, but that wasn't the question.
xtofl
@Suma: no harm in pointing out that the explanation _was_ a little hidden. Thanks for being critical, we _need_ that!
xtofl
+2  A: 

It will lead to a leak in ALL implementations of C++ in any case where the destructor frees memory, because the destructors are never called for all of the elements.

In some cases it can cause much worse errors.

Charles Eli Cheese
+15  A: 

Suppose I'm a C++ compiler, and I implement my memory management like this: I prepend every block of reserved memory with the size of that memory, in bytes. Something like this:

| size | data ... |
         ^
         pointer returned by new and new[]

Note that, in terms of memory allocation, there is no difference between new and new[]: both just allocate a block of memory of a certain size.

Now how will delete[] know the size of the array, in order to call the right number of destructors? Simply divide the size of the memory block by sizeof(T), where T is the type of elements of the array.

Now suppose I implement delete as simply one call to the destructor, followed by the freeing of size bytes. Then the destructors of the subsequent elements will never be called, and the resources they allocated are leaked. Yet, because I do free size bytes (not sizeof(T) bytes), no heap corruption occurs.
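A minimal sketch of that byte-size-prefix scheme (the function names are invented for illustration; real allocators are more involved):

    #include <cstddef>
    #include <cstdlib>

    // Every block is prefixed with its size in bytes: | size | data ... |
    void* my_allocate(std::size_t bytes) {
        std::size_t* block =
            static_cast<std::size_t*>(std::malloc(sizeof(std::size_t) + bytes));
        *block = bytes;    // record the data size
        return block + 1;  // hand out a pointer to the data part
    }

    void my_free(void* data) {
        std::free(static_cast<std::size_t*>(data) - 1);  // step back to the prefix
    }

    // delete[] can recover the element count without a separate cookie:
    template <typename T>
    std::size_t element_count(void* data) {
        return *(static_cast<std::size_t*>(data) - 1) / sizeof(T);
    }

Under this scheme delete and delete[] hand the same prefixed block back to the allocator, so a mismatched delete still frees the whole allocation; it merely runs only the first destructor.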

Thomas
Thumbs up. As you just said, the OP is assuming new and new[] are handled differently, but this may not be the case. "new" may just be "new[]" with a size_t prepended w/ a value of 1.
Merlyn Morgan-Graham
I actually meant `size` to indicate the number of bytes, not elements. Something that a function like `malloc` could do. I'll edit my post a bit to make this explicit.
Thomas
That is only if that memory management technique is used. But then you have an overhead of extra bytes to hold the size, an increase of 100% for small objects. Yes, we could pay that cost if we wanted to compensate for bad programmers, but I don't want to pay that price just to support 'sharptooth', so I would prefer that the memory management be very efficient (even for small types). As a result, the standard does not require it, and most implementations do not prepend the size for new in release builds, though some do in debug builds just to help with debugging/profiling.
Martin York
@Thomas: Yes, but this is a very artificial, forced and never-used-in-practice approach to implementing memory management. It certainly cannot serve as an explanation of how the popular "memory leak" legend came to be.
AndreyT
+2  A: 

A memory leak might happen if operator new is overridden but operator new[] is not. The same goes for operator delete / operator delete[].
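A sketch of such a mismatch, assuming a class that customizes only the scalar forms (the printing is only there to make the routing visible):

    #include <cstdio>
    #include <cstdlib>

    struct Pooled {
        static void* operator new(std::size_t n) {
            std::printf("class-specific new\n");
            return std::malloc(n);
        }
        static void operator delete(void* p) {
            std::printf("class-specific delete\n");
            std::free(p);
        }
        // No operator new[] / operator delete[]: array new-expressions
        // fall back to the global ::operator new[] / ::operator delete[].
    };

    int main() {
        Pooled* one = new Pooled;     // prints "class-specific new"
        delete one;                   // prints "class-specific delete"

        Pooled* many = new Pooled[4]; // global ::operator new[], no output
        delete[] many;                // global ::operator delete[], no output
        // A mismatched "delete many;" would route a globally allocated
        // block into the class's operator delete: the custom allocator
        // never saw that block, so its bookkeeping can leak it (or worse).
    }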

YeenFei
+2  A: 

It seems that your question is really "why doesn't heap corruption happen?". The answer to that one is "because the heap manager keeps track of allocated block sizes". Let's go back to C for a minute: if you want to allocate a single int in C you would do int* p = malloc(sizeof(int)); if you want to allocate an array of size n you can write either int* p = malloc(n*sizeof(int)) or int* p = calloc(n, sizeof(int)). But in any case you'll free it with free(p), no matter how you allocated it. You never pass a size to free(); free() just "knows" how much to free, because the size of a malloc()-ed block is saved somewhere "in front" of the block.

Back to C++: new/delete and new[]/delete[] are usually implemented in terms of malloc (although they don't have to be, so you shouldn't rely on that). This is why the new[]/delete combination doesn't corrupt the heap: delete will free the right amount of memory. But, as explained by everyone before me, you can get leaks by not calling the right number of destructors.
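The C pattern described above, as a compact sketch (written as C++ so it compiles alongside the other examples here):

    #include <cstdlib>

    int main() {
        int* p = static_cast<int*>(std::malloc(sizeof(int)));      // one int
        int* q = static_cast<int*>(std::calloc(10, sizeof(int)));  // ten ints
        std::free(p);  // no size argument: the allocator recorded the
        std::free(q);  // block size "in front" and recovers it here
    }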

That said, reasoning about undefined behavior in C++ is always a pointless exercise. Why does it matter whether the new[]/delete combination happens to work, "only" leaks, or corrupts the heap? You shouldn't code like that, period! And, in practice, I would avoid manual memory management whenever possible: STL & boost are there for a reason.

sbk
+5  A: 

The fairy tale about mixing new[] and delete allegedly causing a memory leak is just that: a fairy tale. It has absolutely no footing in reality. I don't know where it came from, but by now it has acquired a life of its own and survives like a virus, propagating by word of mouth from one beginner to another.

The most likely rationale behind this "memory leak" nonsense is that, from an innocently naive point of view, the difference between delete and delete[] is that delete is used to destroy just one object, while delete[] destroys an array of objects ("many" objects). The naive conclusion usually derived from this is that the first element of the array will be destroyed by delete, while the rest will persist, thus creating the alleged "memory leak". Of course, any programmer with at least a basic understanding of typical heap implementations would immediately see that the most likely consequence of that is heap corruption, not a "memory leak".

Another popular explanation for the naive "memory leak" theory is that, since the wrong number of destructors gets called, the secondary memory owned by the objects in the array does not get deallocated. This might be true, but it is obviously a very forced explanation, which bears little relevance in the face of the much more serious problem of heap corruption.

In short, mixing different allocation functions is one of those errors that lead to solid, unpredictable and very practical undefined behavior. Any attempt to impose concrete limits on the manifestations of this undefined behavior is a waste of time and a sure sign of a lack of basic understanding.

Needless to add, new/delete and new[]/delete[] are in fact two independent memory management mechanisms, which are independently customizable. Once they get customized (by replacing raw memory management functions) there's absolutely no way to even begin to predict what might happen if they get mixed.

AndreyT
+1  A: 

Late for an answer, but...

If your delete mechanism is simply to call the destructor and then put the freed pointer, together with the size implied by sizeof, onto a free stack, then calling delete on a chunk of memory allocated with new[] will result in memory being lost, but not corrupted. More sophisticated malloc structures could corrupt on, or detect, this behaviour.
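A minimal sketch of such a free-stack scheme, with invented names, assuming the caller always supplies the size via sizeof:

    #include <cstdlib>
    #include <map>
    #include <vector>

    // One stack of recycled blocks per request size.
    std::map<std::size_t, std::vector<void*>> free_stacks;

    void* my_alloc(std::size_t size) {
        auto& stack = free_stacks[size];
        if (!stack.empty()) {
            void* p = stack.back();
            stack.pop_back();
            return p;  // reuse a previously freed block of this size
        }
        return std::malloc(size);
    }

    // What "delete p" does under this scheme: run one destructor, then
    // recycle the block under the static size sizeof(T).
    template <typename T>
    void scalar_delete(T* p) {
        p->~T();
        free_stacks[sizeof(T)].push_back(p);
    }

If p actually came from new T[10], a block of 10*sizeof(T) bytes gets recycled as a sizeof(T) chunk: nine elements' worth of memory is never handed out again (a leak), but no allocator bookkeeping is overwritten, so nothing is corrupted.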

chrispy