I know that delete[] causes the destruction of all array elements and then releases the memory.

I initially thought the compiler needs it just so it can call the destructor for every element in the array, but I also have a counter-argument for that, which is:

The heap allocator must already know the number of bytes allocated, so using sizeof(Type) it should be possible to find the number of elements and call the appropriate number of destructors for an array, preventing resource leaks.

Is my assumption correct or not? Please clear up my doubt on this.

In short: why is the [] needed in delete[]?

+23  A: 

Scott Meyers says in his Effective C++ book: Item 5: Use the same form in corresponding uses of new and delete.

The big question for delete is this: how many objects reside in the memory being deleted? The answer to that determines how many destructors must be called.

Does the pointer being deleted point to a single object or to an array of objects? The only way for delete to know is for you to tell it. If you don't use brackets in your use of delete, delete assumes a single object is pointed to.

Also, the memory allocator might allocate more space than required to store your objects, in which case dividing the size of the returned memory block by the size of each object won't work.

Depending on the platform, the _msize (windows), malloc_usable_size (linux) or malloc_size (osx) functions will tell you the real length of the block that was allocated. This information can be exploited when designing growing containers.

Another reason why it won't work is that Foo* foo = new Foo[10] calls operator new[] to allocate the memory, and delete [] foo; then calls operator delete[] to deallocate it. As those operators can be overloaded, you have to adhere to the convention; otherwise delete foo; calls operator delete, which may have an implementation incompatible with that of operator delete []. It's a matter of semantics, not just of keeping track of the number of allocated objects in order to later issue the right number of destructor calls.

See also:

[16.14] After p = new Fred[n], how does the compiler know there are n objects to be destructed during delete[] p?

Short answer: Magic.

Long answer: The run-time system stores the number of objects, n, somewhere where it can be retrieved if you only know the pointer, p. There are two popular techniques that do this. Both these techniques are in use by commercial-grade compilers, both have tradeoffs, and neither is perfect. These techniques are:


EDIT: after having read @AndreyT comments, I dug into my copy of Stroustrup's "The Design and Evolution of C++" and excerpted the following:

How do we ensure that an array is correctly deleted? In particular, how do we ensure that the destructor is called for all elements of an array?

...

Plain delete isn't required to handle both individual objects and arrays. This avoids complicating the common case of allocating and deallocating individual objects. It also avoids encumbering individual objects with information necessary for array deallocation.

An intermediate version of delete[] required the programmer to specify the number of elements of the array.

...

That proved too error prone, so the burden of keeping track of the number of elements was placed on the implementation instead.

As @Marcus mentioned, the rationale may have been "you don't pay for what you don't use".


EDIT2:

In "The C++ Programming Language, 3rd edition", §10.4.7, Bjarne Stroustrup writes:

Exactly how arrays and individual objects are allocated is implementation-dependent. Therefore, different implementations will react differently to incorrect uses of the delete and delete[] operators. In simple and uninteresting cases like the previous one, a compiler can detect the problem, but generally something nasty will happen at run time.

The special destruction operator for arrays, delete[], isn’t logically necessary. However, suppose the implementation of the free store had been required to hold sufficient information for every object to tell if it was an individual or an array. The user could have been relieved of a burden, but that obligation would have imposed significant time and space overheads on some C++ implementations.

Gregory Pakosz
AAT
Scott's *explanation* is not really an explanation at all. It is a mere assertion of the fact, cunningly passed off as an "explanation". He says that the only way to know whether it is an array or a single object is to ask the user. This is, of course, incorrect. It is perfectly possible to store that information in `new[]` and then retrieve it in `delete`, just like it is done now with the element count. The decision was made against it, because it would overly complicate the "household" information structure and the branching in `delete`. There's no "beautiful" explanation; it was simply *decided* that way.
AndreyT
@Jason: No, Scott's explanation is utterly bogus. Again, how do you know how many elements are in the array when you do `delete[]`? Huh? Do you have to tell it to `delete[]` yourself? No. `delete[]` "knows" it because it uses household information prepared by `new[]`. In exactly the same way we could force `new`/`new[]` to store additional household information of an "array or not" nature. It is easy and obvious. That's the answer to your question.
AndreyT
@AndreyT: Sounds more like a don't-pay-for-what-you-don't-use decision rather than an arbitrary one.
Marcus Lindblom
The real reason for the difference is given in my answer. It is more of a design decision. And a wise design decision, I agree. As for Scott's "the only way to know ...": as an "explanation" it is utter nonsense. Actually, I'm pretty sure Scott intended it to be a *description* of the current state of affairs, not an *explanation* of the rationale behind it. Yes, some people keep mistaking it for an explanation.
AndreyT
@Marcus: Of course it wasn't arbitrary. It is just the rationale given in this answer is fake.
AndreyT
@Jason: `new` vs. `new[]` could tag the allocated block, or simply always store the count. What is correct is that the compiler can't distinguish at destruction time without additional information. Still, Scott's explanation *is* an explanation if you include the "you only pay for what you use" design principle of C++.
peterchen
@AndreyT: thanks for making me look even deeper at this
Gregory Pakosz
@peterchen and @Gregory: If "you don't pay for what you don't use" were the answer, then we'd still have to supply the array size to `delete[]` explicitly. As Stroustrup clearly states in his book, following the "you don't pay..." principle is in many cases too error-prone, which is why `delete[]` is an example of a major deviation from that principle: you *do* pay for the element count, even though in 99 cases out of 100 you'll store it yourself as well (so almost always the count is actually stored *twice*).
AndreyT
No, the real reason for the separation of `delete` and `delete[]` was to separate out the exclusively polymorphic properties of `delete`. I remember it was stated more than once in `comp.*.c++` before, but I don't have the link. Stroustrup, in the above quote, doesn't even try to answer our question. What you quoted is the answer to "why does `new[]` store the number of elements instead of expecting the user to supply it to `delete[]`?", not to the question of why `delete` and `delete[]` are separate.
AndreyT
please, if you can find links back to comp.*.c++ or comp.*.c++.moderated, I'm eager to learn more
Gregory Pakosz
I edited with another citation from Bjarne Stroustrup
Gregory Pakosz
+3  A: 

The heap itself knows the size of an allocated block; you only need the address. Look at how free() works: you pass only the address, and it frees the memory.

The difference between delete (delete[]) and free() is that the former two first call the destructors, then free the memory (possibly using free()). The problem is that delete[] also takes only one argument, the address, and from that address alone it needs to know the number of objects to run destructors on. So new[] uses some implementation-defined way of recording the number of elements, usually by prepending the array with the element count. delete[] then relies on that implementation-specific data to run the destructors and free the memory (again, using only the block address).

sharptooth
A: 

This is more complicated.

The keyword and the convention of using it to delete an array were invented for the convenience of implementations, and some implementations do use it (I don't know which, though; MS VC++ does not).

The convenience is this:

In all other cases, you know the exact size to be freed by other means. When you delete a single object, you know its size from compile-time sizeof(). When you delete a polymorphic object through a base pointer and you have a virtual destructor, you can get the size from a separate entry in the vtbl. But if you delete an array, how would you know the size of the memory to be freed, unless you tracked it separately?

The special syntax allows tracking such a size only for arrays, for instance by storing it just before the address that is returned to the user. This costs additional resources and is not needed for non-arrays.

Pavel Radzivilovsky
It's about tracking the number of elements that need their destructors running, not the amount of memory allocated.
Joe Gauterin
+1  A: 

delete[] simply calls a different implementation (function).

There's no reason an allocator couldn't track it (in fact, it would be easy enough to write your own).

I don't know why it wasn't handled automatically, or the history of the implementation, but if I were to guess: many of these 'well, why wasn't this slightly simpler?' questions (in C++) come down to one or both of:

  1. compatibility with C
  2. performance

In this case, performance. Using delete vs delete[] is easy enough, and I believe it could all be abstracted away from the programmer and still be reasonably fast (for general use). delete[] requires only a few additional function calls and operations (omitting the destructor calls), but that cost would be paid on every call to delete, and it's unnecessary because the programmer generally knows the type he/she is dealing with (if not, there's likely a bigger problem at hand). So it simply avoids calling through the allocator. Additionally, single allocations may not need to be tracked by the allocator in as much detail: treating every allocation as an array would require an additional count entry even for trivial allocations. It's several layers of allocator implementation simplification, which really matters to many people, considering that this is a very low-level domain.

Justin
+5  A: 

The main reason why it was decided to keep delete and delete[] separate is that these two entities are not as similar as it might seem at first sight. To a naive observer they might seem to be almost the same: just destruct and deallocate, with the only difference being the potential number of objects to process. In reality, the difference is much more significant.

The most important difference between the two is that delete might perform polymorphic deletion of objects, i.e. the static type of the object in question might be different from its dynamic type. delete[] on the other hand must deal with strictly non-polymorphic deletion of arrays. So, internally these two entities implement logic that is significantly different and non-intersecting between the two. Because of the possibility of polymorphic deletion, the functionality of delete is not even remotely the same as the functionality of delete[] on an array of 1 element, as a naive observer might incorrectly assume initially.

Contrary to the strange claims made in some other answers, it is, of course, perfectly possible to replace delete and delete[] with just a single construct that would branch at the very early stage, i.e. it would determine the type of the memory block (array or not) using the household information that would be stored by new/new[], and then jump to the appropriate functionality, equivalent to either delete or delete[]. However, this would be a rather poor design decision, since, once again, the functionality of the two is too different. Forcing both into a single construct would be akin to creating a Swiss Army Knife of a deallocation function. Also, in order to be able to tell an array from a non-array we'd have to introduce an additional piece of household information even into a single-object memory allocations (done with plain new). This might easily result in notable memory overhead in single object allocations.

But, once again, the main reason here is the functional difference between delete and delete[]. These language entities possess only apparent skin-deep similarity that exists only at the level of naive specification ("destruct and free memory"), but once one gets to understand in detail what these entities really have to do one realizes that they are too different to be merged into one.

P.S. This is BTW one of the problems with the suggestion about sizeof(type) you made in the question. Because of the potentially polymorphic nature of delete, you don't know the type in delete, which is why you can't obtain any sizeof(type). There are more problems with this idea, but that one is already enough to explain why it won't fly.

AndreyT