views:

458

answers:

16

As mentioned in this answer, simply calling the destructor a second time is already undefined behavior per 12.4/14 (3.8).

For example:

class Class {
public:
    ~Class() {}
};
// somewhere in code:
{
    Class* object = new Class();
    object->~Class();
    delete object; // UB because at this point the destructor call is attempted again
}

In this example the class is designed so that its destructor can safely be called multiple times: nothing like a double deletion of owned resources can happen. The memory is still allocated at the point where delete is called, because the first, explicit destructor call does not invoke ::operator delete() to release it.
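
As an aside, here is a minimal sketch of the one situation where an explicit destructor call is the normal idiom - placement new into separately managed storage (the demo() function is my illustration, not part of the original code):

#include <new> // placement form of operator new

class Class {
public:
    ~Class() {}
};

void demo()
{
    void* storage = ::operator new(sizeof(Class)); // raw memory, no object yet
    Class* object = new (storage) Class();         // placement new: construct in place
    object->~Class();           // ends the object's lifetime but releases no memory
    ::operator delete(storage); // the storage is released separately
}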

For example, in Visual C++ 9 the first snippet appears to work. Even the C++ definition of undefined behavior doesn't directly prohibit things qualified as UB from working. So for that code to break, some implementation and/or platform specifics are required.

Why exactly would that code break, and under what conditions?

A: 

By definition, the destructor 'destroys' the object, and destroying an object twice makes no sense.

Your example happens to work, but it is unlikely to work in general.

jab
+2  A: 

The following Class will crash in Windows on my machine if you call the destructor twice:

class Class {
public:
    Class()
    {
        x = new int;
    }
    ~Class()
    {
        delete x;
        x = (int*)0xbaadf00d; // poison the pointer so that a second
                              // delete x operates on a garbage address
    }

    int* x;
};

I can imagine an implementation that would crash even with a trivial destructor. For instance, such an implementation could remove destructed objects from physical memory, and any access to them would then lead to some hardware fault. Visual C++ doesn't look like that sort of implementation, but who knows.

Kirill V. Lyadvinsky
I believe that even without testing: when `delete` is called on the invalid pointer, it will crash. But in my example the destructor is trivial.
sharptooth
that's not due to double-calling a destructor, it's due to double-deleting x
Carson Myers
@Carson Myers: That's not double-deleting x, it's deleting x the first time and deleting 0xbaadf00d the second time.
sharptooth
@sharptooth, updated my answer for trivial destructor.
Kirill V. Lyadvinsky
I suppose, same basic effect though.
Carson Myers
Do you mean that the object will be unmapped from the address space of the program yet memory will not be "freed" until `operator delete` is called? Then can I use `operator new` for raw memory allocation for my purposes?
sharptooth
Can you elaborate on what you mean with "remove destructed objects from physical memory"?
Andreas Brinck
I suppose he meant writing `0xbaadf00d` all over the memory that was previously allocated to this object after the destructor ran, to be sure that it's not used any longer even if not freed.
Matthieu M.
@Andreas Brinck, I meant that the virtual address of the object will not be mapped to any physical address anymore.
Kirill V. Lyadvinsky
+2  A: 

Standard 12.4/14

Once a destructor is invoked for an object, the object no longer exists; the behavior is undefined if the destructor is invoked for an object whose lifetime has ended (3.8).

I think this section refers to invoking the destructor via delete. In other words, the gist of this paragraph is that "deleting an object twice is undefined behavior". So that's why your code example works fine.

Nevertheless, this question is rather academic. Destructors are meant to be invoked via delete (apart from objects allocated via placement new, as sharptooth correctly observed). If you want to share code between a destructor and a second function, simply extract that code into a separate function and call it from your destructor.
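
A minimal sketch of that suggestion (the Connection class and close() are hypothetical names used for illustration):

class Connection {
public:
    Connection() : buffer_(new int[1024]) {}
    ~Connection() { close(); }

    // Idempotent cleanup shared between the destructor and other callers:
    // close() may run any number of times, while the destructor runs exactly once.
    void close()
    {
        delete[] buffer_;
        buffer_ = 0; // deleting a null pointer later is a no-op
    }

private:
    int* buffer_;
};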

Adrian Grigore
That paragraph means exactly what it says, and destructors are often invoked without using delete - either for objects on the stack or via an explicit destructor call.
Joe Gauterin
This is more or less exactly my answer to the original question (linked to from this question). This question is about *why* an implementation would break (to which the answer isn't "because the standard says so").
Andreas Brinck
There are legitimate reasons to explicitly call a destructor, so your last paragraph is meaningless.
Martin York
Actually, if you allocate memory and call placement-new, you'll have to call the destructor explicitly. The question is more about how "the object no longer exists" when the memory is still allocated.
sharptooth
@Joe Gauterin: You are right, I did not think of stack variables. @Joe Gauterin and Martin York: Why would you want to explicitly call a destructor?
Adrian Grigore
@Adrian Grigore: You'll need to explicitly call the destructor if you created object with placement-new.
sharptooth
Good point. I never used placement-new. You live and learn... :-)
Adrian Grigore
A: 

I guess it's been classified as undefined because most double deletes are dangerous and the standards committee didn't want to add an exception to the standard for the relatively few cases where they don't have to be.

As for where your code could break: you might find that it breaks in debug builds on some compilers. Many compilers treat UB as 'do whatever doesn't cost performance for well-defined behaviour' in release builds and 'insert checks to detect bad behaviour' in debug builds.

Joe Gauterin
A: 

Basically, as already pointed out, calling the destructor a second time will fail for any destructor that performs real work.

ChrisBD
+8  A: 

I think your question aims at the rationale behind the standard. Think about it the other way around:

  1. Defining the behavior of calling a destructor twice creates work, possibly a lot of work.
  2. Your example only shows that in some trivial cases it wouldn't be a problem to call the destructor twice. That's true but not very interesting.
  3. You did not give a convincing use-case (and I doubt you can) when calling the destructor twice is in any way a good idea / makes code easier / makes the language more powerful / cleans up semantics / or anything else.

So why again should this not cause undefined behavior?

Sebastian
Actually I asked why that code could possibly break.
sharptooth
@sharptooth: how is that relevant though? The rationale for the standard is not "we can imagine an implementation where this would break", but simply "we're making everyone's lives easier, and reducing the scope for programmer error, by telling you to write consistent code".
jalf
A: 

The reason is that your class might be, for example, a reference-counted smart pointer. Its destructor decrements the reference counter, and once that counter hits 0 the actual object is cleaned up.

But if you call the destructor twice, the count will be corrupted.
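
A hedged sketch of that failure mode (Payload and SharedInt are illustrative names, not a real smart-pointer API):

struct Payload {
    int refs;
    Payload() : refs(1) {}
};

class SharedInt {
public:
    explicit SharedInt(Payload* p) : p_(p) {}
    ~SharedInt()
    {
        if (--p_->refs == 0) // a second destructor call decrements a counter
            delete p_;       // inside an already-deleted Payload: the count is
    }                        // garbage and the object may be freed twice
private:
    Payload* p_;
};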

Same idea for other situations too. Maybe the destructor writes 0s to a piece of memory and then deallocates it (so you don't accidentally leave a user's password in memory). If you try to write to that memory again - after it has been deallocated - you will get an access violation.

It just makes sense for objects to be constructed once and destructed once.

ProgramMax
A: 

It's undefined behavior because the standard made it clear what a destructor is used for, and didn't decide what should happen if you use it incorrectly. Undefined behavior doesn't necessarily mean "crashy smashy," it just means the standard didn't define it, so it's left up to the implementation.

While I'm not too fluent in C++, my gut tells me that the implementation is welcome to either treat the destructor as just another member function, or to actually destroy the object when the destructor is called. So it might break in some implementations but maybe it won't in others. Who knows, it's undefined (look out for demons flying out your nose if you try).

Carson Myers
An object's destructor NEVER destroys that object -- it merely cleans it up before its memory is reclaimed by other means (for example via `operator delete` if it was a dynamically allocated object).
FredOverflow
+2  A: 

Destructors are not regular functions. Calling one doesn't call one function, it calls many functions. It's the magic of destructors. While you have provided a trivial destructor with the sole intent of making it hard to show how it might break, you have failed to demonstrate what the other functions that get called do. And neither does the standard. It's in those functions that things can potentially fall apart.

As a trivial example, let's say the compiler inserts code to track object lifetimes for debugging purposes. The constructor [which is also a magic function that does all sorts of things you didn't ask it to] stores some data somewhere that says "Here I am." Before the destructor is called, it changes that data to say "There I go." After the destructor is called, it gets rid of the information it used to find that data. So the next time you call the destructor, you end up with an access violation.

You could probably also come up with examples that involve virtual tables, but your sample code didn't include any virtual functions so that would be cheating.

Dennis Zickefoose
+4  A: 

When you use the facilities of C++ to create and destroy your objects, you agree to use its object model, however it's implemented.

Some implementations may be more sensitive than others. For example, an interactive interpreted environment or a debugger might try harder to be introspective. That might even include specifically alerting you to double destruction.

Some objects are more complicated than others. For example, virtual destructors with virtual base classes can be a bit hairy. The dynamic type of an object changes over the execution of a sequence of virtual destructors, if I recall correctly. That could easily lead to invalid state at the end.
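
A short sketch of that dynamic-type behavior (illustrative only; it prints "Derived" and then "Base"):

#include <iostream>

struct Base {
    virtual ~Base() { id(); }  // by now the dynamic type has reverted to Base,
                               // so this virtual call prints "Base"
    virtual void id() const { std::cout << "Base\n"; }
};

struct Derived : Base {
    ~Derived() { id(); }       // dynamic type is still Derived here: prints "Derived"
    void id() const { std::cout << "Derived\n"; }
};

int main()
{
    Base* p = new Derived;
    delete p; // runs ~Derived, then ~Base
}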

It's easy enough to declare properly named functions to use instead of abusing the constructor and destructor. Object-oriented straight C is still possible in C++, and may be the right tool for some job… in any case, the destructor isn't the right construct for every destruction-related task.

Potatoswatter
I had added an answer that touches some of the same terms. You do recall correctly: the dynamic type of the object changes from the most derived class to the root of the hierarchy during the execution of the destructor sequence.
David Rodríguez - dribeas
+1 for destructors. In GCC destructors indeed sometimes rewrite vcall offsets and pointers to vtables; this leads to a broken state at the end. The destroyed object then looks like it was disassembled into small pieces and can no longer behave as a whole.
Pavel Shved
+6  A: 

The reason for the formulation in the standard is most probably that anything else would be vastly more complicated: the standard would have to define exactly when double destruction is permissible (or the other way round) – i.e. permissible either for a trivial destructor or for a destructor whose side effects can be discarded.

On the other hand, there's no benefit to defining this behaviour. In practice, you cannot profit from it because you can't know in general whether a class destructor fits the above criteria. No general-purpose code could rely on this, and it would be very easy to introduce bugs that way. And finally, how does it help? It just makes it possible to write sloppy code that doesn't track the lifetime of its objects – under-specified code, in other words. Why should the standard support this?


Update: Will existing compilers/runtimes break your particular code? Probably not – unless they have special run-time checks to prevent illegal access (to protect against what looks like malicious code, or simply as leak protection).

Konrad Rudolph
I understand that the Standard doesn't want to support that and names it UB. But under what conditions would that code with a trivial destructor break?
sharptooth
@sharptooth: See update. Notice that I can *easily* imagine such run-time checks. Code analysis tools (like Valgrind) will probably complain, too (if you count that as “break” – I do).
Konrad Rudolph
I see. How could such checks help against malicious code?
sharptooth
@sharptooth: It probably doesn’t. But double delete is (per the specs) an illegal memory access and there may be a blanket check for such accesses in place, since other illegal memory accesses *can* enable malicious code.
Konrad Rudolph
A: 

The reason is that, in the absence of that rule, your programs would become less strict. Being more strict - even when it's not enforced at compile time - is good, because in return you gain more predictability in how a program will behave. This is especially important when the source code of the classes involved is not under your control.

A lot of concepts - RAII, smart pointers, plain generic allocation and freeing of memory - rely on this rule. The number of times the destructor will be called (exactly one) is essential for them. So the documentation for such things usually promises: "Use our classes according to the C++ language rules, and they will work correctly!"

If there weren't such a rule, it would have to read "Use our classes according to the C++ language rules - and, yes, don't call the destructor twice - and then they will work correctly." A lot of specifications would have to sound that way. The concept is just too important for the language to leave it out of the standard document.

This is the reason. Not anything related to binary internals (which are described in Potatoswatter's answer).

Pavel Shved
RAII, smart pointers, and the like, can all be implemented in an environment where destructors have well defined behavior when called twice. It would simply require additional work when implementing them.
Dennis Zickefoose
@Dennis, while implementing them -- and while implementing a whole load of other classes. That's why the rule is there -- it's convenient, fruitful, and saves you from unnecessary work!
Pavel Shved
+5  A: 

The object no longer exists after you call the destructor.

So if you call it again, you're calling a method on an object that doesn't exist.

Why would this ever be defined behavior? The compiler may choose to zero out the memory of an object which has been destructed, for debugging/security/some reason, or recycle its memory with another object as an optimisation, or whatever. The implementation can do as it pleases. Calling the destructor again is essentially calling a method on arbitrary raw memory - a Bad Idea (tm).

AshleysBrain
A: 

It is undefined because, if it weren't, every implementation would have to track via some metadata whether an object is still alive. You would have to pay that cost for every single object, which goes against basic C++ design rules (you don't pay for what you don't use).

FredOverflow
+1  A: 

Since what you're really asking for is a plausible implementation in which your code would fail, suppose that your implementation provides a helpful debugging mode in which it tracks all memory allocations and all calls to constructors and destructors. So after the explicit destructor call, it sets a flag to say that the object has been destroyed. delete checks this flag and halts the program when it detects evidence of a bug in your code.
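
A toy model of such a debugging mode, assuming nothing about any real implementation (debug_destroy is a made-up helper):

#include <cassert>
#include <set>

std::set<void*> destroyed; // addresses whose objects were already destroyed

template <class T>
void debug_destroy(T* p)
{
    // halt on the second destruction of the same object
    assert(destroyed.insert(p).second && "destructor called twice");
    p->~T();
}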

To make your code "work" as you intended, this debugging implementation would have to special-case your do-nothing destructor, and skip setting that flag. That is, it would have to assume that you're deliberately destroying twice because (you think) the destructor does nothing, as opposed to assuming that you're accidentally destroying twice, but failed to spot the bug because the destructor happens to do nothing. Either you're careless or you're a rebel, and there's more mileage in debug implementations helping out people who are careless than there is in pandering to rebels ;-)

Steve Jessop
+1  A: 

One important example of an implementation which could break:

A conforming C++ implementation can support garbage collection; this has been a longstanding design goal. A GC may assume that an object can be collected immediately after its destructor runs, so each destructor call updates the GC's internal bookkeeping. The second time the destructor is called for the same pointer, those GC data structures may very well become corrupted.

MSalters