Reading through an old C++ Journal I had, I noticed something.

One of the articles asserted that

Foo *f = new Foo();

was nearly unacceptable professional C++ code by and large, and an automatic memory management solution was appropriate.

Is this so?

edit: rephrased: Is direct memory management unacceptable for new C++ code, in general? Should auto_ptr (or the other management wrappers) be used for most new code?

A: 

First of all, I believe it should be Foo *f = new Foo();

And the reason I don't like using that syntax is that it is easy to forget to add a delete at the end of the code and leave your memory a-leakin'.

zipcodeman
What other syntax would you use to accomplish the same thing?
Joe Philllips
Not exactly the same thing, but I usually use `Foo f = Foo();`
zipcodeman
Ooops. Forgot the parens. That's not what I was referring to. :)
Paul Nathan
Well, then I guess I misunderstood what you were asking.
zipcodeman
I was asking - "direct memory management or smart pointer for new, professional-quality C++ code"
Paul Nathan
`new Foo` is correct -- there is no need for parens if you are invoking the default constructor. (But yes, I would avoid the new/delete.)
Jon Reid
+6  A: 

With some kind of smart pointer scheme you can get automatic memory management, reference counting, etc., with only a small amount of overhead. You pay for that (in memory or performance), but it may be worth it not to have to worry about it all the time.
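
A minimal sketch of what such a scheme looks like with boost::shared_ptr (std::tr1::shared_ptr behaves the same way); the Foo type and example() function here are just stand-ins for illustration:

#include <boost/shared_ptr.hpp>

struct Foo { void bar() {} };

void example()
{
    boost::shared_ptr<Foo> p(new Foo);   // reference count is 1
    {
        boost::shared_ptr<Foo> q = p;    // count is 2; p and q share ownership
        q->bar();
    }                                    // q destroyed, count drops back to 1
}                                        // p destroyed, count hits 0, Foo is deleted

The overhead mentioned above is essentially the shared count plus the bookkeeping done on every copy and destruction of the pointer.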

John at CashCommons
What's the deal with assuming that "automatic memory management" can *only* refer to reference counting?
jalf
@jalf: In C++ smart pointers, "automatic memory management" usually means reference counting. Anything else would require infrastructure.
David Thornley
Not true. `std::vector<int> vec;` is automatic memory management. All the memory it allocates is cleaned up *automatically*. Think of the `auto` keyword (in its current, pre-C++0x meaning). Why do you think it is called *auto* of all things? Because it *automatically* manages the object's lifetime. All local variables and class members are automatically managed *to begin with*. Automatic memory management is the default. It's only when we jump through hoops and call `new` that we lose it, and have to explicitly create the infrastructure to get it back.
jalf
By the way, how can you do reference counting without infrastructure? As far as I'm aware, it requires significantly more infrastructure than simply letting scope determine lifetime.
jalf
jalf: I didn't mean to imply that "automatic memory management" can only refer to reference counting.
John at CashCommons
+5  A: 

No.

There are very good reasons not to use automatic memory management systems in certain cases. These can be performance, complexity of data structures due to cyclical referencing, etc.

However, I recommend only using a raw pointer with new/malloc if you have a good reason not to use something smarter. Seeing unprotected allocations scares me and makes me hope the coder knows what they're doing.

Some kind of smart pointer class like boost::shared_ptr or boost::scoped_ptr would be a good start. (These will be part of the C++0x standard, so don't be scared of them ;) )
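
A small sketch of the cyclical-referencing problem mentioned above and how boost::weak_ptr sidesteps it; the Node type and example() function are made up for illustration:

#include <boost/shared_ptr.hpp>
#include <boost/weak_ptr.hpp>

struct Node
{
    boost::shared_ptr<Node> next;   // owning forward link
    boost::weak_ptr<Node>   prev;   // non-owning back link; a shared_ptr here
                                    // would create a cycle and leak both nodes
};

void example()
{
    boost::shared_ptr<Node> a(new Node);
    boost::shared_ptr<Node> b(new Node);
    a->next = b;
    b->prev = a;    // no cycle of strong references
}                   // both nodes are destroyed here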

Michael Anderson
Are you assuming that "automatic memory management" only includes reference counting? How about the very simplest one: "Deallocate in the destructor when the object goes out of scope"? That's effectively what `scoped_ptr` does already. How is that not "automatic memory management"?
jalf
"That's effectively what scoped_ptr does already"That's effectively what normal variables do already.
tstenner
I'm saying that allocating objects on the heap without some explicit control over the lifetime of those objects is error prone and usually a bad idea. I'm not assuming anything in particular about the type of automatic memory management used, just saying that smart pointers are an easy place to start. A bigger list would include reference counting schemes, RAII schemes, object pools, etc.
Michael Anderson
+17  A: 

This example is very Java-like.
In C++ we only use dynamic memory management when it is required.
A better alternative is just to declare a local variable.

{
    Foo    f;

    // use f

} // f goes out of scope and is immediately destroyed here.

If you must use dynamic memory then use a smart pointer.

{
    std::auto_ptr<Foo>    f(new Foo);  // the smart pointer f owns the pointer.
                                       // At some point f may give up ownership to another
                                       // object. If not, then f will automatically delete
                                       // the pointer when it goes out of scope.

}

There are a whole bunch of smart pointers provided in std:: and boost:: (now some are in std::tr1); pick the appropriate one and use it to manage the lifespan of your object.

See http://stackoverflow.com/questions/94227/smart-pointers-or-who-owns-you-baby

Technically you can use new/delete to do memory management.
But in real C++ code it is almost never done. There is nearly always a better alternative to doing memory management by hand.

A simple example is std::vector. Under the covers it uses new and delete, but you would never be able to tell from the outside. This is completely transparent to the user of the class. All the user knows is that the vector will take ownership of the object and that it will be destroyed when the vector is destroyed.
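
To make that concrete, a tiny sketch (the example() function is just for illustration) of the vector doing all of the heap work for you:

#include <string>
#include <vector>

void example()
{
    std::vector<std::string> names;   // allocates its storage on the heap internally
    names.push_back("alpha");
    names.push_back("beta");          // may reallocate; entirely the vector's problem
}                                     // vector destroyed: elements and heap storage
                                      // are released with no delete in user code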

Martin York
+1  A: 

In general, no, but the general case is not the common case, which is why automatic schemes like RAII were invented in the first place.

From an answer I wrote to another question:

The job of a programmer is to express things elegantly in his language of choice.

C++ has very nice semantics for construction and destruction of objects on the stack. If a resource can be allocated for the duration of a scope block, then a good programmer will probably take that path of least resistance. The object's lifetime is delimited by braces which are probably already there anyway.

If there's no good way to put the object directly on the stack, maybe it can be put inside another object as a member. Now its lifetime is a little longer, but C++ still does a lot automatically. The object's lifetime is delimited by a parent object; the problem has been delegated.

There might not be one parent, though. The next best thing is a sequence of adoptive parents. This is what auto_ptr is for. Still pretty good, because the programmer should know what particular parent is the owner. The object's lifetime is delimited by the lifetime of its sequence of owners. One step down the chain in determinism and per se elegance is shared_ptr: lifetime delimited by the union of a pool of owners.
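
A small sketch of that "sequence of adoptive parents" with std::auto_ptr; the adopt() function, example() function, and Foo type are made up for illustration:

#include <memory>

struct Foo {};

// Taking the auto_ptr by value makes the ownership transfer explicit.
void adopt(std::auto_ptr<Foo> child)
{
    // child is now the sole owner; Foo is deleted when child goes out of scope
}

void example()
{
    std::auto_ptr<Foo> parent(new Foo);
    adopt(parent);          // ownership passes to the next adoptive parent
    // parent.get() is now 0; it no longer owns the Foo
}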

But maybe this resource isn't concurrent with any other object, set of objects, or control flow in the system. It's created upon some event happening and destroyed upon another event. Although there are a lot of tools for delimiting lifetimes by delegations and other lifetimes, they aren't sufficient for computing any arbitrary function. So the programmer might decide to write a function of several variables to determine whether an object is coming into existence or disappearing, and call new and delete.

Finally, writing functions can be hard. Maybe the rules governing the object would take too much time and memory to actually compute! And it might just be really hard to express them elegantly, getting back to my original point. So for that we have garbage collection: the object lifetime is delimited by when you want it and when you don't.

Potatoswatter
Could you add a link to the other question?
jalf
http://stackoverflow.com/questions/1960369/is-shared-ownership-of-objects-a-sign-of-bad-design/1960475#1960475
Potatoswatter
+2  A: 

I stopped writing such code some time ago. There are several alternatives:

Scope-based deletion

{
    Foo foo;
    // done with foo, release
}

scoped_ptr for scope-based dynamic allocation

{
    scoped_ptr<Foo> foo( new Foo() );
    // done with foo, release
}

shared_ptr for things that should be handled in many places

shared_ptr<Foo> foo;
{ 
    foo.reset( new Foo() );
} 
// still alive
shared_ptr<Foo> bar = foo; // pointer copy
...
foo.reset(); // Foo still lives via bar
bar.reset(); // released

Factory-based resource management

Foo* foo = fooFactory.build();
...
fooFactory.release( foo ); // or it will be 
                           // automatically released 
                           // on factory destruction
Kornel Kisielewicz
I believe `Foo foo();` won't work, because C++ treats it as a function prototype. Use `Foo foo;` instead.
KennyTM
Your 3rd example won't compile.
sbi
@sbi, corrected
Kornel Kisielewicz
+5  A: 

I think the problem with all these "...best practices..." questions is that they consider code without context. If you ask "in general", I have to admit that direct memory management is perfectly acceptable. It is syntactically legal and it does not violate any language semantics.

As for the alternatives (stack variables, smart pointers, etc.), they all have their drawbacks. And none of them has the flexibility that direct memory management has. The price you pay for that flexibility is your debugging time, and you should be aware of all the risks.

SadSido
+1, you had me after the 1st sentence.
sellibitze
Yes, each of the alternatives have individual drawbacks. But taken together, do they not have as much flexibility as "direct" memory management? Is there anything you can do with direct new/delete calls, that can't be achieved by *any* of the alternatives?
jalf
to jalf: sometimes you want an object's destructor to be called *exactly* at a specific place. You cannot achieve that with any kind of shared pointer, because they postpone destruction until no one points to the object. Weak pointers add overhead that may be unacceptable. Scoped pointers can do nothing if you want to extend an object's lifetime beyond its scope.
SadSido
+5  A: 

It depends on exactly what we mean.

  • Should new never be used to allocate memory? Of course it should; we have no other option. new is the way to dynamically allocate objects in C++. When we need to dynamically allocate an object of type T, we do new T(...).
  • Should new be called by default when we want to instantiate a new object? NO. In Java or C#, new is used to create new objects, so you use it everywhere. In C++, it is only used for heap allocations. Almost all objects should be stack-allocated (or created in place as class members) so that the language's scoping rules help us manage their lifetimes. new isn't often necessary. Usually, when we want to allocate new objects on the heap, we do it as part of a larger collection, in which case we should just push the object onto an STL container and let it worry about allocating and deallocating memory. If we just need a single object, it can typically be created as a class member or a local variable, without using new.
  • Should new be present in your business logic code? Rarely, if ever. As mentioned above, it can, and typically should, be hidden away inside wrapper classes. std::vector, for example, dynamically allocates the memory it needs, so the user of the vector doesn't have to care. I just create a vector on the stack, and it takes care of the heap allocations for me. When a vector or other container class isn't suitable, we may want to write our own RAII wrapper, which allocates some memory in the constructor with new and releases it in the destructor (a sketch appears at the end of this answer). That wrapper can then be stack-allocated, so the user of the class never has to call new.

One of the articles asserted that Foo *f = new Foo(); was nearly unacceptable professional C++ code by and large, and an automatic memory management solution was appropriate.

If they mean what I think they mean, then they are right. As I said above, new should usually be hidden away in wrapper classes, where automatic memory management (in the shape of scoped lifetimes and objects having their destructors called when they go out of scope) can take care of it for you. The article doesn't say "never allocate anything on the heap" or "never use new", but simply "when you do use new, don't just store a pointer to the allocated memory; place it inside some kind of class that can take care of releasing it when it goes out of scope".

Rather than Foo *f = new Foo();, you should use one of these:

Scoped_Foo f; // just create a wrapper which *internally* allocates what it needs on the heap and frees it when it goes out of scope
shared_ptr<Foo> f(new Foo()); // if you *do* need to dynamically allocate an object, place the resulting pointer inside a smart pointer of some sort. Depending on circumstances, scoped_ptr or auto_ptr may be preferable. Or, in C++0x, unique_ptr
std::vector<Foo> v; v.push_back(Foo()); // place the object in a vector or another container, and let that worry about memory allocations.
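
For completeness, a minimal sketch of what the hypothetical Scoped_Foo wrapper above could look like; it simply pairs new in the constructor with delete in the destructor:

struct Foo {};   // stand-in for the Foo from the question

class Scoped_Foo
{
public:
    Scoped_Foo() : impl_(new Foo) {}       // allocate in the constructor
    ~Scoped_Foo() { delete impl_; }        // release in the destructor

    Foo&       get()       { return *impl_; }
    const Foo& get() const { return *impl_; }

private:
    Scoped_Foo(const Scoped_Foo&);             // non-copyable: copying would
    Scoped_Foo& operator=(const Scoped_Foo&);  // lead to double deletion

    Foo* impl_;
};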
jalf
+4  A: 

If you are using exceptions, that kind of code is practically guaranteed to lead to resource leaks. Even if you disable exceptions, cleaning up is very easy to screw up when manually pairing new with delete.

Nemanja Trifunovic
A: 

In general your example is not exception safe and therefore shouldn't be used. What if the line directly following the new throws? The stack unwinds and you have just leaked memory. A smart pointer will take care of it for you as part of the stack unwind. If you tend not to handle exceptions, then there is no drawback outside of RAII issues.
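
A minimal sketch of the scenario being described; may_throw(), leaky() and safe() are made-up names:

#include <memory>
#include <stdexcept>

struct Foo {};

void may_throw() { throw std::runtime_error("oops"); }

void leaky()
{
    Foo* f = new Foo;
    may_throw();        // throws: the stack unwinds and the delete below never runs
    delete f;           // unreachable -> the Foo is leaked
}

void safe()
{
    std::auto_ptr<Foo> f(new Foo);
    may_throw();        // throws: f's destructor runs during unwinding
}                       // the Foo is deleted either way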

stonemetal