views:

943

answers:

23
A: 

You just need to be wise, that's right ;) However, if your design isn't right, it can be easy to overlook something.

With garbage collection, however, you don't have to care about memory and can focus more on the rest of your program, and thus possibly develop "faster".

Tapdingo
+16  A: 

To be more productive. In other words, the programmer can focus on writing the bits that are unique to his particular problem.

Martin Wickman
+1 Code reuse is good.
Matt Ellen
+3  A: 

I agree with mouviciel's comment. But garbage collectors do allow for quicker development because the developer no longer has to worry about memory leaks, allowing them to focus on other aspects of their program.

But do note that if you are programming in a language that has garbage collection, it is very wise to be aware of that fact. It's almost a must (IMO) to understand how it works and what it is doing in the background.

Meiscooldude
+2  A: 

When working in complex projects with multiple calls to libraries and external code that you did not write, it becomes very difficult to keep track of the objects that you need to free and the objects freed by external libs and other places in your code.

Lots of tools now exist that make the task of tracking down memory leaks easier, but leaks tend to be insidious bugs that only become noticeable after the system has been running for hours or days.

However, I do agree with your sentiment. If I have control over the code base, I prefer to write in something where I am in charge (like C). But if I have to work with external forces, something with a decent garbage collector is much more appealing.

Adam Shiemke
+8  A: 

Because we are not living in the early 80s anymore. It's a waste of a developer's time, and it's simply annoying to care about lowest-level tasks when you are about to create an amazing application.

Turing Complete
I don't use a garbage collector in C++, nor do I waste time worrying about memory.
GMan
@GMan Yes, I know plenty of people who don't worry about memory too. It's their mess that the rest of us have to clean up.
@GMan Smart pointers, now ubiquitous in C++, are a form of garbage collection.
Andres Jaan Tack
Smart pointers are not only a form of garbage collection; they're an inferior form of it in most cases, being slower and using more resources than proper modern garbage collection systems.
JUST MY correct OPINION
@Andres @user: `vector` requires no mess and is no worry. And if you want to call it garbage collection that's fine, but that's a pretty wide definition.
GMan
+4  A: 

Consider a case where a particular pointer is used by two separate sub systems. One sub system may be done with the variable and the programmer may think, "I'm done with this, I'll just go ahead and free it", completely unaware that another sub system still needs access to it. Or another pitfall, the developer thinks, "I'm not sure if there is another sub system which may need this" (even if there is not) leading to memory leaks. This kind of situation comes up a lot in complex systems.

Nick
Exactly what I was thinking. Exceptions and how they are handled also add complexity to this mix.
James Westgate
+2  A: 

So, if you write a program in C, you know whether you need some piece of memory, so if you don't, you can simply destroy it.

That's the theory, at least. Problem is that it can complicate code greatly. For example, this:

for (x in ComputeBigList()) ...

becomes this

var xs = ComputeBigList();

try {
   for(x in xs) ...
} finally {
   FreeMemory(xs);
}

The lack of a garbage collector required us to name the result of ComputeBigList, store it in a variable, and then add a FreeMemory call wrapped in a finally, just to be sure the list actually got freed.

This is where C++ fans should be pointing out that C++'s guaranteed destructor calls can make this much easier. That said, you then have the overhead and additional code associated with reference counting, etc., assuming you want your objects to be able to escape the dynamic extent in which they were created. (i.e.: I allocate an object and then return it.)

The other thing GC does that's useful is control how you use your memory. A relocating GC lets you arrange objects so that they can be more efficiently accessed. GC in general gives your runtime a bit more flexibility about when you pay the price of reclaiming memory. (Explicit frees and refcount updates always have to be immediate.)

mschaef
+13  A: 

To avoid errors. No matter how careful you are about deallocating memory, either you will eventually make a mistake or you will eventually code a program that requires a complex memory reference pattern which will make the likelihood of error much greater.

Any possibility that exists will, given enough time, become a reality: eventually you will leak memory with manual methods, unless extra effort is specifically put into monitoring memory consumption. This extra effort steals time from coding toward the primary purpose of the program, which probably isn't to manage memory.

In addition, even if your program doesn't leak memory, garbage collection often tends to handle memory more efficiently than many non-garbage-collection methods. Most people don't new blocks of objects to avoid multiple new calls, nor would they revisit and clean up the cache of unused newed objects afterwards. Most manual garbage collection methods concentrate on freeing memory at block boundaries, which might let garbage linger a bit longer than it needs to.

Each added benefit and feature you pile onto manual garbage collection takes you one step closer to automatic garbage collection. Using no utilities to collect garbage beyond manual calls to free it will not scale easily. Either you will spend a lot of your time checking memory allocation/reclamation, or you will not spend enough to avoid a memory leak.

Either way, automatic garbage collection solves this problem for you, allowing you to get back to the main point of your program.

Edwin Buck
+1. Handling garbage in a (non-trivial) program (a) takes a long time to write when you do it manually, (b) is very hard to do correctly when you do it manually, and (c) there are good automatic solutions. Of all things in programming, garbage collection is one of the things that really should be automatic!
Thomas Padron-McCarthy
@edwin "Each added benefit and feature you pile onto manual garbage collection takes you one step closer to automatic garbage collection." That's really the key insight here... it's not a binary decision to use GC or not: there are a range of options. (This is particularly true when you broaden the conversation to include resource management in general, and not just memory.)
mschaef
And yet garbage collected languages can still leak memory, and do little to prevent leaks of resources or to ensure other cleanups. There are alternatives (e.g. in C++) that are about as effective in overall-bug-avoidance, though they don't avoid the exact same set of bugs.
Steve314
@Steve "And yet garbage collected languages can still leak memory..." Actually, garbage collected languages never leak memory in the C / C++ sense, instead people redefine what a memory leak is and then apply the same term in new ways. In C / C++ you run into a situation where you cannot reach the leaked memory without randomly walking all memory (and if you could find it, odds are you wouldn't be able to cast it back into the right data structure). In Java, memory doesn't leak, instead poor programmers never stop using unneeded memory (and they call it a "memory leak").
Edwin Buck
@Edwin: there's a good reason your "poor programmers never stop using unneeded memory" is still called a memory leak. Ensuring that a never-to-be-used-again reference is nulled is much the same problem as ensuring that a never-to-be-used-again pointer is freed. The failure-cases and symptoms are different when you can't manage all the lifetimes reliably yourself, but that doesn't mean the basic issue is fundamentally different. If failing to manage lifetimes reliably is a "poor programmer" thing, then anyone who needs GC is a poor programmer.
Steve314
@Steve, dereferencing != freeing memory. The failure patterns are very different. If I accidentally free memory someone else is using, the program crashes. If I accidentally dereference memory someone else is using, the program continues. Reusing a term to mean something else is not desirable; just look at the Copyright Infringement == Theft misunderstanding. Theft requires that you illegally deprive someone of their property, preventing them from using it, not that you might have deprived them of possible future profit as they continue to use it (selling it to you).
Edwin Buck
Is it a different meaning, or is it just that the bounds of the meaning aren't where you personally want them to be? AFAIK, there is no authority that decides the exact meaning of words. BTW - if you accidentally don't call a file close, your program may fail when it tries to open the same file again later. In C++, destructors usually ensure *timely* cleanup - "usually" because of course there's times when calling them is the programmers responsibility. As I said, the failure cases and symptoms are different - naming a problem for one doesn't prove it worse overall.
Steve314
+4  A: 

It is an anti-dumb-programmer mechanism. And trust me, when code becomes very complex, when thinking in terms of dynamically allocated memory, we are all equally dumb.

In my short experience as a programmer, I've spent (cumulatively) days trying to figure out why valgrind (or other similar tools) was reporting memory leaks, when everything was so "wisely coded".

Andrei Ciobanu
You almost got a reflexive -1 from me when I read that first sentence. Then I read the second. Bravo.
JUST MY correct OPINION
+7  A: 

Because we are not wise enough.

swegi
If we didn't have better things to do, we might have time to learn. +1
Andres Jaan Tack
A: 

You can do your own garbage collection, as you mentioned. Adding a garbage collector simply frees you from having to worry about it and from having to take the time to write and test your garbage collection code. If you are working on an existing codebase that contains memory leaks, it can be easier (and more effective) to use an automatic garbage collector than to try to learn all of the existing code in enough detail to find and fix the problems.

That being said, I'm not a fan of adding automatic garbage collection facilities to languages that don't have it built in. If the language was designed assuming that the developer would be thoughtful about memory allocation and de-allocation, then (IMHO) it does the developer a disservice to remove this responsibility from them. Not being able to control precisely when and how memory is freed can lead to inconsistent behavior. Thinking about and defining the full lifetime of dynamically-allocated memory is an important part of planning your code. No automated system is a true substitute for careful and accurate programming (that applies to far more than just garbage collectors).

bta
One of the nice things about the Boehm et al collector is that it can be used as a very detailed memory leak detector. It's a simple mode switch -- not even a compiler switch these days -- to use it that way. Of course, once you realize just how poorly you're managing object lifespans, it's far easier to flick the switch to "collector" and never worry about it again.
JUST MY correct OPINION
A: 

Without a garbage collector, anytime you allocate something dynamically, you have to keep track of when you no longer need it, and destroy it only after you no longer need it. This can be difficult, especially when/if a number of different parts of the program all have pointers to one object, and no one of them knows what other code might be using it.

That's the theory anyway. In reality, I have to admit that it hasn't worked out that way for me either. In particular, when (most) people are aware that their code will be using a garbage collector, they tend to dismiss memory management as not being a problem or even an issue to consider at all. As a result, they can jump in and start coding more quickly. For small problems that they understand quite well before starting, this can be a significant win -- but for larger problems it appears (at least to me) that the tendency is toward jumping in and starting to write code before they really understand the problem.

In my experience, lack of a garbage collector makes the developer(s) think a bit more about lifetime issues up-front. In the process, they're motivated to find simple solutions to object lifetime issues -- and they usually do exactly that. In the process, they've typically simplified the code in general (not just the memory management) to the point that it's much simpler, cleaner, and more understandable.

In a way, it reminds me a lot of the old parable of two programmers. At the end of a project, the managers who look at code that used garbage collection think it's a really good thing they used garbage collection. The problem is clearly even more complex than they realized, and given the complexity of the code and the lifetime issues, there's no way anybody could keep track of them by hand and produce code that was even close to leak-free.

At the end of doing the same thing without garbage collection, the reaction is rather the opposite. They realize that the problem (in general) is really a lot simpler than they had realized. Object lifetime issues aren't really nearly as complex as they'd expected, and producing leak-free code wasn't particularly challenging at all.

Jerry Coffin
+2  A: 

You do not need garbage collection if you do not produce garbage in the first place.

One way to avoid garbage is to not use dynamic memory allocation at all. Most embedded programs do not use dynamic memory allocation. Even when dynamic memory allocation is used (even in many PC programs) there is often no real reason to use it. (Just because dynamic memory allocation is possible does not mean it should be used everywhere.)

Another way to avoid garbage collection is to use a language that does not separate references from contents. In that case, an actual memory leak is not even possible. (But of course it is still possible to use too much memory.) IMHO, high-level languages should not mess with "pointers" (address variables) at all.

PauliL
+1 for being the first to mention this. Whether GC is good or bad, it's a solution to an unnatural problem that was created by the programmer: excessive use of dynamically allocated memory to the point that it becomes unmanageable. While your approach can't be directly applied to many programs, thinking along the lines of reducing and consolidating dynamic memory allocations tends to reduce bugs, improve performance and memory utilization, and simplify program logic (versus manual deallocation, not versus GC, of course).
R..
+7  A: 
JUST MY correct OPINION
+1 Absolutely right. Especially the part about "You are very inexperienced". I have found that conservatism in methodology and tool selection (such as the "zomg managed stuff? who needs this modern stuff anyway, I do me my memory management myself, because I'm sooooo 1337roflolz" the OP exhibited) is seldom anything but ignorance and narrow-mindedness. However, not on my teams. :-D
Turing Complete
+2  A: 

Releasing memory that is not needed anymore is an ideal goal, but it is not possible to do it automatically in all generality. Even in the absence of external input (which may affect whether some piece of data will be needed or not), deciding, given the complete state of the memory and the complete code, whether some piece of memory will be needed is equivalent to the halting problem, which is impossible to solve for a computer.

Needless to say, the same problem also exceeds the capacities of the average programmer brain quite fast, as the application size grows. Perfectly correct memory management can be achieved, in practice, only in two situations:

  1. the problem is simple (e.g. short-lived command-line application) and the programmer disciplined enough;
  2. the programmer is Donald Knuth.

In all other cases, we have to use approximations. A garbage collector relies on the following approximation: it detects unreachable blocks of memory. It cannot tell whether a reachable block will be used or not, but an unreachable block will not be used (because using implies reaching). Another common approximation (used by many programmers who feel they are wise enough) is to simply assume that they thought of every block, and then pray for the best (a variant being: educate your users into believing that memory leaks are a feature, and that a reboot every now and then is normal).

Thomas Pornin
+3  A: 

These days, most people who use a garbage collector are doing so inside a managed environment (like the Java Virtual Machine or the .NET Common Language Runtime). These managed environments add an additional wrinkle: they constrain the ability to take pointers to things. In the CLR for example, there is a notion of a pointer (which you can use through the managed IntPtr or the unmanaged unsafe code block), but there are limited conditions where you're allowed to use them. In most cases, you have to "pin" the corresponding objects in memory so that the GC doesn't move them while you're working with their pointers.

Why does this matter? Because, as it turns out, a managed allocator that is allowed to update pointers and move objects around in memory can be much more efficient than a malloc-style allocator. You can do cool things like generational garbage collection, which makes heap allocations as fast as stack allocations, you can profile the memory behavior of your application much more easily, and, oh yeah, you can also easily detect unreferenced objects and free them automatically.

So it's not only a matter of increased programmer productivity (although if you ask anyone who works in a managed language, they'll attest to the increased productivity it gives them), it's also a matter of enabling entirely new programming technologies.

Finally, garbage collection becomes truly necessary when working with functional programming languages (or programming in functional styles). In fact, the very first garbage collector was invented by McCarthy in 1959 as part of the development of the Lisp language. The reason is twofold: first, functional programming encourages immutable data structures, which are easier to collect, and second, in pure functional programming there is no allocation function; memory always gets allocated as "stack" (function locals) and then moves to a "heap" if it is captured by a closure. (This is a gross oversimplification but serves to illustrate the point.)

So... if you're programming in an imperative style, and you're "wise enough" to do the Right Thing with all your memory allocations, you don't need garbage collection. But if you want to change your programming style to take advantage of the newest advances in programming technology, you'll probably be interested in using a garbage collector.

Daniel Pryden
Of course nobody is "wise enough" for anything but the most trivial of memory management scenarios. +1, though, for expanding beyond programmer productivity scenarios.
JUST MY correct OPINION
+1  A: 

When you are not writing a real-time application (you can't be sure of when the garbage collector will do its job, even if you force it), or when you don't need full control over your memory, you can develop with a free head and be almost sure not to create a memory leak.

Ephemere
I personally don't think high level languages make development faster because I'm used to writing in assembler. (Unspoken: and I've never seriously used a high level language.)Sounds stupid, doesn't it? Yeah. Read what you typed again.
JUST MY correct OPINION
A: 

You might want to try watching any of these videos

http://channel9.msdn.com/Search/Default.aspx?Term=Patrick%20Dussud&Type=site

Conrad Frix
+1  A: 

Garbage collection can be more efficient.

To allocate memory, malloc needs to fiddle around to find a large enough contiguous span of memory. With a compacting garbage collector, allocating memory is bumping a pointer (or close to it).

In C++, you can safely and cleanly deal with memory in many situations without a garbage collector by using smart pointers and strictly adhering to conventions. But (1) this does not work in all situations, even with shared_ptr and weak_ptr, and (2) reference counting requires coordination across threads, which has a performance penalty.

Usability is the more important concern, but garbage collection is, at times, faster than deterministically freeing memory.

_Can_ be; there are also all kinds of implementations of dynamic memory allocators. Still, a good point.
Marc Bollinger
A: 

You may need to release interop resources as soon as possible (e.g., a locked file). GC.Collect can ensure COM objects are released (if no longer referenced).

If you do a PrintPreview, it takes two GDI handles for each page (image + metafile). These resources aren't released by PrintDocument or PrintController; they wait for the GC.

I tested this in an interactive program by calling GC.Collect when the user returns to the main menu. With this operation, the memory reported by Task Manager drops to about 50%.

I don't think this is important, but calling GC.Collect when you know that a lot of memory is no longer referenced is a simple option.

x77
+1  A: 

you know whether you need some piece of memory, so if you don't, you can simply destroy it.

You could use a similar argument to justify just about any labour saving device. Why write mathematical expressions when you can just produce assembly language? Why use readable characters when you can use binary?

The reason is simple. I work with programmers who are some of the best in their field. I can say without fear of exaggeration that some of them have written the book on their field. And yet these people are programming in C++ and make mistakes with memory management. When they make these mistakes, they are particularly difficult to find and correct. Why have amazing people whose talents could be directed elsewhere waste their time doing something a machine could do better?

(And yes, there are good answers to this question. For example, when every byte of memory in your system counts and you can't afford to have any garbage at any time. But that is not the case in general.)

A: 

Memory management is an implementation issue that isn't connected to the goal of the program.

By the goal of the program I mean things like business logic.

When you work on implementation issues, you throw your time and effort at things that don't help you finish the program.

Avram
Everything is an implementation issue. I assume part of the goal of a business program is to speed up business. If the user is spending all their time listening to the hard drive grind because the GC is running and forcing thousands of pages to get swapped back in to check for GC-able objects, that's not really speeding up business...
R..
@R you are equating two things: speeding up the program = speeding up the business. It is possible (and it happens) that a fast program (on a powerful computer) does not make the business run fast.
Avram
+1  A: 

So, why to use GC, when all you need to do is actually just be wise with memory allocation/deallocation?

The problem is that it gets exceptionally hard to be sufficiently wise. Fail in the wisdom stakes and you get a memory leak or a crash. Here's the quick potted guide to computer-applied wisdom in automated memory management.

If you've got a simple program (the zeroth level of complexity), you can just use stack-based allocation to handle everything. It's very easy to get memory allocation right this way, but it's also a very restricted model of computation (and you also run into problems with stack space). So you start using the heap; that's where the “fun” begins.

The first level of complexity is where you've got pointers whose lifetime is bound to a stack frame. Again, that's fairly simple to do and forms the basis for much C++ programming.

The second level of complexity is where you've got reference counting. This is the basis for C++ smart pointers, and it is quite good at handling just about everything up to a forest of directed acyclic graphs. You can achieve a lot with this, and it permits some models of computation that work rather nicely with functional programming and parallel programming too.

Beyond that is the third level, garbage collection. This can handle arbitrary graphs of memory structures, but at the cost of being more memory-hungry (since you don't, in general, try to deallocate quite as soon). The amount of memory allocated tends to be larger, because memory becomes eligible for automated deletion only after the point at which you could have freed it yourself, were you but smart enough to figure out the lifetimes.

Donal Fellows