There are quite a few different arguments here. I want to start by making clear that you cannot really make a 1:1 comparison. Each approach has its pros and cons, and any given code snippet will be more appropriate for one system or the other. Put another way: you must know whether you have a GC or not in order to write efficient code.
My argument is that you must know your environment and code accordingly; that is what makes your code efficient. Moving from one paradigm to the other while keeping the same coding style will make your code more inefficient than anything the GC itself gives or takes away.
Case:
A program makes thousands of heap allocations for short-lived objects. That is, it allocates and deallocates many times, with objects of different sizes.
In a non-GC environment, each allocation ends up calling malloc, which requires searching the list of free memory fragments for the most suitable one (according to the specific malloc implementation). The memory is used and then released with free (or new/delete in C++). The cost of memory management is the cost of locating a suitable fragment.
In a GC environment with a moving (compacting) collector, such as those in Java or .NET, all free memory is contiguous after each GC run. The cost of acquiring memory for an object is cheap, really cheap (<10 CPU instructions in the Java VM). On each GC run, only live objects are located and moved to the beginning of the appropriate memory region (usually a different region from the original one). The cost of memory management is primarily the cost of moving all reachable (live) objects. Now, the premise is that most objects are short-lived, so in the end the cost can be smaller than that of a non-GC system. One million objects allocated and forgotten within a single GC run add no extra cost.
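As a minimal sketch of that allocation pattern (the `Point` class here is hypothetical, purely for illustration): every object is dropped immediately after use, so by the next minor collection almost nothing is live, and the collector has almost nothing to trace and copy.

```java
// Sketch: a million short-lived allocations in a moving-GC runtime.
// Allocation is a cheap bump-pointer operation; each Point becomes
// garbage right after use, so dead objects cost the collector nothing.
public class ShortLived {
    // Hypothetical value class for illustration.
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static long sumPoints(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            Point p = new Point(i, i + 1); // very cheap in a compacted heap
            sum += p.x + p.y;              // p is unreachable after this line
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumPoints(1_000_000)); // prints 1000000000000
    }
}
```

In a non-GC system, writing the equivalent loop with a heap allocation per iteration would pay the full malloc/free cost a million times.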
Conclusion: in GC languages you can create all local objects on the heap; they are cheap. In non-GC systems, on the other hand, a long series of allocations, deallocations and reallocations is expensive: memory becomes fragmented and the cost of malloc increases. In non-GC systems you should use the stack as much as possible and reach for the heap only out of necessity.
This has another implication: people used to one of the two memory models will tend to write inefficient programs in the other, because the idioms they rely on are probably bad on the other system.
A clear example is a programmer from an unmanaged background who is used to allocating an object once and reusing it (resetting its internal pointers to new elements as required), because in that world allocation is expensive and reuse is cheap. Move the same code to a generational GC environment (Java and .NET both use moving generational GCs) and you get a funny effect. A generational GC performs minor collections only on the younger generations, processing the older generations only during full collections. But an object in the young generation can be referenced by objects in the old generation, so extra work has to be done to keep track of these old-to-young references. In the Java 1.4.1 garbage collector, for example, the system marks the memory card (a sub-part of a page) where the old object resides, and then includes all marked cards in the minor collection, effectively increasing the amount of work the GC has to perform and possibly hurting performance.
The object survives 1, 2, 3... GC runs, being moved each time, and is finally promoted to the old generation, where it will no longer be moved on every GC run and can just sit there... but alas, by reusing it the programmer forces the object to behave like a young one again. It is moved once more, and it will keep being moved on every GC run until it becomes old again.
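The two idioms above can be sketched side by side. This is only an illustration of the reference patterns, not a benchmark; the `Buffer` class and method names are hypothetical, and the comments describe the generational-GC behavior the surrounding text attributes to each pattern.

```java
import java.util.ArrayList;
import java.util.List;

public class ReuseVsFresh {
    // Hypothetical container, for illustration only.
    static final class Buffer {
        final List<int[]> chunks = new ArrayList<>();
    }

    // Unmanaged-style idiom: one long-lived Buffer, reset and refilled.
    // Once the Buffer is promoted to the old generation, every fresh
    // int[] stored into it creates an old-to-young reference that the
    // collector must track (card marking) on minor collections.
    static long reuseIdiom(int rounds) {
        Buffer buf = new Buffer();            // long-lived, will be promoted
        long total = 0;
        for (int r = 0; r < rounds; r++) {
            buf.chunks.clear();               // "reset" instead of reallocating
            buf.chunks.add(new int[] { r });  // young object held by old one
            total += buf.chunks.get(0)[0];
        }
        return total;
    }

    // GC-friendly idiom: a fresh, short-lived Buffer per round. The whole
    // little object graph dies young and costs a minor collection nothing.
    static long freshIdiom(int rounds) {
        long total = 0;
        for (int r = 0; r < rounds; r++) {
            Buffer buf = new Buffer();        // dead before the next collection
            buf.chunks.add(new int[] { r });
            total += buf.chunks.get(0)[0];
        }
        return total;
    }

    public static void main(String[] args) {
        // Functionally identical results; only the allocation pattern differs.
        System.out.println(reuseIdiom(1000) == freshIdiom(1000)); // prints true
    }
}
```

Both methods compute the same result; the difference is purely in which generation the work lands in, which is exactly the mismatch of idioms the text describes.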
To make a sensible comparison, you would need programmers who know their respective environments to write different pieces of code that solve the same problem with the same algorithms, but with different mindsets about memory management, and then compare the results.