It's more a theoretical question than a practical one. I know that GCs currently handle large processes that use 1, 2, or 3 GB of memory, but I'd like to know whether it is theoretically possible to have an efficient GC with a really huge heap (1000 GB or more).
I ask this question because, even if the GC could run its algorithm incrementally, it still has to finish scanning all reachable objects before it can free anything, to be sure no other object still references it. So, in a very large system, memory would logically be freed less frequently. If the heap is very big, unused objects would be freed so rarely that the GC would no longer be worthwhile.
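To make the reasoning concrete, here is a minimal mark-and-sweep sketch (the classes `ToyHeap` and `Node` are invented for illustration, not taken from any real collector): the sweep phase cannot free a single object until the mark phase has traversed the entire reachable graph, which is exactly the part I expect to scale badly with heap size.

    import java.util.*;

    // A toy mark-and-sweep collector over an explicit object graph.
    class Node {
        final List<Node> references = new ArrayList<>();
        boolean marked = false;
    }

    class ToyHeap {
        final List<Node> allObjects = new ArrayList<>();
        final List<Node> roots = new ArrayList<>();

        Node allocate() {
            Node n = new Node();
            allObjects.add(n);
            return n;
        }

        void collect() {
            // Mark phase: traverse everything reachable from the roots.
            // Nothing can be freed before this traversal is complete,
            // because an object is only known to be garbage once the
            // whole reachable set has been computed.
            Deque<Node> stack = new ArrayDeque<>(roots);
            while (!stack.isEmpty()) {
                Node n = stack.pop();
                if (n.marked) continue;
                n.marked = true;
                stack.addAll(n.references);
            }
            // Sweep phase: everything left unmarked is unreachable.
            allObjects.removeIf(n -> !n.marked);
            // Reset marks for the next collection cycle.
            allObjects.forEach(n -> n.marked = false);
        }
    }

With a 1000 GB heap, the mark traversal touches a reachable set that may itself be hundreds of gigabytes, which is why I wonder whether any collection cycle could complete often enough to be useful.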
Do you know of any studies or articles on this subject?