I know I might get a lot of down-votes for this reply, but if you are already trying to avoid dynamic memory in the first place, because you said it's a no-no, why do you use GC at all? I'd never use GC in a real-time system where predictable runtime speed is the major concern. I'd avoid dynamic memory wherever possible, so there are very, very few dynamic objects to start with, and then I'd handle the few dynamic allocations I do have manually, so I have 100% control over when and where something is released. After all, GC is not the only non-deterministic part: free() is just as non-deterministic as malloc(). Nobody says that a free() call merely has to mark the memory as free. It might as well try to coalesce the smaller free blocks surrounding the freed one into a bigger one, and neither this behavior nor its runtime is deterministic (sometimes free() won't do that and malloc() will do it instead on the next allocation, but nowhere is it written that free() mustn't do that).
In a critical real-time system, you might even replace the system's standard malloc()/free() with a different implementation, maybe even writing your own (it's not as hard as it sounds! I've done that before, just for the fun of it) that works more deterministically. For me, GC is a plain convenience thing: it exists to get programmers away from sophisticated malloc()/free() planning and instead lets the system deal with memory automatically. It helps with rapid software development and saves hours of finding and fixing memory leaks. But just as I'd never use GC within an operating system kernel, I'd never use it within a critical real-time application either.
If I needed more sophisticated memory handling, I'd maybe write my own malloc()/free() that works as desired (and as deterministically as possible) and build my own reference-counting model on top of it. Reference counting is still manual memory management, but much more comfortable than raw malloc()/free(). It is not ultra fast, but it is deterministic (at least incrementing/decrementing the ref counter takes constant time), and unless you have circular references, it will catch all dead memory if you follow a retain/release strategy throughout your application.

The only non-deterministic part about it is that you won't know whether calling release will just decrease the ref counter or actually free the object (depending on whether the count drops to zero). But you could defer the actual free by offering a function, say "releaseWithoutFreeing", which decreases the ref counter by one but won't free() the object even if the counter reaches zero. Your malloc()/free() implementation could then offer a "findDeadObjects" function that searches for all objects with a ref counter of zero that have not yet been freed, and frees them at a later point, when you are in a less critical part of your code with more time for such tasks. Since this is also not deterministic, you could limit the time it may spend, say "findDeadObjectsForUpTo(ms)", where ms is the number of milliseconds it may use for finding and freeing dead objects; it returns as soon as this time quantum is used up, so you won't spend too much time on this task.