Well, they are incredibly useful in certain situations.
In the .NET CLR, for example:
- are not guaranteed to run
Provided the program isn't killed, the finalizer will always run eventually. It's just non-deterministic as to when it will run.
- if they do run, they may run an arbitrary amount of time after the object in question becomes a candidate for finalization
This is true; however, they still run.
In .NET, this is very, very useful. It's quite common in .NET to wrap native, non-.NET resources in a .NET class. By implementing a finalizer, you can guarantee that the native resources are cleaned up correctly. Without this, the user would be forced to call a method to perform the cleanup, which dramatically reduces the effectiveness of the garbage collector.
It's not always easy to know exactly when to release your (native) resources. By implementing a finalizer, you can guarantee that they will get cleaned up correctly, even if your class is used in a less-than-perfect manner.
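As a minimal sketch of this, here's a hypothetical wrapper around a block of unmanaged memory (the class name and size are illustrative). Even if the caller never releases the buffer explicitly, the finalizer eventually frees the native allocation:

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical wrapper for an unmanaged buffer. The finalizer is the
// safety net: it frees the native memory when the GC collects the object.
class NativeBuffer
{
    private IntPtr _handle;

    public NativeBuffer(int size)
    {
        // Allocate unmanaged memory that the GC knows nothing about.
        _handle = Marshal.AllocHGlobal(size);
    }

    public bool IsValid => _handle != IntPtr.Zero;

    // Finalizer: runs (eventually, non-deterministically) after the
    // object becomes unreachable.
    ~NativeBuffer()
    {
        if (_handle != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(_handle);
            _handle = IntPtr.Zero;
        }
    }
}
```

Note that on its own this gives you no way to release the memory early; that's exactly the gap IDisposable fills.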
- and (at least in java), they incur an amazingly huge performance hit to even stick on a class
Again, the .NET CLR's GC has an advantage here. If you implement the proper interface (IDisposable), and the consumer of your class uses it correctly, you can prevent the expensive portion of finalization from occurring: the user-defined cleanup method (Dispose) calls GC.SuppressFinalize, which tells the GC to bypass the finalizer for that object.
This gives you the best of both worlds: you can implement both a finalizer and IDisposable. If your user disposes of your object correctly, the finalizer has no impact. If they don't, the finalizer (eventually) runs and cleans up your unmanaged resources, at the cost of a (small) performance loss as it runs.
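Putting both halves together, here's a sketch of the standard Dispose pattern (the class name and buffer size are illustrative). Dispose cleans up deterministically and suppresses finalization; the finalizer remains only as a fallback for careless callers:

```csharp
using System;
using System.Runtime.InteropServices;

// Sketch of the Dispose pattern: deterministic cleanup via IDisposable,
// with the finalizer as a safety net.
class NativeResource : IDisposable
{
    private IntPtr _handle = Marshal.AllocHGlobal(64); // unmanaged allocation
    private bool _disposed;

    public bool IsDisposed => _disposed;

    public void Dispose()
    {
        Dispose(true);
        // The object no longer needs finalization, so tell the GC to
        // skip it; this avoids the expensive finalization path entirely.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        // Unmanaged resources are freed on both paths (Dispose and finalizer).
        Marshal.FreeHGlobal(_handle);
        _handle = IntPtr.Zero;
        _disposed = true;
    }

    // Fallback: runs only if the user forgot to call Dispose.
    ~NativeResource()
    {
        Dispose(false);
    }
}
```

In practice the consumer would wrap this in a `using` statement (`using (var r = new NativeResource()) { ... }`), which calls Dispose even if an exception is thrown.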