While I understand that sealed can be used for security reasons, a few people use the sealed keyword on leaf nodes as an optimization technique.
How does this help optimization? Why isn't the compiler smart enough to figure this out itself?
No, it does not really help with optimization, at least not from anything I can see when profiling.
In a sealed class, calls to virtual methods can bypass the usual virtual method lookup and go directly to the most-derived virtual method implementation instead. In principle, the compiler/JIT could also inline these calls.
The compiler can't figure it out for non-sealed classes, because any code could come along after compilation and inherit from your class: the compiler must assume the worst case.
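To make the idea concrete, here is a minimal sketch in Java, whose final keyword is the direct analogue of C#'s sealed (the class and method names are made up for illustration). When the static type of the receiver is a final/sealed class, the JIT knows the exact target of the call and may call or inline it directly instead of going through the vtable:

```java
class Shape {
    double area() { return 0.0; }
}

// final = C#'s sealed: no further subclasses, so area() cannot be
// overridden beyond this point.
final class Circle extends Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    @Override double area() { return Math.PI * r * r; }
}

public class Devirtualize {
    public static void main(String[] args) {
        Shape s = new Circle(2.0);
        System.out.println(s.area()); // static type Shape: virtual dispatch

        Circle c = new Circle(2.0);
        // Static type is the final class Circle, so the JIT knows the
        // exact target and may bypass the vtable and inline the call.
        System.out.println(c.area());
    }
}
```

Both calls produce the same result, of course; the difference is only in how the runtime is allowed to dispatch them.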
Suppose you have a virtual method which is overridden in a leaf class. This certainly won't be overridden any further, so the JIT compiler could potentially inline calls to that method for targets which are known to be of that leaf class. I don't know whether the JIT actually performs this optimization, mind you.
Note that in Java, the HotSpot JVM can perform this optimisation even for non-final classes, as it's a multi-pass JIT: it can optimistically assume that nothing's going to override a virtual method, and then undo its optimisations if a class is ever loaded that does override it. Of course, with methods being virtual by default in Java, this is a bigger deal than it would be in C#. (Even if defaults shouldn't matter, they clearly do.)
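A small sketch of the situation HotSpot exploits (names are invented for illustration). The method below is not final, but as long as no overriding subclass has been loaded, every call site that invokes it is monomorphic, and HotSpot may optimistically inline it:

```java
class Handler {
    String handle() { return "base"; }   // virtual, but never overridden (yet)
}

public class Monomorphic {
    static String dispatch(Handler h) { return h.handle(); }

    public static void main(String[] args) {
        // Warm-up: every receiver seen here is exactly Handler, so HotSpot
        // can treat handle() as if it were final and inline it at this site.
        for (int i = 0; i < 100_000; i++) dispatch(new Handler());
        System.out.println(dispatch(new Handler()));
        // If a subclass overriding handle() were loaded later, HotSpot would
        // deoptimize this call site and fall back to real virtual dispatch.
    }
}
```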
Personally, I don't use sealed for optimization or security reasons in particular: I use it because designing for inheritance (properly) is hard. I agree with the principle of "design for inheritance or prohibit it", and have generally found that the occasional pain of not being able to derive from a class is more than compensated for by the freedom from worrying about inheritance. YMMV.
It is a bit of a false optimisation; I'd rather use it to make sure that if I'm not expecting inheritance, I don't get inheritance. The compiler still emits all instance calls (to class types) as virtual calls, but the JIT could potentially treat a call on a sealed type differently and emit just a null check and a static call, if the method isn't overridden. Possibly.
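To spell out what "null check + static call" would mean, here is a hand-written sketch in Java (using final as the analogue of sealed; all names are hypothetical). The helper method mimics what the JIT could lower the call into, since a final class leaves only one possible target:

```java
final class Buffer {                    // final = C#'s sealed
    private final int size;
    Buffer(int size) { this.size = size; }
    int size() { return size; }
}

public class GuardedCall {
    // Hand-written equivalent of the lowered call: an explicit null check
    // followed by a direct call, with no vtable lookup needed because
    // Buffer is final and size() cannot be overridden.
    static int sizeOf(Buffer b) {
        if (b == null) throw new NullPointerException(); // the null check
        return b.size();                                 // the static call
    }

    public static void main(String[] args) {
        System.out.println(sizeOf(new Buffer(64)));
    }
}
```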
There are some more esoteric cases, for example when the presence of a particular interface would cause a type to be treated differently at runtime, but exploiting those scenarios (depending on sealed) requires runtime code generation via ILGenerator etc.