I've been thinking about whether it's possible to apply the dependency injection (DI) pattern without incurring the cost of virtual method calls (which, in experiments I ran, can be up to four times slower than non-virtual calls). My first idea was to do dependency injection via generics:

sealed class ComponentA<TComponentB, TComponentC> : IComponentA 
    where TComponentB : IComponentB
    where TComponentC : IComponentC
{ ... }
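To make the idea concrete, here is a hypothetical fleshed-out version of the snippet above (the interfaces, members, and `Describe`/`Greet` names are made up for illustration). The dependencies are still injected through the constructor; what changes is that their concrete types are baked into the generic arguments at the composition root:

```csharp
using System;

interface IComponentB { string Greet(); }
interface IComponentC { int Value { get; } }
interface IComponentA { string Describe(); }

sealed class ComponentB : IComponentB
{
    public string Greet() => "hello";
}

sealed class ComponentC : IComponentC
{
    public int Value => 42;
}

// Dependencies are injected via the constructor as usual, but the
// concrete types are fixed by the generic arguments.
sealed class ComponentA<TComponentB, TComponentC> : IComponentA
    where TComponentB : IComponentB
    where TComponentC : IComponentC
{
    private readonly TComponentB _b;
    private readonly TComponentC _c;

    public ComponentA(TComponentB b, TComponentC c)
    {
        _b = b;
        _c = c;
    }

    public string Describe() => _b.Greet() + " " + _c.Value;
}

class Program
{
    static void Main()
    {
        // Composition root: the whole dependency graph is closed here,
        // in the type arguments rather than inside a container.
        IComponentA a = new ComponentA<ComponentB, ComponentC>(
            new ComponentB(), new ComponentC());
        Console.WriteLine(a.Describe()); // hello 42
    }
}
```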

Unfortunately the CLR still dispatches the method calls through the interfaces, even when concrete implementations of TComponentB and TComponentC are supplied as generic type arguments and all of the classes are declared sealed. The only way to get the CLR to make non-virtual calls is to change all of the classes to structs (which implement the interfaces). Using structs doesn't really make sense for DI, though, and it makes the issue below even harder to solve.
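The struct behaviour comes from how the runtime handles generics: it shares one compiled body across all reference-type arguments (hence the interface dispatch), but generates a specialized copy per value-type argument, so a call through a struct-constrained type parameter becomes a direct, often inlinable call. A minimal sketch, with made-up names:

```csharp
using System;

interface IAdder { int Add(int x, int y); }

// A struct implementation: the JIT specializes generic code for each
// value type, so the call in Sum below is compiled as a direct call
// rather than interface dispatch.
struct FastAdder : IAdder
{
    public int Add(int x, int y) => x + y;
}

sealed class Calculator<TAdder> where TAdder : struct, IAdder
{
    private TAdder _adder;

    public Calculator(TAdder adder)
    {
        _adder = adder;
    }

    public int Sum(int a, int b) => _adder.Add(a, b);
}

class Program
{
    static void Main()
    {
        var calc = new Calculator<FastAdder>(default(FastAdder));
        Console.WriteLine(calc.Sum(2, 3)); // 5
    }
}
```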

The second issue with this approach is that it can't handle circular references. I can't think of any way, either in C# code or by constructing expression trees, to handle them, because doing so would entail infinitely recursing generic types. (.NET does support a generic type referencing itself, but that doesn't seem to generalize to this case.) Since only structs cause the CLR to bypass the interface dispatch, I don't think this problem is solvable at all: circular references between structs would be a paradox, because each value type would have to contain the other by value.

There's only one other solution I can think of that's guaranteed to work: emit all of the classes from scratch at runtime, perhaps using compiled classes as templates. Not really an ideal solution, though.

Anyone have better ideas?

Edit: In regards to most of the comments, I guess I should say that this is filed under "pure intellectual curiosity". I debated whether to ask this because I do realize that I don't have any concrete case in which it's necessary. I was just thinking about it for fun and was wondering whether anyone else had come across this before.

+2  A: 

While a callvirt instruction does take longer, the C# compiler usually emits it because it provides a cheap null check for the CLR prior to making the call to the method. A callvirt shouldn't take significantly longer than a call instruction, especially once you account for the null check a plain call would otherwise need.
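One observable consequence of this (a small sketch, with a made-up `Foo` class): the C# compiler emits callvirt even for non-virtual instance methods, which is why invoking any instance method on a null reference throws immediately at the call site, before the method body could run:

```csharp
using System;

class Foo
{
    // A plain, non-virtual instance method.
    public void Bar() { }
}

class Program
{
    static void Main()
    {
        Foo f = null;
        try
        {
            // C# emits callvirt even for this non-virtual method,
            // so the built-in null check fires before Bar executes.
            f.Bar();
        }
        catch (NullReferenceException)
        {
            Console.WriteLine("NullReferenceException thrown");
        }
    }
}
```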

Have you found that you could significantly improve the performance of your application by creating types (either structs or classes with static methods) that allow you to guarantee that the C# compiler will emit call instructions rather than callvirt instructions?

The reason I ask is that I am wondering if you are going to create an unmaintainable code base that is brittle and hard to use simply to solve a problem that may or may not exist.

Andrew Hare
Yeah, using structs does reduce the overhead to almost nothing. I guess I might have misinterpreted the cause of the improvement.
jthg
Using structs also reduces memory pressure, but they are limited in their ability to elegantly model a problem domain.
Andrew Hare
+4  A: 

Typical example of trying to completely over-engineer something, in my opinion. Just don't compromise your design because you can save a few tens of milliseconds - if it is even that.

Are you seriously suggesting that because of the callvirt instructions, your app ends up being so significantly slower that users (those people you write the app for) will notice any difference - at all? I doubt that very much.

Wim Hollebrandse
+2  A: 

This blog post explains why you can't optimize the virtual call.

Hans Passant
That's a nice explanation. Thanks.
jthg