One of my colleagues told me that implementing interfaces incurs overhead. Is this true?
I am not concerned about micro-optimizations; I just want to know the deeper details this entails.
Overhead compared to what?
A call via an interface is more expensive than a call to a non-virtual method, yes. I haven't tested it personally, but I think it's similar in magnitude to a virtual call.
That said, performance usually isn't a valid reason to avoid an interface -- most of the time, the call volume isn't high enough to matter.
Interfaces do incur overhead because of the extra indirection performed when calling methods or accessing properties. Many systems for implementing polymorphism, including interfaces, use a virtual method table that maps function calls based on the runtime type.
It is theoretically possible for a compiler to optimize a virtual function call into a normal function call, or even inlined code, provided it can prove the concrete type of the object the calls are being made on.
In the vast majority of cases, the benefits of using virtual function calls far outweigh the drawbacks.
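To make that concrete, here is a minimal C# sketch (the IGreeter/Greeter names are invented for illustration): the call through the interface must be resolved against the object's runtime type, while a call on a sealed concrete type can be bound directly by the JIT and potentially inlined.

using System;

interface IGreeter
{
    string Greet(string name);
}

// Sealing the class lets the JIT prove the concrete type,
// which enables devirtualization and inlining of its methods.
sealed class Greeter : IGreeter
{
    public string Greet(string name) => "Hello, " + name;
}

class Program
{
    static void Main()
    {
        Greeter direct = new Greeter();
        IGreeter viaInterface = direct;

        // Direct call: target known at JIT time, may be inlined.
        Console.WriteLine(direct.Greet("world"));

        // Interface call: dispatched through the runtime type's
        // interface method table -- an extra indirection.
        Console.WriteLine(viaInterface.Greet("world"));
    }
}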
Yes, interfaces incur overhead. In fact, every layer you add between your logic and the processor adds overhead. Obviously, you should write everything in assembly, because that is the only thing that does not incur overhead. GUIs also incur overhead, so don't use those.
I'm being facetious, but the point is that you have to find your own balance between clear, understandable, maintainable code and performance. For 99.999% (repeating, of course) of applications, as long as you're mindful not to needlessly re-execute your more expensive methods, you'll never reach the point where you need to make something harder to maintain just for the sake of making it run faster.
There is overhead, but it's micro-optimization-level overhead. For example, an interface can make calls in the generated IL switch from call to callvirt, but that's incredibly minor.
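As a rough illustration (with a hypothetical Widget type; the exact IL depends on the compiler and settings), here is how the same logical call can compile to different IL instructions:

using System;

interface IWidget
{
    void Run();
}

sealed class Widget : IWidget
{
    public void Run() => Console.WriteLine("running");

    public static void RunStatic() => Console.WriteLine("static");
}

class Demo
{
    static void Main()
    {
        // Static method: compiles to a plain call instruction.
        Widget.RunStatic();

        Widget w = new Widget();
        // Instance call on a class: the C# compiler emits callvirt
        // even for non-virtual methods, to get the null check.
        w.Run();

        IWidget iw = w;
        // Interface call: callvirt on the interface method token,
        // resolved through interface dispatch at runtime.
        iw.Run();
    }
}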
Although an interface should NOT incur overhead, somehow it does. I don't know this directly, but second-hand: we work on cable boxes, and they are so underpowered that we test different performance scenarios. (The number of instantiated classes makes a HUGE difference.)
It shouldn't, because an interface has very little impact at runtime. It's not as if a call goes "through" an interface; the interface is simply the way the compiler links two pieces of code--at runtime it shouldn't be more than a pointer dispatch table.
And yet, it does affect performance. I'm guessing it has to do with metadata/reflection, because if you didn't need the metadata, the ONLY time an interface would be used is when casting from a less specific interface to that interface, and then it would only need to be a tag to check whether the cast is possible.
I'll follow this question because I'd love to know if anyone knows the technical reason why. You might want to expand it to Java, because the cause will be exactly the same, and you're more likely to get an answer with Java since everything is open source.
I wouldn't even bother thinking about the extra performance cost while writing software. It's only going to be noticeable in select circumstances. For example: making zillions of calls via an interface in a tight loop (and even then the cost of executing the method's body may well dwarf the interface call overhead).
In Code Complete, Steve McConnell advises against constant micro-optimisations. He says it's best to write a program using good practices (i.e. concentrate on maintainability, readability and so forth) and then, once you're finished, if performance is not good enough, profile it and take steps to fix the main bottlenecks.
I would class eschewing interfaces as a micro-optimisation. It is an implementation detail that can easily be stripped out afterwards if needed (assuming you're not shipping software any time soon).
There's no point in speculatively optimising all of your code just because it might be faster. If 80% of your execution time is spent executing 20% of the code, it is clearly folly to sacrifice loose coupling everywhere 'just because' it might shave off 10 microseconds here or there. So you save 10 microseconds, but your program won't get any faster if some other function is gobbling up the CPU.
Spend the time where it matters, where you know it matters.
Speaking from the point of view of Java, at least in recent versions of Hotspot, the answer is generally little or no overhead when it matters.
For example, supposing you have a method such as the following:
public void doSomethingWith(CharSequence cs) {
    char ch = cs.charAt(0);
    ...
}
CharSequence is, of course, an interface. So you might expect that the method will have to do extra work: checking the object's type, finding the method, possibly searching interfaces last, etc. etc.--essentially all the scare stories you could imagine...
But in reality, the VM can be a lot cleverer than that. If it works out that in practice you're always passing objects of a particular type, then not only can it skip the object type check, it can even inline the method. For example, if you call a method such as the above in a loop on a series of Strings, Hotspot can actually inline the call to charAt() so that getting the character literally becomes a couple of MOV instructions--in other words, a method call on an interface can turn into not having a method call at all. (P.S. This information is based on assembly output from the debug version of 1.6 update 12.)
Couldn't resist and tested it, and it looks like there's almost no overhead.
Participants are:
Interface IFoo defining a method
class Foo : IFoo, implementing IFoo
class Bar, implementing the same method as Foo, but with no interface involved
So I defined:

Foo realfoo = new Foo();
IFoo ifoo = new Foo();
Bar bar = new Bar();
and called the method, which does 20 string concatenations, 10,000,000 times on each variable.
realfoo: 723 milliseconds
ifoo: 732 milliseconds
bar: 728 milliseconds
If the method does nothing, the actual calls stand out a bit more.
realfoo: 48 milliseconds
ifoo: 62 milliseconds
bar: 49 milliseconds
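For reference, here is a minimal sketch of the kind of benchmark described above (the method body, names, and timing harness are a reconstruction, not the original code):

using System;
using System.Diagnostics;

interface IFoo
{
    string DoWork();
}

class Foo : IFoo
{
    public string DoWork()
    {
        string s = "";
        for (int i = 0; i < 20; i++)  // 20 string concatenations
            s += "x";
        return s;
    }
}

class Bar  // same method as Foo, but no interface involved
{
    public string DoWork()
    {
        string s = "";
        for (int i = 0; i < 20; i++)
            s += "x";
        return s;
    }
}

class Benchmark
{
    const int Iterations = 10000000;

    static void Main()
    {
        Foo realfoo = new Foo();
        IFoo ifoo = new Foo();
        Bar bar = new Bar();

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++) realfoo.DoWork();
        Console.WriteLine("realfoo: " + sw.ElapsedMilliseconds + " ms");

        sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++) ifoo.DoWork();
        Console.WriteLine("ifoo: " + sw.ElapsedMilliseconds + " ms");

        sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++) bar.DoWork();
        Console.WriteLine("bar: " + sw.ElapsedMilliseconds + " ms");
    }
}

Note that the JIT may inline or devirtualize some of these calls, so results will vary across runtimes and builds.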
Unfortunately for Java, there is still optimization that could be done to improve interface performance. Yes, there is "almost no overhead" for the invokevirtual and invokeinterface instructions compared to invokespecial, but there is a Da Vinci project which targets a performance shortcoming in the very, very common trivial use of interfaces: an object which implements only a single interface and is never overloaded.
See this Java bugparade request for enhancement for all the technical details you could wish for.
As always (and it seems you understand this), consult Amdahl's Law when quibbling about micro-optimizations like this. If you are making that many method calls and need the speed consider refactoring combined with thorough benchmarking.
Invoking through an interface is slightly costlier than other forms of virtual method invocation because of the extra layer of indirection in the vtable. In the majority of cases this shouldn't matter, so don't worry too much about performance; stick to good design.
Having said that, I recently refactored a few classes by introducing interfaces and making all the calls through the interface. I was so confident (or lazy) that this would have no impact that we released it without a performance check. It turned out that this had a 10% impact on the performance of the whole application (not just the calls). We had made a number of changes, and this was the last thing we suspected. Eventually, when we switched back to concrete classes, the original performance was restored.
This is a heavily optimized application, and the above may not apply in other cases.
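To picture that extra layer of indirection, here is a toy C# model of the two lookup paths. This is purely illustrative -- the real CLR uses an optimized mechanism (virtual stub dispatch) described in the post linked in the next answer.

using System;
using System.Collections.Generic;

class DispatchModel
{
    // Virtual dispatch (model): one table per type, the method is
    // found by a fixed slot index.
    static readonly Action[] VTable =
    {
        () => Console.WriteLine("slot 0: virtual method body"),
    };

    // Interface dispatch (model): first find the method table this
    // type provides for the interface, then index into that table.
    static readonly Dictionary<string, Action[]> InterfaceMaps =
        new Dictionary<string, Action[]>
        {
            ["IFoo"] = new Action[]
            {
                () => Console.WriteLine("slot 0: interface method body"),
            },
        };

    static void Main()
    {
        VTable[0]();                // one indirection: table[slot]
        InterfaceMaps["IFoo"][0](); // two indirections: map[interface][slot]
    }
}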
Virtual stub dispatch is different from interface dispatch. Vance Morrison, the CLR JIT lead, describes this in detail in his blog post: http://blogs.msdn.com/vancem/archive/2006/03/13/550529.aspx