In day-to-day programs I wouldn't even bother thinking about the possible performance hit of coding against interfaces rather than implementations. The advantages largely outweigh the cost, so please no generic advice on good OOP.

Nevertheless, in this post the designer of the XNA (game) platform gives as his main argument for not designing his framework's core classes against an interface that it would imply a performance hit. Seeing as this is in the context of game development, where every fps possibly counts, I think it is a valid question to ask yourself.

Does anybody have any stats on that? I don't see a good way to test/measure this, as I don't know what implications I should bear in mind with such a game (graphics) object.

A: 

In my personal opinion, all the really heavy lifting when it comes to graphics is passed on to the GPU anyway. This frees up your CPU to do other things like program flow and logic. I am not sure if there is a performance hit when programming to an interface, but thinking about the nature of games, they are not something that needs to be extendable. Maybe certain classes, but on the whole I wouldn't think that a game needs to be programmed with extensibility in mind. So go ahead, code the implementation.

uriDium
It's not so much extensibility (although I could make a case for it); testability is my main beef. Please refer to http://stackoverflow.com/questions/804904/xna-mock-the-game-object/828482#828482
borisCallens
+6  A: 

Coding to an interface is always going to be easier, simply because interfaces, if done right, are much simpler. It's palpably easier to write a correct program using an interface.

And as the old maxim goes, it's easier to make a correct program run fast than to make a fast program run correctly.

So program to the interface, get everything working and then do some profiling to help you meet whatever performance requirements you may have.

Visage
But I didn't get to design the XNA framework, so there is no interface available.
borisCallens
In that specific case you have no choice, so any comparison is moot.
Visage
+1 for the maxim and the solid advice
annakata
I'm not asking which is best; I'm asking whether an interface implies a perf hit. And if so, what magnitude are we talking about?
borisCallens
The only way you'll get a definitive answer is to code up two ways of doing it and profile them.
Visage
+1  A: 

I think object lifetime and the number of instances you're creating will provide a coarse-grained answer.

If you're talking about something that will have thousands of instances with short lifetimes, I would guess that's probably better done with a struct rather than a class, let alone a class implementing an interface.

For something more component-like, with low numbers of instances and moderate-to-long lifetime, I can't imagine it's going to make much difference.
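
As a rough illustration of that split (all type names below are made up for the example, not XNA classes), the thousands-of-short-lived-instances case might look like a struct pool, while the component-like case can stay an ordinary class:

// Thousands of short-lived instances: a struct keeps the data inline in one
// array, so there is no per-particle heap allocation and no extra GC work.
struct Particle
{
    public float X, Y;
    public float Vx, Vy;
    public float Life;
}

// Few instances, long lifetime: an ordinary class is fine here, and an
// interface on top of it is unlikely to be measurable.
class ParticleSystem
{
    private readonly Particle[] pool = new Particle[10000]; // one allocation total

    public void Update(float dt)
    {
        for (int i = 0; i < pool.Length; i++)
        {
            pool[i].X += pool[i].Vx * dt;
            pool[i].Y += pool[i].Vy * dt;
            pool[i].Life -= dt;
        }
    }
}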

expedient
+2  A: 

First, I'd say the common conception is that programmer time is usually more important, and working against the implementation will probably force much more work when the implementation changes.

Second, with a proper compiler/JIT, I would assume that working with an interface takes a ridiculously small amount of extra time compared to working against the implementation itself. Moreover, techniques like templates can remove the interface dispatch from the generated code entirely.
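
For example, the rough .NET analogue of the C++ template trick is generics: when the type argument is a value type, the JIT compiles a specialized body for it, so the constrained call is direct rather than an interface dispatch. A minimal sketch with made-up types (not part of XNA):

interface IUpdatable
{
    void Update(float dt);
}

struct Bullet : IUpdatable
{
    public float X;
    public void Update(float dt) { X += 100f * dt; }
}

static class Updater
{
    // Interface-typed parameter: the struct gets boxed and the call goes
    // through interface dispatch.
    public static void UpdateViaInterface(IUpdatable item, float dt)
    {
        item.Update(dt);
    }

    // Generic, constrained parameter: for each value type T the JIT emits a
    // specialized body, so the call is direct and can potentially be inlined.
    public static void UpdateViaGenerics<T>(ref T item, float dt) where T : IUpdatable
    {
        item.Update(dt);
    }
}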

Third, to quote Knuth: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."
So I'd suggest coding well first, and only once you are sure the interface is actually a problem should you consider changing it.

Also, I would assume that if this performance hit were real, most games wouldn't have used an OOP approach with C++, but that is not the case; this article elaborates a bit on it.

It's hard to talk about tests in a general form; naturally a bad program may spend a lot of time going through bad interfaces, but I doubt that is true for all programs, so you really should look at each particular program.

Liran Orevi
For example, Java's JIT will know when an interface has only one implementation loaded, so it can optimize the method calls to be non-virtual. Then, if another implementation is loaded at runtime, the JIT will deoptimize the previously optimized code and recompile it.
Esko Luontola
+3  A: 

What Things Cost in Managed Code

"There does not appear to be a significant difference in the raw cost of a static call, instance call, virtual call, or interface call."

It depends on how much of your code gets inlined (or not) at compile/JIT time, which can increase performance roughly 5x.
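
If you want a number for your own machine rather than a quoted one, a crude microbenchmark along these lines is one way to get it (the types and names are invented for the example; results depend heavily on what the JIT decides to inline, so treat it as a sketch rather than a verdict):

using System;
using System.Diagnostics;

interface IAdder
{
    int Add(int x);
}

sealed class Adder : IAdder
{
    public int Add(int x) { return x + 1; }
}

static class CallCostBenchmark
{
    const int Iterations = 100000000;

    static void Main()
    {
        Adder concrete = new Adder();
        IAdder viaInterface = concrete;

        // Warm up so the JIT has compiled both paths before timing.
        CallConcrete(concrete, 1000);
        CallInterface(viaInterface, 1000);

        Stopwatch sw = Stopwatch.StartNew();
        int a = CallConcrete(concrete, Iterations);
        Console.WriteLine("concrete  : {0} ms (result {1})", sw.ElapsedMilliseconds, a);

        sw = Stopwatch.StartNew();
        int b = CallInterface(viaInterface, Iterations);
        Console.WriteLine("interface : {0} ms (result {1})", sw.ElapsedMilliseconds, b);
    }

    // Call through the concrete type: a straightforward candidate for inlining.
    static int CallConcrete(Adder adder, int iterations)
    {
        int sum = 0;
        for (int i = 0; i < iterations; i++) sum = adder.Add(sum);
        return sum;
    }

    // Call through the interface: dispatched at runtime, normally not inlined.
    static int CallInterface(IAdder adder, int iterations)
    {
        int sum = 0;
        for (int i = 0; i < iterations; i++) sum = adder.Add(sum);
        return sum;
    }
}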

It also takes longer to code to interfaces, because you have to code the contract (interface) and then the concrete implementation.

But doing things the right way always takes longer.

Chad Grant
Actually I believe that doing things the right way takes less time in the long run :).
Liran Orevi
Good for you, but my comment was in the context of building it the first time, not maintenance.
Chad Grant
You're absolutely right, I agree.
Liran Orevi
A: 

it would imply a performance hit

The designer should be able to prove his opinion.

Pavel Feldman
+1  A: 

Interfaces generally imply a few hits to performance (this however may change depending on the language/runtime used):

  1. Interface methods are usually implemented via a virtual call by the compiler. As another user points out, these cannot be inlined by the compiler, so you lose that potential gain. Additionally, they add a few instructions (jumps and memory accesses) at a minimum to get the proper PC in the code segment.
  2. Interfaces, in a number of languages, also imply a graph and require a DAG (directed acyclic graph) to properly manage memory. In various languages/runtimes you can actually get a memory 'leak' in the managed environment by having a cyclic graph. This imposes great stress (obviously) on the garbage collector/memory in the system. Watch out for cyclic graphs!
  3. Some languages use a COM style interface as their underlying interface, automatically calling AddRef/Release whenever the interface is assigned to a local, or passed by value to a function (used for life cycle management). These AddRef/Release calls can add up and be quite costly. Some languages have accounted for this and may allow you to pass an interface as 'const' which will not generate the AddRef/Release pair automatically cutting down on these calls.

Here is a small example of a cyclic graph where two interfaces reference each other and neither will automatically be collected, as their refcounts never drop to zero.

interface Parent {
  Child c;
}

interface Child {
  Parent p;
}

function createGraph() {
  ...
  Parent p = ParentFactory::CreateParent();
  Child c = ChildFactory::CreateChild();

  p.c = c;
  c.p = p;      
  ...  // do stuff here

  // p has a reference to c and c has a reference to p.  
  // When the function goes out of scope and attempts to clean up the locals
  // it will note that p has a refcount of 1 and c has a refcount of 1 so neither 
  // can be cleaned up (of course, this depends on the language/runtime and
  // whether DAGs are allowed for interfaces).  If you were to set c.p = null or
  // p.c = null then the 2 interfaces will be released when the scope is cleaned up.
}
Adam Markowitz
Although I won't pretend to understand it all, I see an educated answer. Time for some research.
borisCallens
Can you give an example of how to create a cyclic graph? Pseudocode, perhaps?
borisCallens
Doesn't the GC check whether the reference to Parent/Child points to an object that is also marked for collection? But I could see how such a check would have a hard limit on depth for performance's sake. I don't have hard facts on that, but I would believe the .NET GC has safeguards for such a thing.
borisCallens
This post is not specific to a language that implements a GC. Also, in heterogeneous systems such as .NET, you are allowed to cross managed/unmanaged boundaries, which could complicate the graph nodes' ref counting.
Adam Markowitz