views: 647

answers: 6

I'm writing a lightweight game engine, and while doing some research for it I've come across a number of compelling articles advocating the implementation of game objects through a "collection of components" model rather than an "inheritance from concrete classes" model. There are lots of advantages:

  • objects can be composed using data driven design techniques, allowing designers to come up with new objects without involving a programmer;
  • there tend to be fewer source file dependencies, allowing code to be compiled faster;
  • the engine as a whole becomes more general;
  • unforeseen consequences of having to change concrete classes high up the inheritance hierarchy can be avoided;
  • and so on.

But there are parts of the system that remain opaque. Primary among these is how components of the same object communicate with each other. For example, let's say an object that models a bullet in the game is implemented in terms of these components:

  • a bit of geometry for visual representation
  • a position in the world
  • a volume used for collision with other objects
  • other things

At render time the geometry has to know its position in the world in order to display correctly, but how does it find that position among all its sibling components in the object? And at update time, how does the collision volume find the object's position in the world in order to test for its intersection with other objects?

I guess my question can be boiled down to this: Okay, we have objects that are composed of a number of components that each implement a bit of functionality. What is the best way for this to work at runtime?

+1  A: 

Composable architectures usually rely on interfaces. A component then is implementation plus data, enabling designers to re-use available implementations with different data — e.g. using the rocket code once with a rocket graphic and once with an arrow graphic. The flexibility comes from being able to "configure" such combinations outside of the actual run-time.

Within the run-time, the objects receive and provide the necessary information via the interfaces. For example, an object would receive an origin and a reference direction to position itself in the world. For actually drawing stuff I'd presume that a kind of graphical context would be passed around and the infrastructure takes care of aligning the default offset/projection appropriately for the current object.

David Schmitt
A: 

It sounds a little over-engineered; what do you gain by making location an abstract component of an object instead of a fundamental property?

But if you really want to do it that way, I guess you could set up a dependency graph where everything's explicitly connected. So the (e.g.) collision-volume has a location input that's hooked up to the position-component's output. Take a look at the internals of Maya to get an idea of how this can work.

But again, IMHO this looks a lot like overkill.

A: 

Could you give each component a reference back to the Game object?

That would let a component find the world position by going back to the Game object and drilling down from there.

Guvante
This sounds like a nice easy way to go (see my example) - I wonder if there is another way though?
Iain
+1  A: 

Another great reason for pursuing this strategy is the ability to compose the behaviour of an object from behaviour components, allowing you to re-use behaviours across multiple game objects.

So, for example, you have a basic game object class with these properties: burnable, movable, alive. By default each holds a null reference. If you want to make your object burnable, set:

object.burnable = new Burnable(object);

Now, any time you want to burn an object, use:

if (object.burnable != null)
{
   object.burnable.burn();
}

And the burnable behaviour will modify the game object in whatever way you desire.

Iain
I'd make Burnable into a functor (implement operator() rather than a burn() method), so you could just say object.burnable(); (And then rename burnable to burn) I'd also make it so that object.burn by default holds a no-op functor, so your client doesn't need to check if it's null, first.
Matt Cruikshank
What if inside burnable you also wanted to hold some variables like how flammable it is, how long will it burn for, etc? Could they still go in a "burn" object?
Iain
+1  A: 

I've seen (and tried) several ways to implement this:

1) Components don't exist in a vacuum, but are collected in an "entity" (or "game object") object. All components have a link back to their entity, so your collision may do something like GetEntity()->GetComponent("Position")->GetCoords() (possibly checking for nulls etc. - the details depend on the language you're working in).

In this case, it can sometimes be convenient to put some common information directly in the entity (the position, a unique ID, "active/inactive" status) - there's a tradeoff between making something "pure" and generic, and making something quick and efficient.

2) There is no entity, only components (I'm using this for my own lightweight game engine). In this case, components have to be explicitly linked to other components, so maybe your "collision" and "graphics" will keep a pointer to "position".

Emile
@emile: In your opinion, what would be the best way - pure aggregation, or a hybrid model? I would love to see a good article about this subject.
Mr.Gando
+1  A: 

I've always found Kyle Wilson's blog to be an interesting source from someone who works with this and seems to give it a lot of thought. This entry in particular might be of interest: http://gamearchitect.net/2008/06/01/an-anatomy-of-despair-aggregation-over-inheritance/. It's not the key point of the article, but basically what he says is that they (while developing 'Fracture') had separate hierarchies: one for GameObjects, and a SceneGraph for the visual representation. Personally I think that's a very sound design, but I'm not an expert in the field.

Andreas Magnusson