I'm writing a lightweight game engine, and while doing some research for it I've come across a number of compelling articles advocating the implementation of Game Objects through a "collection of components" model rather than an "inheritance from concrete classes" model. There are lots of advantages:
- objects can be composed using data-driven design techniques, allowing designers to come up with new objects without involving a programmer;
- there tend to be fewer source file dependencies, allowing code to be compiled faster;
- the engine as a whole becomes more general;
- unforeseen consequences of having to change concrete classes high up the inheritance hierarchy can be avoided;
- and so on.
But there are parts of the system that remain opaque to me. Primary among these is how components of the same object communicate with each other. For example, let's say an object that models a bullet in the game is implemented in terms of these components:
- a bit of geometry for visual representation
- a position in the world
- a volume used for collision with other objects
- other things
At render time the geometry has to know its position in the world in order to display correctly, but how does it find that position among all its sibling components in the object? And at update time, how does the collision volume find the object's position in the world in order to test for its intersection with other objects?
I guess my question can be boiled down to this: Okay, we have objects that are composed of a number of components that each implement a bit of functionality. What is the best way for this to work at runtime?