The issue you're running into has to do with storage allocation. When arrays are allocated, they need to contain storage for all of their elements. Let me give a (highly simplified) example. Say you have classes set up like this:
class Base
{
public:
int A;
int B;
};
class ChildOne : public Base
{
public:
int C;
};
class ChildTwo : public Base
{
public:
double C;
};
When you allocate a Base[10], each element in the array will need (on a typical 32-bit system*) 8 bytes of storage: enough to hold two 4-byte ints. However, a ChildOne class needs the 8 bytes of storage of its parent, plus an additional 4 bytes for its member C. A ChildTwo class needs the 8 bytes of its parent, plus an additional 8 bytes for its double C. If you try to push either of these two child classes into an array that was allocated for an 8-byte Base, you'll wind up overflowing your storage.
The reason that arrays of pointers work is that they're constant size (4 bytes each on a 32-bit system), regardless of what they point at. A pointer to a Base is the same size as a pointer to a ChildTwo, despite the fact that the latter class is twice the size.
The dynamic_cast operator allows you to perform type-safe downcasting to change the Base* to a ChildTwo*, so it will solve your problem in this particular case.
Alternatively, you can decouple the processing logic from the data storage (the Strategy Pattern), by creating a class layout something like this:
class HandlerBase; // forward declaration so Data can hold a pointer to it
class Data
{
public:
int A;
int B;
Data(HandlerBase* myHandler);
int DoSomething(); // defined below, once HandlerBase is complete
protected:
HandlerBase* myHandler;
};
class HandlerBase
{
public:
virtual int DoSomething(Data* obj) = 0;
};
int Data::DoSomething() { return myHandler->DoSomething(this); }
class ChildHandler : public HandlerBase
{
public:
virtual int DoSomething(Data* obj) { return obj->A; }
};
This pattern would be appropriate in cases where the algorithmic logic of DoSomething may require significant setup or initialization that is common to a large number of objects (and could be handled in the ChildHandler construction), but not universal (and therefore not appropriate for a static member). The data objects then maintain consistent storage and point to the handler process that will be used to perform their operations, passing themselves as a parameter when they need to invoke something. Data objects of this sort have a consistent, predictable size and can be grouped into arrays to preserve referential locality, but still have all of the flexibility of the usual inheritance mechanism.
Note that you're still building what amounts to an array of pointers, however -- they're just nestled another layer deep below the actual array structure.
* For nitpickers: yes, I realize the numbers I gave for storage allocation ignore class headers, vtable information, padding, and a large number of other potential compiler considerations. This wasn't meant to be exhaustive.
Edit Part II: All of the following material is incorrect. I posted it off the top of my head without testing it, and confused the ability to reinterpret_cast two unrelated pointers with the ability to cast two unrelated classes. Mea culpa, and thanks to Charles Bailey for pointing out my gaffe.
The general effect is still possible -- you can forcibly grab an object out of the array and use it as another class -- but it requires taking the object address and forcing a pointer cast to the new object type, which defeats the theoretical purpose of avoiding a pointer dereference. Either way, my original point -- that this is a horrible "optimization" to be trying to make in the first place -- still holds.
Edit: Okay, I think with your latest edits I've figured out what you're trying to do. I'm going to give you a solution here, but please, for the love of all that is holy, swear to me that you will never use this in production code. This is an engineering curiosity, not a good practice.
You seem to be trying to avoid making a pointer dereference (possibly as a performance micro-optimization?) but still want the flexibility of invoking submethods on objects. If you know for certain that your base and derived classes are identical sizes -- and the only way you are going to know this is to examine the physical class layout generated by the compiler, because it can make all kinds of adjustments as it deems necessary, and the spec doesn't give you any guarantees -- then you can use reinterpret_cast to forcibly treat the parent as a child in the array.
class Base
{
public:
int A;
int B;
void DoSomething();
};
class Derived : public Base
{
public:
void DoSomething();
};
void DangerousGames()
{
// create an array of ten default-constructed Base on the stack
Base items[10];
// force the compiler to treat the bits of items[5] as a Derived,
// and make a ref
Derived& childItem = reinterpret_cast<Derived&>(items[5]);
// invoke Derived::DoSomething() using the data bits of items[5],
// since it has an identical layout
childItem.DoSomething();
}
This will save you a pointer dereference, and has no performance penalty, because reinterpret_cast is not a runtime cast; it's essentially a compiler override that says, "no matter what you think you know, I know what I'm doing, shut up and do it." The "slight downside" is that it makes your code ultra-fragile, because any change to the layout of Base or Derived, whether initiated by you or the compiler, will cause the whole thing to come crashing down in flames, with what are likely to be extremely subtle and almost impossible to debug undefined behaviors. Again, never use this in production code. Even in the most performance-critical realtime systems, the cost of a pointer dereference is always worth it compared to building what amounts to a hair-trigger nuclear bomb in the middle of your codebase.