views:

607

answers:

5

What is the relationship between using virtual functions and C++ inheritance mechanisms versus using templates and something like boost concepts?

It seems like there is quite an overlap of what is possible. Namely, it appears to be possible to achieve polymorphic behavior with either approach. So, when does it make sense to favor one over the other?

The reason why I bring this up is because I have a templated container, where the containers themselves have a hierarchical relationship. I would like to write algorithms that use these containers without caring about which specific container it is. Also, some algorithms would benefit from knowing that the template type satisfied certain concepts (Comparable, for example).

So, on one hand, I want containers to behave polymorphically. On the other, I still have to use concepts if I want to correctly implement some algorithms. What is a junior developer to do?

A: 

If a decision can be made at compile-time, use templates. Otherwise use inheritance and virtual functions.

anon
+1  A: 

Yes, polymorphic behavior is possible with both mechanisms; in fact, both are forms of polymorphism.

Virtual functions give you dynamic polymorphism (because it's decided at runtime), while templates give you static polymorphism (everything is decided at compile-time).

And that should answer the question of which to prefer as well. Whenever possible, prefer to move work to compile-time. So when you can get away with it, use templates to solve your polymorphism needs. And when that's not possible (because the decision depends on runtime information, so the exact types aren't known at compile-time), fall back to dynamic polymorphism.

(Of course there may be other reasons to prefer one or the other. In particular, templates require you to move a lot of code to header files which may or may not be a problem, and compilation speed tends to suffer, which also may or may not be a problem.)

jalf
A: 

In this specific case you can do something like

template<typename T>
class ContainerBase{};

template<typename T>
class ContainerDerived : public ContainerBase<T> {};

Since each 'Container' instantiation is a distinct type for each template argument, there's no reason member functions of each container type couldn't be specialized on the template type's traits.

+4  A: 

I think of concepts as a kind of meta-interface. They categorize types by their abilities. The next C++ version supplies native concepts. I hadn't understood them until I came across C++1x's concepts and saw how they allow putting different yet unrelated types together. Imagine you have a Range interface. You can model that in two ways. One is a subtype relationship:

class Range {
public:
    virtual ~Range() { }

    virtual Iterator * begin() = 0;
    virtual Iterator * end() = 0;

    virtual size_t size() = 0;
};

Of course, every class that derives from that implements the Range interface and can be used with your functions. But now you see it is limited. What about an array? It's a range too!

T t[N];

begin() => t
end() => t + size()
size() => N

Sadly, you cannot make an array derive from that Range class to implement the interface. You need extra non-member functions (overloading). And what about third-party containers? A user of your library might want to use their containers together with your functions, but they can't change the definition of those containers. Here, concepts come into play:

auto concept Range<typename T> {
    typename iterator;
    iterator T::begin();
    iterator T::end();
    size_t T::size();
}

Now, you say something about the supported operations of some type, which is fulfilled if T has the appropriate member functions. In your library, you would write the function generically. This allows you to accept any type, as long as it supports the required operations:

template<Range R>
void assign(R const& r) {
    // ... iterate from r.begin() to r.end().
}

It's a great kind of substitutability. Any type that adheres to the concept fits the bill, not only those types that actively implement some interface. The next C++ Standard goes further: it defines a Container concept that plain arrays satisfy (via something called a concept map, which defines how a type fits a concept), as do the other, existing standard containers.

The reason why I bring this up is because I have a templated container, where the containers themselves have a hierarchical relationship. I would like to write algorithms that use these containers without caring about which specific container it is. Also, some algorithms would benefit from knowing that the template type satisfied certain concepts (Comparable, for example).

You can actually do both with templates. You can keep your hierarchical relationship to share code, and then write the algorithms in a generic fashion, for example to communicate that your container is comparable. That's how the standard random-access/forward/output/input iterator categories are implemented:

// tag types for the comparator category
struct not_comparable { };
struct basic_comparable : not_comparable { };

template<typename T>
class MyVector : public BasicContainer<T> {
public:
    // must be public, so algorithms can access it
    typedef basic_comparable comparator_kind;
};

/* part of the Container concept: */
T::comparator_kind: the comparator category

It's a reasonably simple way to do it, actually. Now you can call a function and it will forward to the correct implementation.

// implementation for basic_comparable containers
template<typename Container>
void takesAdvantage(Container const& c, basic_comparable) {
    // ...
}

// implementation for not_comparable containers
template<typename Container>
void takesAdvantage(Container const& c, not_comparable) {
    // ...
}

// dispatcher: constructs the tag and forwards to the
// matching implementation above
template<typename Container>
void takesAdvantage(Container const& c) {
    takesAdvantage(c, typename Container::comparator_kind());
}

There are actually different techniques for implementing this. Another way is to use boost::enable_if to enable or disable the different implementations.

Johannes Schaub - litb
C++1x? Does that mean they've given up releasing the new standard in this decade or are you talking about future C++ development?
jpalecek
http://www.research.att.com/~bs/C++0xFAQ.html#concepts
jmucchiello
jpalecek, they want to release it in 2010. i have the habit of calling it c++1x :)
Johannes Schaub - litb
jpalecek, if you are interested in reading further, take http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2800.pdf and read chapter 14.9 . in 13.1.6 you find the container concept. have fun :)
Johannes Schaub - litb
I was only concerned about the identifier, C++1x might suggest you talk about the next (beyond c++0x) standard. BTW, the container concept is in 23.1.6, not 13.1.6
jpalecek
silly litb. of course. s,13.1.6,23.1.6,g :)
Johannes Schaub - litb
A: 

As a simple example of the difference between compile-time and run-time polymorphism consider the following code:

template<typename tType>
struct compileTimePolymorphism
{ };

// compile-time polymorphism:
// you can describe a behavior on some object type
// through the template, but you cannot interchange
// the instantiations - they are unrelated types
compileTimePolymorphism<int> l_intTemplate;
compileTimePolymorphism<float> l_floatTemplate;
// compileTimePolymorphism *l_templatePointer;  // impossible: no common base type

struct A {};
struct B : public A{};
struct C : public A{};

// runtime polymorphism 
// you can interchange objects of different type
// by treating them like the parent
B l_B;
C l_C;
A *l_A = &l_B;
l_A = &l_C;

Compile-time polymorphism is a good solution when the behavior of one object depends on another object's type. Run-time polymorphism is necessary when the behavior of an object needs to change at runtime.

The two can be combined by defining a template which is polymorphic:

template<typename tType>
struct myContainer : public tType
{};

The question then is where the behavior of your container needs to change (runtime polymorphism), and where the behavior depends on the objects it contains (compile time polymorphism).

SmokingRope