C++ is basically C with classes.
C++ is much more than that! It's a multi-paradigm language, and one capable of blending these paradigms harmoniously to create code that would otherwise have been inferior had we used a single paradigm for the same problem. Skipping the more subtle differences (ex: greater type safety, better variable scoping), let's consider just the multi-paradigm aspect.
Let's assume that we're writing a linked list, and pretend that no linked list implementation, algorithms, or utilities of any sort already existed in the standard C++ library or Boost to aid us.
We decide to use object-oriented techniques and generics, and successfully create a list class capable of doing anything one would ever want to do with a linked list: insert elements into it, remove elements from it, and iterate through it.
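To make the discussion concrete, here's a minimal sketch of such a class (the names `List` and `Node` are hypothetical, and removal is omitted for brevity):

```cpp
// Hypothetical minimal singly-linked list used for illustration only.
template <typename T>
class List {
    struct Node {
        T value;
        Node* next;
    };
    Node* head_ = nullptr;

public:
    class iterator {
        Node* node_;
    public:
        explicit iterator(Node* n) : node_(n) {}
        T& operator*() const { return node_->value; }
        iterator& operator++() { node_ = node_->next; return *this; }
        bool operator!=(const iterator& rhs) const { return node_ != rhs.node_; }
    };

    List() = default;
    List(const List&) = delete;            // keep the sketch simple
    List& operator=(const List&) = delete;
    ~List() {
        while (head_) {
            Node* next = head_->next;
            delete head_;
            head_ = next;
        }
    }

    void push_front(const T& value) { head_ = new Node{value, head_}; }

    iterator begin() { return iterator(head_); }
    iterator end()   { return iterator(nullptr); }
};
```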
Now suppose we want to be able to find elements in the list. What do we do?
Given primarily object-oriented techniques at our disposal (only classes), we might be tempted to:
1. Simply add a search function directly to the list class. Yet what happens when we later discover that we also want to reverse the list and do all kinds of other things with it? The result will be a monolithic class that goes through endless maintenance and revisions. Moreover, if the list's implementation details change, this monolithic class might require a complete rewrite!
2. Inherit from the list to make "super list" classes with more functions. This is possibly even worse than #1, as it is prone to slicing issues, incorrect downcasting, and so on. It is also potentially just as vulnerable to breaking on rewrites.
3. Use composition: write a wrapper class which stores the list as a member and provides additional functions on top of it. This is far superior to #1 and #2, but requires an interface which duplicates the existing list's functionality. Most people would not choose this approach because of the tedium involved.
4. Do the same as #3, but store a reference/pointer to the list and add the additional functions on top, along with an accessor to the original list. This is also far superior to #1 and #2, but requires a lengthier syntax on the client's side to access the original list's interface.
5. Write a separate class which provides algorithms that act on lists passed in. This is the best approach given what we have, but it is primarily a procedural approach, regardless of whether we choose to put these functions in a separate class. Unfortunately, people working in languages that are more strictly object-oriented, like Java, do not apply this solution often enough, instead favoring solutions like #1 and #2. The creation of a new class just to hold new functions is probably counter-intuitive to most people.
In C++, a multi-paradigm language, the best solution is obvious: simply write a separate, free function for such auxiliary operations! When we do this, we get all the benefits of #5: no tight coupling, and therefore no vulnerability to list rewrites. The implementation could change from a singly-linked list to a doubly-linked list, or even to a completely different sequence type; it doesn't matter as long as the public interface doesn't change. And unlike #3/#4 (the composition approaches), we don't have to duplicate the public interface of the list.
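As a sketch (building on the hypothetical `List` above; the signature is illustrative, not a fixed interface), such a free function might look like this:

```cpp
// Free function: searches through the list's public interface only,
// so it survives any internal rewrite that preserves begin()/end().
template <typename T>
typename List<T>::iterator find(List<T>& list, const T& value) {
    for (auto it = list.begin(); it != list.end(); ++it) {
        if (*it == value) {
            return it;
        }
    }
    return list.end();
}
```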
Finally, when we do things this way, we realize through additional paradigms like generic and functional programming that we can generalize these separate list algorithms to work on far more than list types, and even accommodate predicate-based searches, without any abstraction cost.
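A sketch of that generalization (essentially the shape of the standard library's own `std::find_if`): rewritten against iterators and a predicate, the same algorithm now works with any sequence, and since the templates resolve at compile time, there is no abstraction cost:

```cpp
// Generic version: works with any forward-iterable sequence and any
// predicate, not just our hypothetical List.
template <typename Iterator, typename Predicate>
Iterator find_if(Iterator first, Iterator last, Predicate pred) {
    for (; first != last; ++first) {
        if (pred(*first)) {
            return first;
        }
    }
    return last;
}

// Usage with the List sketch above:
//   auto it = find_if(list.begin(), list.end(),
//                     [](const int& x) { return x > 10; });
```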
Exception-handling is slower than error-code handling (even in non-exceptional cases).
This may be the case with certain compiler implementations, but we need a fair comparison! How many times have we seen a real-world system of any scale that properly checks for every possible error (every possible malloc failure, for one example) that the program can ever encounter, and properly propagates it up the call stack manually to the error-handling site? Almost every function in the system would need to do some error handling to be as thorough as exceptions, and we never find that kind of thoroughness in large-scale, real-world projects. Of course code written to thoroughly deal with all possible errors is going to be slower than code which ignores many of them and doesn't handle them thoroughly.
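To see why the comparison is rarely fair, consider what thorough manual propagation actually costs in code. This contrived sketch forwards a single allocation failure up just two levels, something exceptions would do implicitly at every level:

```cpp
#include <cstdio>
#include <cstdlib>

// Contrived sketch: every level must check and forward the error code
// by hand for the handling to be as thorough as exceptions would be.
enum Status { OK, OUT_OF_MEMORY };

static Status load_buffer(char** out) {
    *out = static_cast<char*>(std::malloc(1024));
    return *out ? OK : OUT_OF_MEMORY;
}

static Status load_record(char** out) {
    Status s = load_buffer(out); // must check...
    if (s != OK) return s;       // ...and propagate
    return OK;
}

int main() {
    char* buf = nullptr;
    if (load_record(&buf) != OK) { // the error is finally handled here
        std::fputs("out of memory\n", stderr);
        return EXIT_FAILURE;
    }
    std::free(buf);
    return EXIT_SUCCESS;
}
```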
Exception-handling is optional with C++ and can simply be turned off without side effects.
Consider what happens with `std::list<T, Alloc>::push_back` when called to insert an object of type `Foo`. We invoke the copy ctor of `Foo` when we do this, and it could throw (ex: a call to `operator new` threw `std::bad_alloc`). Good implementations of the standard library deal with this case very gracefully, rolling back everything done so far in the `push_back` as though it were a transaction, yielding a valid list state as though we had never inserted anything. Turn off exception-handling and suddenly we have no way to get this kind of robust behavior. The same is true not just of the C++ standard library, but of all kinds of other C++ libraries that use, for example, `operator new` (and not the nothrow version), like Boost.
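A sketch of that transactional behavior, using a hypothetical `Foo` whose copy constructor always fails so the rollback is observable:

```cpp
#include <iostream>
#include <list>
#include <stdexcept>

// Hypothetical type whose copy constructor can throw,
// standing in for a real copy that might fail to allocate.
struct Foo {
    Foo() = default;
    Foo(const Foo&) { throw std::runtime_error("copy failed"); }
};

int main() {
    std::list<Foo> foos;
    Foo f;
    try {
        foos.push_back(f); // the copy ctor throws mid-insertion
    } catch (const std::exception&) {
        // The list rolls back as though the insertion never happened.
        std::cout << "size after failed push_back: " << foos.size() << '\n'; // prints 0
    }
    return 0;
}
```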
C++ hides code more than C.
Short of exception-handling and implicit constructors (which should generally be avoided), this is often not the case. It's a common argument made by people who lack experience with C++. Given a section of code, a novice might have a difficult time telling you where a destructor will be called under non-exceptional execution flow, but an experienced C++ programmer has no problem pointing this out.
C programmers are used to the idea that operators are never functions. C++ programmers are used to the idea that they can be, and can easily point out where an operator will invoke user-defined functionality: one need only look at whether the operands involved are PODs.
To the contrary, a lot of C systems hide code far more effectively from the programmer than C++ does. Consider the C preprocessor, for instance: nothing hides code better than macros, since we cannot even practically trace through a macro's code with a debugger.
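As a sketch of that kind of hiding (the `CHECK` macro here is hypothetical, but typical of C codebases), note how a `return` statement lives invisibly inside the call site:

```cpp
#include <cstdio>

// Typical C-style macro: it hides an early return and control flow
// that a debugger cannot step through as ordinary code.
#define CHECK(expr)                                             \
    do {                                                        \
        if (!(expr)) {                                          \
            std::fprintf(stderr, "check failed: %s\n", #expr);  \
            return -1;                                          \
        }                                                       \
    } while (0)

int parse(int size) {
    CHECK(size > 0); // a hidden 'return -1' lives in this line
    std::printf("parsing %d bytes\n", size);
    return 0;
}

int main() {
    return parse(0) == -1 ? 0 : 1;
}
```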
Using directives should be avoided (ex: using namespace std;)
Sutter argues that using directives are what make namespaces practical, and for good reason. Without using directives, we have code that's very vulnerable to ADL-related problems. Virtually everyone agrees that argument-dependent lookup is very evil given how inconsistently it's implemented across compilers, and using directives mitigate that problem far more effectively than, say, using declarations (ex: writing `using std::cout` but forgetting `using std::operator<<`, or writing `using std::list` but forgetting `using std::swap`).
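A sketch of the `swap` pitfall just mentioned: generic code that spells out `std::swap(a, b)` never finds a type's own overload, while bringing the name in first keeps argument-dependent lookup working:

```cpp
#include <utility>

namespace lib {
    struct Widget {
        int* data = nullptr;
    };
    // A cheap, type-specific swap intended to be found through ADL.
    void swap(Widget& a, Widget& b) { std::swap(a.data, b.data); }
}

template <typename T>
void reorder(T& a, T& b) {
    using std::swap; // fallback for types with no swap of their own
    swap(a, b);      // ADL selects lib::swap for lib::Widget
    // Writing std::swap(a, b) here would silently bypass lib::swap.
}

int main() {
    lib::Widget x, y;
    reorder(x, y);
    return 0;
}
```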
Inlining one-liners like accessor functions will make them faster.
Wrong, and not even close. For all practical purposes, inlining such functions can reduce code bloat (provided the single line requires fewer instructions than pushing arguments onto the stack, calling the function, popping, and returning), but even then, inlining every instance of such code can slow things down: it can cause the compiler to optimize rarely-executed branches as aggressively as commonly-executed ones (the compiler can't read your mind about the desired runtime behavior of your code). Inlining should be done only with the aid of a profiler, where one can analyze the most commonly executed branches of code and selectively inline to help the compiler, even for one-liner functions.
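As a small illustration (the class here is hypothetical), `inline` is only a request; whether honoring it actually helps depends on how hot the call site is, which is exactly what a profiler tells you:

```cpp
// Hypothetical accessor: 'inline' is a hint, not a command. The compiler
// decides for itself, and profile data is a far better guide than guessing.
class Sprite {
    int x_ = 0;
public:
    int x() const;
};

inline int Sprite::x() const { return x_; } // a candidate, not a guarantee
```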