I myself am convinced that, in a project I'm working on, signed integers are the best choice in the majority of cases, even though the values stored can never be negative. (Simpler reverse for loops, less chance of bugs, etc., in particular for integers which can only hold values between 0 and, say, 20 anyway.)

The majority of the places where this goes wrong are simple iterations over a std::vector; often the container used to be an array and was changed to a std::vector later. These loops generally look like this:

for (int i = 0; i < someVector.size(); ++i) { /* do stuff */ }

Because this pattern is used so often, the amount of compiler warning spam about the comparison between signed and unsigned types tends to hide more useful warnings. Note that we definitely do not have vectors with more than INT_MAX elements, and note that until now we have used two ways to fix these compiler warnings:

for (unsigned i = 0; i < someVector.size(); ++i) { /*do stuff*/ }

This usually works, but it might silently break if the loop contains code like 'if (i - 1 >= 0) ...', which is always true once i is unsigned.
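For example, a minimal sketch of how the switch to unsigned bites (assuming someVector as above; the loop body is just a placeholder):

for (unsigned i = 0; i < someVector.size(); ++i)
{
    // Intended to skip the first element, but i - 1 is computed in unsigned
    // arithmetic: when i == 0 it wraps to UINT_MAX, the test is always true,
    // and someVector[i - 1] becomes an out-of-bounds access.
    if (i - 1 >= 0) { /* use someVector[i - 1] */ }
}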

for (int i = 0; i < static_cast<int>(someVector.size()); ++i) { /*do stuff*/ }

This change does not have any side effects, but it does make the loop a lot less readable. (And it's more typing.)

So I came up with the following idea:

template <typename T> struct vector : public std::vector<T>
{
    typedef std::vector<T> base;

    // Hide the size_type-returning members with versions that narrow to int.
    int size() const     { return static_cast<int>(base::size()); }
    int max_size() const { return static_cast<int>(base::max_size()); }
    int capacity() const { return static_cast<int>(base::capacity()); }

    // Constructors are not inherited, so forward the ones we use.
    vector()                  : base() {}
    vector(int n)             : base(n) {}
    vector(int n, const T& t) : base(n, t) {}
    vector(const base& other) : base(other) {}
};

template <typename Key, typename Data> struct map : public std::map<Key, Data>
{
    typedef std::map<Key, Data> base;
    typedef typename base::key_compare key_compare;

    int size() const     { return static_cast<int>(base::size()); }
    int max_size() const { return static_cast<int>(base::max_size()); }

    int erase(const Key& k) { return static_cast<int>(base::erase(k)); }
    int count(const Key& k) { return static_cast<int>(base::count(k)); }

    map() : base() {}
    map(const key_compare& comp) : base(comp) {}
    template <class InputIterator> map(InputIterator f, InputIterator l) : base(f, l) {}
    template <class InputIterator> map(InputIterator f, InputIterator l, const key_compare& comp) : base(f, l, comp) {}
    map(const base& other) : base(other) {}
};

// TODO: similar code for other container types

What you see is basically the STL classes with the methods that return size_type hidden by versions that return plain 'int'. The constructors are needed because constructors aren't inherited.
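For illustration, with the wrapper in scope the original loop compiles without the signed/unsigned comparison warning (a sketch only; the body is a placeholder):

vector<int> someVector(20); // the wrapper above, not std::vector directly

for (int i = 0; i < someVector.size(); ++i) { /* do stuff */ } // size() is now int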

What would you think of this as a developer, if you'd see a solution like this in an existing codebase?

Would you think 'whaa, they're redefining the STL, what a huge WTF!', or would you think this is a nice, simple solution to prevent bugs and increase readability? Or maybe you'd rather see that we had spent (half) a day or so changing all these loops to use std::vector<>::iterator?

(In particular if this solution were combined with banning the use of unsigned types for anything but raw data (e.g. unsigned char) and bit masks.)

+3  A: 

Yes, I agree with Richard. You should never use 'int' as the counting variable in a loop like those. The following is how you might want to write various loops using indices (although there is little reason to; occasionally this can be useful).

Forward

for(std::vector<int>::size_type i = 0; i < someVector.size(); i++) {
    /* ... */
}

Backward

You can do this, which is perfectly defined behavior:

for(std::vector<int>::size_type i = someVector.size() - 1; 
    i != (std::vector<int>::size_type) -1; i--) {
    /* ... */
}

Soon, with C++1x (the next C++ version) coming along nicely, you will be able to do it like this:

for(auto i = someVector.size() - 1; i != (decltype(i)) -1; i--) {
    /* ... */
}

Decrementing below 0 will cause i to wrap around, because it is unsigned.

`But unsigned will make bugs creep in`

That should never be an argument for doing it the wrong way (using 'int').

Why not use std::size_t above?

The C++ Standard says in 23.1/5 (Container Requirements) that T::size_type, for T being some Container, is an implementation-defined unsigned integral type. Now, using std::size_t for i above would let bugs creep in silently. If T::size_type is narrower or wider than std::size_t, then the conversion can overflow i, or i may not even reach (std::size_t)-1 when someVector.size() == 0. Either way, the loop condition would be completely broken.

Johannes Schaub - litb
The appropriate type to use would be std::vector<T>::size_type, not std::size_t.
ChrisN
You are right. I confused it with string::size_type, which is always std::size_t. I will change it accordingly.
Johannes Schaub - litb
+2  A: 

Don't derive publicly from STL containers. They have non-virtual destructors, so deleting one of your objects through a pointer to the base class invokes undefined behaviour. If you must derive, e.g. from a vector, do it privately and expose the parts you need with using declarations.
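For example, a sketch of the private-inheritance approach (the class name and the selection of exposed members are just illustrative):

class int_vector : private std::vector<int>
{
    typedef std::vector<int> base;

public:
    // Re-export only what callers need; nobody can delete an int_vector
    // through a std::vector<int>*, so the non-virtual destructor is harmless.
    using base::begin;
    using base::end;
    using base::push_back;
    using base::operator[];

    int_vector() {}
    explicit int_vector(int n) : base(n) {}

    int size() const { return static_cast<int>(base::size()); }
};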

Here, I'd just use a size_t as the loop variable. It's simple and readable. The poster who commented that using an int index exposes you as a n00b is correct. However, using an iterator to loop over a vector exposes you as a slightly more experienced n00b - one who doesn't realize that the subscript operator for vector is constant time. (vector<T>::size_type is accurate, but needlessly verbose IMO).
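That is, simply (same placeholder body as in the question):

for (std::size_t i = 0; i < someVector.size(); ++i) { /* do stuff */ }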

fizzer
As ChrisN correctly commented, vector<T>::size_type is not necessarily std::size_t. So if you do size_t i = someVector.size() - 1, you do not necessarily end up with (size_t)-1. That means your loop condition is then broken.
Johannes Schaub - litb
No, size_type for the default allocator is always size_t.
fizzer
Yes, but vector<T>::size_type is NOT defined in terms of the default allocator in my copy of the standard (I made this mistake too in my own comment above). It's defined in terms of the Container requirements as an implementation-defined unsigned integral type.
Johannes Schaub - litb
That's a good spot - I missed that, thanks. But as size_t will never be narrower than size_type, I don't see the problem.
fizzer
See my changed answer for where problems will arise if you loop backwards :) And I've tried to look up where you got the claim that size_t is never narrower than size_type. Haven't found it :)
Johannes Schaub - litb
size_t and ptrdiff_t are both only required to go up to at least 65535. That doesn't say anything about their real values in implementations of C89 / C++03: ptrdiff_t's positive maximum could be 2^31-1 while size_t's maximum could be 65535, which would make size_t the narrower one. Any ideas?
Johannes Schaub - litb
OK, I see it if you loop backwards and the vector is empty. Nobody told me we were looping backwards.
fizzer
Oh, you replied again - look, I'm halfway through varnishing a door. I will get back to you when I can give it proper attention, 30 mins.
fizzer
One way to get there: size_t is large enough to hold the size of any object, and vector is guaranteed to be backed by contiguous storage, so size_type <= size_t.
fizzer
I think that's a good point. Though formally using std::size_t is wrong, by your example std::size_t will in reality always be at least as large as ptrdiff_t (so it works for forward looping). The problem with looping backward remains, though :p
Johannes Schaub - litb
+1  A: 

vector's size() returns a size_t, so just change int to size_t and it should be fine.

Richard's answer is more correct, except that it's a lot of work for a simple loop.

Lodle
A: 

You're overthinking the problem.

Using a size_t variable is preferable, but if you don't trust your programmers to use unsigned correctly, go with the cast and just deal with the ugliness. Get an intern to change them all and don't worry about it after that. Turn on warnings-as-errors and no new ones will creep in. Your loops may be "ugly" now, but you can understand that as a consequence of your religious stance on signed versus unsigned.

Dan Olson
+1  A: 

Definitely use an iterator. Soon you will be able to use the 'auto' type for better readability (one of your concerns), like this:

for (auto i = someVector.begin();
     i != someVector.end();
     ++i)
Tim Weiler
spot the typo! :-)
rq
Heh. As soon as this syntax is available in mainstream Linux distributions (maybe it already is?), MinGW and Visual Studio, I'm definitely gonna use this. Looks so much better than std::vector<>::iterator it = someVector.begin() etc...
Tobi
Oops, lots of typos in that too - well, I suppose the meaning is clear enough :-)
Tobi
No "auto" in GCC yet, not even bleeding edge betas AFAIK. http://gcc.gnu.org/gcc-4.4/cxx0x_status.html
rq
+2  A: 

While I don't think "use iterators, otherwise you look like a n00b" is a good solution to the problem, deriving from std::vector appears much worse than that.

First, developers do expect vector to be std::vector, and map to be std::map. Second, your solution does not scale to other containers, or to other classes/libraries that interact with containers.

Yes, iterators are ugly, iterator loops are not very readable, and typedefs only cover up the mess. But at least they scale, and they are the canonical solution.

My solution? An STL for-each macro. That is not without problems (mainly, it is a macro, yuck), but it gets the meaning across. It is not as advanced as e.g. this one, but it does the job.
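The actual macro isn't shown here, but a minimal sketch of the idea might look like this (the name STL_FOR_EACH and the explicit type parameter are my own; real versions such as BOOST_FOREACH deduce the type and handle const containers and temporaries):

// A simple C++03-style for-each macro: the container type is passed
// explicitly and the iterator name is exposed inside the loop body.
// Note: types containing a comma (e.g. std::map<K, V>) need a typedef first,
// because the preprocessor would split them into two arguments.
#define STL_FOR_EACH(type, it, container) \
    for (type::iterator it = (container).begin(); \
         it != (container).end(); ++it)

// Usage:
//   STL_FOR_EACH(std::vector<int>, it, someVector) { /* do stuff with *it */ }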

peterchen
BTW, VC++ has "for each" keyword out of the box. It's not portable, of course, but it's there if you don't care of portability.