The C++ standard seems to make no statement regarding side effects on capacity from either resize(n), with n < size(), or clear().

It does make a statement about the amortized cost of push_back and pop_back: O(1).

I can envision an implementation that does the usual sort of capacity changes à la CLRS (Cormen, Leiserson, Rivest, Stein), e.g. doubling the capacity when enlarging and halving it when the size drops below capacity()/4.
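
Sketched as code, such a hypothetical policy might look like this (a sketch only; the function names are mine, and no real vector implementation is claimed to work this way):

#include <cstddef>
#include <iostream>

// Hypothetical CLRS-style capacity policy, purely for illustration.
// Only size/capacity bookkeeping is shown; no storage is managed.
void grow_on_push(std::size_t& size, std::size_t& capacity)
{
    if (size == capacity)
        capacity = capacity ? capacity * 2 : 1; // double when full
    ++size;
}

void shrink_on_pop(std::size_t& size, std::size_t& capacity)
{
    --size;
    if (size < capacity / 4)
        capacity /= 2; // halve when occupancy drops below 1/4
}

int main()
{
    std::size_t size = 0, capacity = 0;
    for (int i = 0; i != 9; ++i) grow_on_push(size, capacity);
    std::cout << size << ' ' << capacity << '\n'; // prints: 9 16
    for (int i = 0; i != 6; ++i) shrink_on_pop(size, capacity);
    std::cout << size << ' ' << capacity << '\n'; // prints: 3 8
}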

Does anyone have a reference for any implementation restrictions?

+6  A: 

Calling resize() with a smaller size has no effect on the capacity of a vector. It will not free memory.

The standard idiom for freeing memory from a vector is to swap() it with an empty temporary vector: std::vector<T>().swap(vec);. If you want to resize downwards you'd need to copy from your original vector into a new local temporary vector and then swap the resulting vector with your original.
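
For example, a minimal sketch of both variants (element type and sizes are arbitrary):

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> vec(1000);
    vec.resize(20); // size() is now 20; capacity() is typically unchanged

    // Shrink capacity towards size(): copy-construct a temporary and swap.
    std::vector<int>(vec).swap(vec);
    std::cout << vec.capacity() << '\n'; // close to 20 on most implementations

    // Release all memory: swap with an empty temporary.
    std::vector<int>().swap(vec);
    std::cout << vec.capacity() << '\n'; // 0, or a small implementation minimum
}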

mattnewport
You're answering the wrong question. He means: "If I have a container with 50 elements and I `resize()` to 20, what happens?"
GMan
The way I read it, he's asking about the impact on memory usage - he specifically asks what effect resize has on capacity. The standard doesn't specify the result in that case, but the only reason to ask that I can think of is a desire to free unused memory. The swap-with-a-temporary trick is the idiomatic way of achieving that.
mattnewport
The standard does specify the result by not specifying a decrease of capacity() for these operations. Therefore it can't decrease.
MSalters
+5  A: 

Actually, the standard does specify what should happen:

This is from vector, but the theme is the same for all the sequence containers (list, deque, etc.):

23.2.4.2 vector capacity [lib.vector.capacity]

void resize(size_type sz, T c = T());

6) Effects:

if (sz > size())
    insert(end(), sz-size(), c);
else if (sz < size())
    erase(begin()+sz, end());
else
    ; //do nothing

That is to say: if the size specified to resize is less than the current number of elements, the excess elements will be erased from the container. Regarding capacity(), this depends on what erase() does to it.

I cannot locate it in the standard, but I'm pretty sure clear() is defined to be:

void clear()
{
    erase(begin(), end());
}

Therefore, the effect clear() has on capacity() is likewise tied to the effect erase() has on it. According to the standard:

23.2.4.3 vector modifiers [lib.vector.modifiers]

iterator erase(iterator position);
iterator erase(iterator first, iterator last);

4) Complexity: The destructor of T is called the number of times equal to the number of the elements erased....

This means that the elements will be destroyed, but the memory will remain intact: erase() has no effect on capacity(), and therefore neither resize() nor clear() has any effect on it.
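
A quick way to observe this (the exact capacities printed are implementation-defined; the point is that they do not shrink):

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v(100);
    std::cout << v.capacity() << '\n'; // at least 100

    v.resize(10); // erases 90 elements
    std::cout << v.capacity() << '\n'; // unchanged

    v.clear(); // erases the remaining 10
    std::cout << v.capacity() << '\n'; // still unchanged
}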

GMan
Perhaps a bit too much standardese? I agree with your conclusion, but a lot of that does not really address the question.
anon
Agreed. I have clipped it down a bit. :)
GMan
You clipped it a little too much, the "Effects" paragraph is for `resize` not `reserve`. ;)
avakar
I think it's time for GMan to go to sleep. -_- I'm failing at communication today, if someone finds a better way to say what I'm trying to say, please feel free to edit :X
GMan
+2  A: 

The capacity will never decrease. I'm not sure if the standard states this explicitly, but it is implied: iterators and references to vector's elements must not be invalidated by resize(n) if n < capacity().
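
A small sanity check of that guarantee (a sketch; the assert should hold on any conforming implementation):

#include <cassert>
#include <vector>

int main()
{
    std::vector<int> v;
    v.reserve(50);
    for (int i = 0; i != 50; ++i)
        v.push_back(i);

    int* first = &v[0];
    v.resize(20); // 20 < capacity(), so no reallocation is permitted
    assert(first == &v[0]); // surviving elements keep their addresses
}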

avakar
A: 

Thank you, but I know very well how to "shrink to fit." Had I been interested in that, I would have asked about it explicitly.

The question was actually a performance question: I needed a guarantee that capacity() is monotonically increasing under the operations push_back, pop_back, clear, and resize (i.e., that no reallocation is done for clear or for resizing downward).

I knew that the Dinkumware, GCC, and SGI STL implementations do not "recapacitize" downward, but could not quite find a statement to that effect in the standard. As GMan stated: "I cannot locate it in the standard, but I'm pretty sure clear() is defined to be: ..."

I think the behaviour of clear is to be ascertained from Table 67 in the sequences section on page 469.

avakar's "iterators and references to vector's elements must not be invalidated by resize(n) if n < capacity()" [with the caveat that the iterator must point to an element not destroyed by the resize - while such an iterator is not made memory-inaccessible, it is certainly invalidated when its element is destroyed] and GMan's "4) Complexity: The destructor of T is called the number of times equal to the number of the elements erased...." have answered the question.

My concern was that the requirement was O(n) versus exactly n.

Thank you all for the extra sets of eyes on the standard.

Careful. You said that `capacity()` should only increase by one for each push_back operation? This is not possible without reallocation. `size()` will increase by one, but `capacity()` will remain the same until `size() == capacity()`, whereupon `capacity()` will increase with a reallocation.
GMan
It's in fact not possible within the given performance bounds (amortized O(1) push_back). Growing capacity() by 1 on each push_back means copying all existing elements on every insertion. That's O(N) per insertion, every time, and thus push_back would be at least O(N) amortized.
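A back-of-the-envelope check (illustrative arithmetic, not a quote from the standard): growing by 1 each time copies 1 + 2 + ... + N = N(N+1)/2 elements over N push_backs, i.e. O(N) per push_back amortized. Doubling instead copies at most 1 + 2 + 4 + ... + N < 2N elements in total, i.e. O(1) amortized.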
MSalters
Eh? Where did I ever say "the capacity() should only increase by one for each push_back operation"?
f monotonically increasing, defined: x <= y implies f(x) <= f(y)
f strictly monotonically increasing, defined: x < y implies f(x) < f(y)
...if that was your confusion.