O notation describes the limiting behaviour of a function, i.e. an upper bound on an algorithm's cost as the input grows. Usually that is sufficient to compare different algorithms for the same task, for example sorting algorithms.
I'd say that for almost all algorithms (except O(1) algorithms ;) ) there is some input that makes the algorithm terminate in less time (or with less memory consumption) than the O notation for that algorithm indicates.
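A simple illustration of that (my own example, not the counter below): a plain linear search is O(n), but for lucky inputs it is done after a single comparison.

private int indexOf(int[] values, int target) {
    for (int i = 0; i < values.length; i++) {
        if (values[i] == target) {
            return i;   // best case: target sits at index 0, one step instead of n
        }
    }
    return -1;          // worst case: target is missing, all n elements are checked
}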
Assume we have a counting algorithm like that:
private int counter(int n) {
    int counter = 0;                   // initialize here so the compiler is happy
    for (int i = 0; i < 2; i++) {      // count twice on purpose
        counter = 0;
        for (int j = 0; j < n; j++) {  // inner loop variable renamed so it compiles
            counter++;
        }
    }
    return counter;
}
The growth is linear, so the O notation for this counter is O(n) (I'm only looking at steps, not memory). You could argue: hey, we're counting twice, so write O(2n) instead. True. You could even write O(2n + c) to indicate the extra constant number of steps (time) needed to create and initialize the local variable.
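To make those 2n + c steps concrete, here is a rough sketch (my addition; the steps variable exists only for illustration) that counts the basic steps instead of performing the real work:

private long countSteps(int n) {
    long steps = 1;                     // declare/initialize the local variable
    for (int i = 0; i < 2; i++) {
        steps++;                        // counter = 0
        for (int j = 0; j < n; j++) {
            steps++;                    // counter++
        }
    }
    return steps;                       // roughly 2n + c for this implementation
}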
Here is an improved implementation that is still linear (O(n)) but terminates significantly faster:
private int counter(int n) {
    int counter = 0;
    for (int i = 0; i < n; i++) {
        counter++;
    }
    return counter;
}
Both can be described as O(n), indicating linear growth. That may be sufficient, for example to compare these implementations with an O(n^2) or O(1) implementation of that counter. But to compare the two 'linear' versions (call the first A and the second B), we need to be more precise and identify the first one as O(2n) and the second as O(n). Now the comparison of the two O notation values gives the expected result: implementation B is 'better'.
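If you want to see that constant factor in practice, a rough (and admittedly naive, no proper JIT warm-up or benchmarking framework) timing sketch could look like this, assuming the two implementations above are renamed counterA and counterB and made static:

public static void main(String[] args) {
    int n = 50_000_000;
    for (int round = 0; round < 5; round++) {      // repeat so the JIT settles a bit
        long t0 = System.nanoTime();
        int a = counterA(n);                       // the 'count twice' version, ~2n steps
        long t1 = System.nanoTime();
        int b = counterB(n);                       // the 'count once' version, ~n steps
        long t2 = System.nanoTime();
        System.out.printf("a=%d (%d ms), b=%d (%d ms)%n",
                a, (t1 - t0) / 1_000_000, b, (t2 - t1) / 1_000_000);
    }
}

Both timings grow linearly with n, which is all the O notation promises; the roughly factor-of-two gap is exactly what O(2n) vs. O(n) hides.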