+5  A: 

No, n is not the number of seconds, it's the number of items to process.

O(n) means that the time to process the items grows linearly with the number of items.

O(n²) means that the time to process the items is proportional to the square of the number of items. If you double the number of items, the processing time will be four times longer.

See: Big O notation

The table assumes that there is a fixed amount of work per item. The big O notation only specifies how an algorithm reacts to a change in the number of items; it doesn't tell you anything about how much work there is per item.

Edit:
The values along the x axis of the table are just approximations based on the assumption that the work per item is the same. For example, the value 3000 for O(n²) is rounded from the square root of 10 million, which is ~3162.28. The cube root of 10 million is not 200, it's ~215.44.
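This derivation can be sketched in a few lines of Python. The budget of 10 million operations is taken from the table's O(n) column for one second; the rounding is mine, the table's values (3000 and 200) are the author's.

```python
import math

# Given a budget of operations (10 million, what an O(n) algorithm
# handles in one second in the table), find the largest n that each
# complexity class can process within that budget.
BUDGET = 10_000_000

n_linear = BUDGET                       # O(n):   n = budget
n_quadratic = math.isqrt(BUDGET)        # O(n^2): n = sqrt(budget)
n_cubic = round(BUDGET ** (1 / 3))      # O(n^3): n = cube root of budget

print(n_linear)     # 10000000
print(n_quadratic)  # 3162 (the table rounds this to 3000)
print(n_cubic)      # 215  (the table rounds this to 200)
```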

In a real situation, two algorithms rarely do the same amount of work per item. An algorithm with O(log n) typically does more work per item than an O(n) algorithm for the same purpose, but it's still preferable in most situations because it scales a lot better.
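A toy calculation illustrates the trade-off. The factor of 10 for the per-step cost is an arbitrary assumption for the sake of the example, not a measured value:

```python
import math

# Hypothetical numbers: suppose each step of an O(log n) algorithm
# costs ten times as much as a step of an O(n) algorithm.
def linear_cost(n, per_step=1):
    return n * per_step

def log_cost(n, per_step=10):
    return math.log2(n) * per_step

n = 1_000_000
print(linear_cost(n))      # 1000000 units of work
print(round(log_cost(n)))  # ~199: far cheaper despite the bigger constant
```

Even a large constant factor per step is quickly dwarfed by the difference between n and log n as n grows.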

Guffa
@Guffa: Thank you for pointing that out! I fixed it in my question. --- The main problem is still unsolved: how can you deduce the values for, for instance, O(n^2)?
Masi
It's just math, based on the assumption that the amount of work per item is the same. See my edit above.
Guffa
@Guffa: Thank you for your update! --- The key point in your answer is "the assumption that the work per item is the same".
Masi
+1  A: 

I think this table simply gives a very approximate illustration of how big n can be for different kinds of complexities when you have a fixed amount of time (1 second, 1 minute, 1 hour, 1 day or 1 year) at your disposal.

For example O(n^3):

 1 second: 200^3 = 8 000 000 (roughly 10 million, given in O(n) column)
 1 minute: 850^3 = 614 125 000 (roughly 600 million, given in O(n) column)
 1 hour: 3000^3 = 27 000 000 000 (somewhat roughly 35 billion, given in O(n) column)

As you can see, the numbers are very rough approximations. It seems that the author wanted to use nice round numbers to illustrate his point.

Juha Syrjälä
@Juha: Thank you very much for your answer! -- The math makes sense now for me :)
Masi
Your answer raised a new question: if n = 200 is deduced from O(n^3), why is the value in the O(n) column then 10M, not 200?
Masi
It's not n that is 200; the values in the table are not n. The n value is, for example, how many items you have in a collection. The value in the table is, for example, how many items you can read from the collection per time unit, depending on how efficient the algorithm is at reading items. It's pretty hard to make anything out of the table though, as the O notation doesn't measure efficiency itself but rather changes in efficiency...
Guffa
@Guffa: So the key point is that O notation measures changes in the rate, not the actual rate, just as in chemistry we measure enthalpy by its changes.
Masi
@Guffa: It seems that the benefit of O notation shows when the algorithm changes, so that we can estimate which algorithm is best. --- Can we measure the code's effectiveness with only pencil and paper, by comparing the rate of an algorithm to a base value?
Masi
+1  A: 


If you can do 10,000,000 ops per second, then when you set n = 500,000 and calculate n * log(n) = 500,000 * log2(500,000) ≈ 500,000 * 19 ≈ 9,500,000 ops, which is roughly 10,000,000 for the purposes of the "seconds" classification.

Similarly, with n = 3,000 you get n^2 = 9,000,000. So on every line the number of operations is roughly the same.
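The arithmetic can be checked numerically (log2(500,000) is about 18.93, so the n log n product lands a bit below 9.5 million):

```python
import math

# With a budget of roughly 10 million operations per second, each
# complexity class supports a different maximum n.
ops_nlogn = 500_000 * math.log2(500_000)
ops_n2 = 3_000 ** 2

print(round(ops_nlogn))  # a bit under 9.5 million
print(ops_n2)            # 9000000
```

Both land in the same ballpark as the 10-million budget, which is why the two values share a row in the table.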

antti.huima
@Antti: So you take the base value of 10M from O(n). You then try to find an integer solution for n such that you get 10M. --- You made an assumption that the base of the log is 2. --- Is it a standard in CS to use base 2, not for instance 10?
Masi
Well, the fancy thing about big O notation is that any logarithm is the same in big O. This is due to the definition of big O, which is asymptotic domination: when you say that some algorithm is O(n) or O(n log(n)), what you're really saying is that for some positive constant c, your algorithm's complexity is <= c*n. From this we can deduce that log_2(n) is O(log_10(n)) and vice versa, because log_2(n) = log_10(n)*log_10(2). In general though, log is presumed to be of base 2, because this is how most algorithms scale, although if you're analyzing something like trisort it's log_3.
Jacob Schlather
@Masi: For asymptotic complexity it doesn't matter if you take log10 or log2, but CS people just reach for log2 because (1) binary search is actually log2 and (2) they think in terms of powers of two anyway.
antti.huima
@Liberalkid: The following is not true: log_2(n) = log_10(n)*log_10(2). --- You can then easily deduce a contradiction by getting 1/log_10(2) = log_10(2) => ... => 10 = 2, which is a contradiction.
Masi
log10 and log2 are values which present the upper boundaries. It seems that their values are pretty close to each other, so we could use either, but we prefer base 2. --- The relations between functions such as log, exp or polynomial are important. This is one reason why we can use log_10 too, since log_2 scales similarly to log_10.
Masi
log_2(n) = log_10(n) * log_2(10)
sth
@sth: True - thank you for the correction!
Masi
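The corrected identity from the thread, log_2(n) = log_10(n) * log_2(10), can be checked numerically; the constant factor log_2(10) is exactly what big O discards:

```python
import math

# Base-conversion identity: log_2(n) = log_10(n) * log_2(10).
# The two bases differ only by the constant factor log_2(10) ~ 3.32,
# which is why O(log2 n) and O(log10 n) are the same class.
n = 500_000
lhs = math.log2(n)
rhs = math.log10(n) * math.log2(10)
print(math.isclose(lhs, rhs))  # True
```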