The phrases minimum time and maximum time are a bit misleading here. When we talk about Big-O notation, it's not the actual time we are interested in, but how the time increases as the input size gets bigger. And it's usually the average-case or worst-case time we are talking about, not the best case, which is usually not meaningful for analyzing our problems.
Using the array search in the accepted answer to the other question as an example: the time it takes to find a particular number in a list of size n is n/2 * some_constant on average. If you treat it as a function f(n) = n/2 * some_constant, it increases no faster than g(n) = n, in the sense given by Charlie. It also increases no slower than g(n). Hence, g(n) is both an upper bound and a lower bound of f(n) in Big-O terms, so the complexity of linear search is exactly n, meaning it is Theta(n).
In this regard, the explanation in the accepted answer to the other question is not entirely correct: it claims that O(n) is an upper bound because the algorithm can run in constant time for some inputs (this is the best case I mentioned above, which is not really what we want to know about the running time).