views: 751

answers: 8

Possible Duplicate:
What is the difference between Θ(n) and O(n)?

It seems to me that when people talk about algorithm complexity informally, they talk about big-oh. But in formal situations, I often see big-theta with the occasional big-oh thrown in. I know mathematically what the difference is between the two, but in English, in what situation would using big-oh when you mean big-theta be incorrect, or vice versa (an example algorithm would be appreciated)?

Bonus: why do people seemingly always use big-oh when talking informally?

+2  A: 

Because there are algorithms whose best case is quicker than their worst case, so the bound on the running time is technically a big O, not a big Theta.

Big O is an upper bound; big Theta is an equivalence relation.

Alexandre C.
+11  A: 

Big-O is an upper bound.

Big-Theta is a tight bound, i.e. upper and lower bound.

When people only worry about what's the worst that can happen, big-O is sufficient; i.e. it says that "it can't get much worse than this". The tighter the bound the better, of course, but a tight bound isn't always easy to compute.
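
For reference, the standard definitions behind these statements (a textbook recap, not part of the original answer) are:

    f(n) ∈ O(g(n))  iff  there exist c > 0 and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀   (upper bound)
    f(n) ∈ Ω(g(n))  iff  there exist c > 0 and n₀ such that f(n) ≥ c·g(n) for all n ≥ n₀   (lower bound)
    f(n) ∈ Θ(g(n))  iff  f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n))                                 (tight bound)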


The following quote from Wikipedia also sheds some light:

Informally, especially in computer science, the Big O notation often is permitted to be somewhat abused to describe an asymptotic tight bound where using Big Theta notation might be more factually appropriate in a given context.

For example, when considering a function T(n) = 73n³ + 22n² + 58, all of the following are generally acceptable, but tightness of bound (i.e., bullets 2 and 3 below) is usually strongly preferred over laxness of bound (i.e., bullet 1 below).

  1. T(n) = O(n¹⁰⁰), which is identical to T(n) ∈ O(n¹⁰⁰)
  2. T(n) = O(n³), which is identical to T(n) ∈ O(n³)
  3. T(n) = Θ(n³), which is identical to T(n) ∈ Θ(n³)

The equivalent English statements are respectively:

  1. T(n) grows asymptotically no faster than n¹⁰⁰
  2. T(n) grows asymptotically no faster than n³
  3. T(n) grows asymptotically as fast as n³.

So while all three statements are true, progressively more information is contained in each. In some fields, however, the Big O notation (bullet 2 in the lists above) would be used more commonly than the Big Theta notation (bullet 3 in the lists above), because functions that grow more slowly are more desirable.
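
To see bullet 3 in action, here is a small numerical sketch (my illustration, not part of the Wikipedia excerpt): the ratio T(n)/n³ settles toward the leading constant 73, which is exactly what T(n) ∈ Θ(n³) promises.

    # Sketch: T(n) = 73n^3 + 22n^2 + 58 is Theta(n^3): the ratio T(n)/n^3
    # approaches the leading constant 73 as n grows.
    def T(n):
        return 73 * n**3 + 22 * n**2 + 58

    for n in (10, 100, 1000, 10000):
        print(n, T(n) / n**3)
    # 10     75.258
    # 100    73.220058
    # 1000   73.022000058
    # 10000  73.0022...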

polygenelubricants
"a tight bound is sometimes harder to compute", and often useless too, which answers the "bonus" question ;)
Alexandre C.
Nitpick: We can't talk about 'matrix multiplication' being O(N^3). Perhaps you should throw in the words 'best known algorithm' in there.
Moron
@Moron - why can't we? saying matrix multiplication is `O(n^3)`, `O(n^4)` and any `O(n^k)` for `k > 3` wouldn't be wrong.
IVlad
@IVlad: What I meant is: matrix multiplication is a concept, not an algorithm. Since it was kind of clear from the context, I just said 'nitpick' :-)
Moron
@Moron - oh, my bad for not getting it :).
IVlad
A: 

I have seen Big Theta, and I'm pretty sure I was taught the difference in school. I had to look it up though. This is what Wikipedia says:

Big O is the most commonly used asymptotic notation for comparing functions, although in many cases Big O may be replaced with Big Theta Θ for asymptotically tighter bounds.

Source: Big O Notation#Related asymptotic notation

I don't know why people use Big-O when talking formally. Maybe it's because most people are more familiar with Big-O than Big-Theta? I had forgotten that Big-Theta even existed until you reminded me. Although now that my memory is refreshed, I may end up using it in conversation. :)

Vivin Paliath
So, people talk about Big O because most people talk about Big O unless they talk about Big Theta? Not exactly helpful.
Chris Lively
@Chris true. Wikipedia seems contradictory on the topic; the two seem to be semantically different, if subtly.
Vivin Paliath
A: 

Bonus: why do people seemingly always use big-oh when talking informally?

Because in big-oh, this loop:

for i in range(1, n + 1):
    pass  # something in O(1) that doesn't change n or i and isn't a jump

is O(n), O(n^2), O(n^3), O(n^1423424). Big-oh is just an upper bound, which makes it easier to calculate because you don't have to find a tight bound.

The above loop is only big-theta(n), however.

What's the complexity of the sieve of Eratosthenes? If you said O(n log n) you wouldn't be wrong, but it wouldn't be the best answer either. If you said big-theta(n log n), you would be wrong: the tight bound is big-theta(n log log n).
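
For concreteness, here is a minimal sieve in Python (a sketch of mine, not from the original answer). The inner marking loop runs about n/p times for each prime p ≤ √n, and the sum of n/p over primes is roughly n log log n, which is where the tight bound comes from.

    # Sketch of the sieve of Eratosthenes; total marking work is the
    # sum over primes p <= sqrt(n) of ~n/p, which is Theta(n log log n).
    def sieve(n):
        is_prime = [True] * (n + 1)
        is_prime[0] = is_prime[1] = False
        for p in range(2, int(n**0.5) + 1):
            if is_prime[p]:
                for multiple in range(p * p, n + 1, p):  # ~n/p iterations
                    is_prime[multiple] = False
        return [i for i, flag in enumerate(is_prime) if flag]

    print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]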

IVlad
"The above loop is only big-theta(n), however": nope. It would be if there were a theta inside the loop.
Alexandre C.
@Alexandre C. - why? `O(1)` means a constant number of operations. Assuming the `something` isn't a jump, why isn't it `big-theta(n)`?
IVlad
@IVlad: because 0 is O(1).
Alexandre C.
@Alexandre C. - so? even if `something` is `nothing`, `i` still gets incremented `n` times.
IVlad
@IVlad: okay, I got sucked by my mathematical training ;)
Alexandre C.
Isn't the loop described just O(n) and not O(n^2) etc?
Amir Afghani
@Alexandre C. - there were some problems with the loop though, now it's hopefully unnitpickable :). @Amir Afghani - no, they are all valid. See polygenelubricants' answer for more info.
IVlad
+3  A: 

I'm a mathematician and I have seen and needed big-O, big-Theta, and big-Omega notation time and again, and not just for complexity of algorithms. As people have said, big-Theta is a two-sided bound. Strictly speaking, you should use it when you want to explain that that is how well an algorithm can do, and that either that algorithm can't do better or that no algorithm can do better. For instance, if you say "Sorting requires Θ(n log n) comparisons for worst-case input", then you're explaining that there is a sorting algorithm that uses O(n log n) comparisons for any input, and that for every sorting algorithm, there is an input that forces it to make Ω(n log n) comparisons.

Now, one narrow reason that people use O instead of Ω is to drop disclaimers about worst or average cases. If you say "sorting requires O(n log n) comparisons", then the statement still holds true for favorable input. Another narrow reason is that even if one algorithm to do X takes time Θ(f(n)), another algorithm might do better, so you can only say that the complexity of X itself is O(f(n)).

However, there is a broader reason that people informally use O. At a human level, it's a pain to always make two-sided statements when the converse side is "obvious" from context. Since I'm a mathematician, I would ideally always be careful to say "I will take an umbrella if and only if it rains" or "I can juggle 4 balls but not 5", instead of "I will take an umbrella if it rains" or "I can juggle 4 balls". But the other halves of such statements are often obviously intended or obviously not intended. It's just human nature to be sloppy about the obvious. It's confusing to split hairs.

Unfortunately, in a rigorous area such as math or theory of algorithms, it's also confusing not to split hairs. People will inevitably say O when they should have said Ω or Θ. Skipping details because they're "obvious" always leads to misunderstandings. There is no solution for that.

Greg Kuperberg
"O when they should have said Ω" - whoa! That's a _serious_ error! Are you sure lots of people actually do this? (disclosure: I have done it on stackoverflow by mistake).
polygenelubricants
I don't know about "lots", but sure, I've seen it and I've also done it (at least informally). I agree that it's often unacceptably confusing, and it should certainly be avoided in a research paper. But informally, hey, we're human and we rely on our ability to repair error.
Greg Kuperberg
+1: I agree. @polygenelubricants: I have seen a lot of people talk about O when they mean Omega or even theta. Perhaps you should take a look at one of the deleted answers here: http://stackoverflow.com/questions/3230104/does-big-o-measure-memory-requirments-or-just-speed/3230139
Moron
A: 

There are a lot of good answers here, but I noticed something was missing. Most answers seem to imply that the reason people use Big O over Big Theta is difficulty, and in some cases that may be true: a proof that leads to a Big Theta result is often far more involved than one that yields Big O. But I do not believe that difficulty has much bearing on choosing one analysis over the other.

When talking about complexity we can say many things. Big O time complexity just tells us what an algorithm is guaranteed to run within: an upper bound. Big Omega is far less often discussed and tells us the minimum time an algorithm is guaranteed to take: a lower bound. Big Theta tells us that both of these bounds are in fact the same for a given analysis, i.e. that the algorithm has a very strict run time, one that can only deviate by an amount asymptotically smaller than the complexity itself. Many algorithms simply do not have upper and lower bounds that happen to be asymptotically equivalent.

So, as to your question: using Big O in place of Big Theta is technically always valid, while using Big Theta in place of Big O is only valid when Big O and Big Omega happen to be equal. For instance, insertion sort has a time complexity of Big O of n^2, but its best-case scenario puts its Big Omega at n. In this case it would not be correct to say that its time complexity is Big Theta of n or of n^2, as they are two different bounds and should be treated as such.
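
As a quick illustration of those two different bounds (a sketch of mine, not part of the answer): counting insertion sort's comparisons shows roughly n - 1 of them on already-sorted input (the Big Omega of n side) and roughly n^2/2 on reverse-sorted input (the Big O of n^2 side), so no single Big Theta covers all inputs.

    # Sketch: insertion sort with a comparison counter, showing ~n comparisons
    # in the best case (sorted input) and ~n^2/2 in the worst (reversed input).
    def insertion_sort_comparisons(items):
        a = list(items)
        comparisons = 0
        for i in range(1, len(a)):
            j = i
            while j > 0:
                comparisons += 1              # compare a[j-1] with a[j]
                if a[j - 1] <= a[j]:
                    break                     # already in place
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
        return comparisons

    n = 1000
    print(insertion_sort_comparisons(range(n)))         # 999     (~n)
    print(insertion_sort_comparisons(range(n, 0, -1)))  # 499500  (~n^2/2)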

Daynew
Using O instead of Omega is not always valid. _"Any comparison algorithm needs O(n log n) compares for sorting"_. How is that technically valid?
Moron
+2  A: 

Because my keyboard has an O key.
It does not have a Θ or an Ω key.

I suspect most people are similarly lazy and use O when they mean Θ because it's easier to type.

Ax
Time for a [keyboard](http://world.std.com/~jdostale/kbd/SpaceCadet3.jpeg) [upgrade](http://www.google.com/images?q=space-cadet+keyboard)!
Roger Pate
A: 

One reason why big O gets used so much is kind of because it gets used so much. A lot of people see the notation and think they know what it means, then use it (wrongly) themselves. This happens a lot with programmers whose formal education only went so far - I was once guilty myself.

Another is because it's easier to type a big O on most non-Greek keyboards than a big theta.

But I think a lot of it is because of a kind of paranoia. I worked in defence-related programming for a bit (and knew very little about algorithm analysis at the time). In that scenario, the worst-case performance is always what people are interested in, because that worst case might just happen at the wrong time. It doesn't matter if the actual probability of that happening is, e.g., far less than the probability of all members of a ship's crew suffering a sudden fluke heart attack at the same moment: it could still happen.

Though of course a lot of algorithms have their worst case in very common circumstances - the classic example being inserting in-order into a binary tree to get what's effectively a singly-linked list. A "real" assessment of average performance needs to take into account the relative frequency of different kinds of input.
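
To make the classic example concrete, here is a small sketch (mine, not part of the answer): inserting keys in sorted order into a plain, unbalanced binary search tree produces a tree of depth n, i.e. effectively a singly-linked list, so lookups degrade from the O(log n) average to O(n).

    # Sketch: an unbalanced BST degenerates into a linked list on in-order input.
    class Node:
        def __init__(self, key):
            self.key, self.left, self.right = key, None, None

    def insert(root, key):
        if root is None:
            return Node(key)
        if key < root.key:
            root.left = insert(root.left, key)
        else:
            root.right = insert(root.right, key)
        return root

    def depth(root):
        return 0 if root is None else 1 + max(depth(root.left), depth(root.right))

    root = None
    for key in range(100):     # keys arrive already sorted
        root = insert(root, key)
    print(depth(root))         # 100: every node hangs off the right, a "list"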

Steve314