views:

4032

answers:

6

Sometimes I see Θ(n) with the strange Θ symbol with something in the middle of it, and sometimes just O(n). Is it just laziness of typing because nobody knows how to type this symbol, or does it mean something different?

+74  A: 

To explain it to your grandma:

If an algorithm is of Θ(g(n)), it means that the running time of the algorithm as 'n' (input size) gets larger is proportional to g(n).

If an algorithm is of O(g(n)), it means that the running time of the algorithm as 'n' gets larger is at most proportional to g(n).

Normally, even when people talk about O(g(n)) they actually mean Θ(g(n)) but technically, there is a difference.


More technically:

O(n) represents upper bound. Θ(n) means tight bound. Ω(n) represents lower bound.

f(x) = Θ(g(x)) if and only if f(x) = O(g(x)) and f(x) = Ω(g(x))

For example, an upper bound for the naive recursive approach to compute Fibonacci sequence is:

Fib(x) = O(2^x)

but the tight bound is

Fib(x) = Θ(F_x) where F_x is the Fibonacci sequence.

which is also a valid upper bound.
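To see that tighter bound concretely, here is a small Python sketch (the call counter and helper names are mine, not part of the answer) that counts how many recursive calls the naive Fibonacci makes. The call count C(n) satisfies C(n) = C(n-1) + C(n-2) + 1, which solves to 2*F(n+1) - 1, so it grows like F_n, far below 2^n:

```python
call_count = 0

def fib(n):
    """Naive recursive Fibonacci, instrumented to count calls."""
    global call_count
    call_count += 1
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

def calls_for(n):
    """Reset the counter and return the number of calls fib(n) makes."""
    global call_count
    call_count = 0
    fib(n)
    return call_count

# C(n) = C(n-1) + C(n-2) + 1 solves to 2*F(n+1) - 1: Theta(F_n),
# a valid but much tighter bound than O(2^n).
for n in (10, 15, 20):
    print(n, calls_for(n), 2 ** n)
```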

Basically, when we say an algorithm is of O(n), it's also O(n^2), O(n^1000000), O(2^n), ... .

In fact, if we assume the set of functions g(x) that satisfy f(x) = O(g(x)):

S = { g | f(x) = O(g(x)) } => f(x) = Θ(min S)

There are also small-oh (o) and small-omega (ω) notations, which mean loose upper and loose lower bounds, respectively.

To summarize:

f(x) = O(g(x)) (big-oh) means that the growth rate of f(x) is asymptotically less than or equal to the growth rate of g(x).

f(x) = Ω(g(x)) (big-omega) means that the growth rate of f(x) is asymptotically greater than or equal to the growth rate of g(x).

f(x) = o(g(x)) (small-oh) means that the growth rate of f(x) is asymptotically less than the growth rate of g(x).

f(x) = ω(g(x)) (small-omega) means that the growth rate of f(x) is asymptotically greater than the growth rate of g(x).

f(x) = Θ(g(x)) (theta) means that the growth rate of f(x) is asymptotically equal to the growth rate of g(x).

So we can reformulate f(x) = Θ(g(x)) as:

T = { g | f(x) = o(g(x)) }

f(x) = Θ(g(x)) if and only if "g" is a member of S - T.

I tried to avoid mathematical definitions of these notations in this post (did I say I hate theoretical CS? :D). For a more detailed discussion, you can either use Google or read the classic Introduction to Algorithms (CLRS).
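For intuition, the five relations above can be probed numerically by looking at the ratio f(n)/g(n) at a large n. This is my own heuristic sketch, not a proof (the thresholds and the sample point are arbitrary):

```python
# Heuristic: classify the asymptotic relation of f to g by sampling
# the ratio f(n)/g(n) at a single large n. A sketch, not a proof.
def classify(f, g, n=10**8):
    r = f(n) / g(n)
    if r < 1e-6:
        return "f = o(g)"      # ratio tends toward 0
    if r > 1e6:
        return "f = omega(g)"  # ratio tends toward infinity
    return "f = Theta(g)"      # ratio bounded away from 0 and infinity

print(classify(lambda n: 3 * n + 5, lambda n: n))    # f = Theta(g)
print(classify(lambda n: n, lambda n: n ** 2))       # f = o(g)
print(classify(lambda n: n ** 2, lambda n: n))       # f = omega(g)
```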

Mehrdad Afshari
A little more information about the difference between the two would be useful
Simucal
I'm impressed. Your answer is simple, clear, concise, thorough, …and *completely* incomprehensible (to me). :-)
Ben Blank
Ben, you mean I should delete it or shouldn't ever try to teach anybody? lol
Mehrdad Afshari
No, don't. It's very good.
duffymo
Well researched, if a bit repetitive.
You think very highly of my grandmother; I'm sure she would have approved of you.
unwind
Very good post, except I don't know of a way to make this true: S = { g | f(x) = O(g(x)) } => f(x) = Θ(min S). What does the "min" of a set of functions mean? Point-wise minimum makes the statement false, and I don't think any alternative works.
A. Rex
thanks for that! I would put the Grandma's explanation on top of the explanation :-)
martinus
A. Rex: f <= g if for all x => f(x) <= g(x). Why does this make things incorrect? Am I missing something? My intuition says it's true but I'm not a math guru. Please explain if something looks wrong.
Mehrdad Afshari
There won't be a min. If you pick any g as the min of S={g | f = O(g)}, then g/2 is also in S but is smaller. On the other hand, if you define f < g as f = o(g), maybe it'll kinda work ... hmm ...
A. Rex
A. Rex: Yeah, mathematically you are right. But it's O dude. If we ignore the constant factor, things will get right. From that statement I mostly wanted to describe that theta is kinda the lowest upper bound for f(x).
Mehrdad Afshari
To make things correct, I can redefine the comparison function for the minimum as "f is less than g, if for all positive constants c, there exists k that for n > k we have `cf(n) < g(n)`".
Mehrdad Afshari
You just defined little-o. =) Anyway, I can now see how that statement can be, literally, mathematically correct (if "min" means "minimal" rather than "minimum", because you still run into the problem of incomparable functions). Thanks for the discussion.
A. Rex
@Mehrdad Lucky you ;)
Robert Gould
+11  A: 

one is Big "O"

one is Big Theta

http://en.wikipedia.org/wiki/Big_O_notation

Big O means your algorithm will execute in no more steps than the given expression (e.g. n^2).

Big Omega means your algorithm will execute in no fewer steps than the given expression (e.g. n^2).

When both conditions are true for the same expression, you can use the big theta notation.
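As an illustration (my own sketch, not part of the answer): an all-pairs loop performs exactly n^2 steps, so n^2 is simultaneously an upper and a lower bound, and the step count is Θ(n^2):

```python
def count_pairs(a):
    """Examine every ordered pair of elements, counting one step per pair."""
    steps = 0
    n = len(a)
    for i in range(n):
        for j in range(n):
            steps += 1  # one step per pair examined
    return steps

# Exactly n^2 steps: no more than n^2 (big O) and no fewer (big Omega),
# hence Theta(n^2).
assert count_pairs(list(range(50))) == 50 * 50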

l_39217_l
This actually makes sense to me.
mabwi
But it is wrong! The number of steps is bounded above by n^2 only as n gets very large. An algorithm that runs in n^2 + c steps takes more than n^2 steps, but is still O(n^2). Big-O notation only describes *asymptotic* behaviour.
HenryR
This is not an end-all-be-all definition; it's just a launching point. Since we are talking about asymptotic notation as n approaches infinity, the constant C becomes a non-factor.
l_39217_l
+1  A: 

That character is Theta, by the way. I think this wikipedia article will explain big-O, big-theta, and others quite well...

rmeador
+3  A: 

f(n) belongs to O(n) if there exists a positive k such that f(n) <= k*n

f(n) belongs to Θ(n) if there exist positive k1, k2 such that k1*n <= f(n) <= k2*n

http://en.wikipedia.org/wiki/Big_O_notation
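The constants k1, k2 can be exhibited for a concrete function, say f(n) = 3n + 5 (my own example): take k1 = 3 and k2 = 4, which work for every n >= 5 (note the threshold; the inequalities need only hold for sufficiently large n):

```python
def f(n):
    return 3 * n + 5

# k1*n <= f(n) <= k2*n holds for every n >= n0 = 5, so f(n) = Theta(n).
k1, k2, n0 = 3, 4, 5
assert all(k1 * n <= f(n) <= k2 * n for n in range(n0, 10_000))
```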

You missed a crucial point - these are true only for all n > n1, i.e. asymptotically.
HenryR
+11  A: 

There's a simple way (a trick, I guess) to remember which notation means what.

All of the Big-O notations can be considered to have a bar.

When looking at a Ω, the bar is at the bottom, so it is an (asymptotic) lower bound.

When looking at a Θ, the bar is obviously in the middle. So it is an (asymptotic) tight bound.

When handwriting O, you usually finish at the top, and draw a squiggle. Therefore O(n) is the upper bound of the function. To be fair, this one doesn't work with most fonts, but it is the original justification of the names.

Andrei Krotkov
A: 

Nice discussion guys, I like it very much. Think about this problem: I have an array A[1...n] (meaning it contains n elements, all integers) and an algorithm that tests each element in A: if the element is even it leaves it as is, and if it is odd it multiplies it by 2. Q1: which of big-O or big-Theta is more appropriate to measure the number of multiplications, and why? Q2: which of big-O or big-Theta is more appropriate to measure the number of element tests?

:) I hope you will give me a hand

Mfms
This should probably be asked as a separate question rather than posted as an answer. But I will give it a try as I'm revisiting Big-O for interviews: the upper bound, where all elements are odd, is O(2n) == O(n). The lower bound, where all elements are even, is Omega(n). Since the O and Omega bounds are equal, you can say the algorithm is Theta(n).
skypecakes
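For what it's worth, the algorithm from the question can be sketched in Python (my own sketch) to make the distinction visible: the number of element tests is always exactly n, so Θ(n) fits there, while the number of multiplications depends on the input, anywhere from 0 (all even) to n (all odd), so only the upper bound O(n) holds for every input:

```python
def double_odds(a):
    """Test every element; double the odd ones in place. Returns (tests, mults)."""
    tests = mults = 0
    for i, x in enumerate(a):
        tests += 1          # every element is tested: always exactly n tests
        if x % 2 == 1:
            a[i] = 2 * x    # only odd elements cost a multiplication
            mults += 1
    return tests, mults

print(double_odds([2, 4, 6, 8]))  # all even: 4 tests, 0 multiplications
print(double_odds([1, 3, 5, 7]))  # all odd: 4 tests, 4 multiplications
```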