Sometimes I see Θ(n) with the strange Θ symbol with something in the middle of it, and sometimes just O(n). Is it just laziness of typing because nobody knows how to type this symbol, or does it mean something different?
To explain it to your grandma:
If an algorithm is of Θ(g(n)), it means that the running time of the algorithm as n (the input size) gets larger is proportional to g(n).

If an algorithm is of O(g(n)), it means that the running time of the algorithm as n gets larger is at most proportional to g(n).
Normally, even when people talk about O(g(n)), they actually mean Θ(g(n)), but technically, there is a difference.
More technically:

O(n) represents an upper bound. Θ(n) means a tight bound. Ω(n) represents a lower bound.

f(x) = Θ(g(x)) if and only if f(x) = O(g(x)) and f(x) = Ω(g(x))
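As a small numerical sketch of this (my own illustration, not part of the original answer): f(n) = n^2 is O(n^3), since the upper bound holds, but it is not Ω(n^3), so it is not Θ(n^3); against g(n) = n^2 both bounds hold.

```python
# Illustrative sketch: f(n) = n^2 is O(n^3) but not Theta(n^3),
# while it is Theta(n^2) (both the O and Omega bounds hold).

def f(n):
    return n * n

ns = range(1, 5000)
assert all(f(n) <= n ** 3 for n in ns)            # f = O(n^3): upper bound holds
# For the Omega(n^3) bound, any fixed constant eventually fails;
# shown here for k = 0.001 (the inequality breaks once n > 1000):
assert not all(f(n) >= 0.001 * n ** 3 for n in ns)
assert all(n ** 2 <= f(n) <= n ** 2 for n in ns)  # f = Theta(n^2) with k1 = k2 = 1
```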
For example, an upper bound for the naive recursive approach to computing the Fibonacci sequence is:

Fib(n) = O(2^n)

but the tight bound is

Fib(n) = Θ(F_n)

where F_n is the nth Fibonacci number, which is also a valid upper bound.
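As a hedged sketch of this claim (my own illustration, not from the answer), one can count the calls made by the naive recursive Fibonacci and confirm the O(2^n) upper bound empirically:

```python
# Count the calls made by the naive recursive Fibonacci.
# The call count grows like F_n (the tight Theta(F_n) bound),
# while 2^n is a valid but looser upper bound.

def fib_counting(n, counter):
    counter[0] += 1            # record this call
    if n < 2:
        return n
    return fib_counting(n - 1, counter) + fib_counting(n - 2, counter)

for n in range(1, 20):
    counter = [0]
    fib_counting(n, counter)
    assert counter[0] < 2 ** n   # the O(2^n) bound holds
```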
Basically, when we say an algorithm is of O(n), it's also O(n^2), O(n^1000000), O(2^n), ...
In fact, if we let S be the set of functions g(x) that satisfy f(x) = O(g(x)):

S = { g | f(x) = O(g(x)) } => f(x) = Θ(min S)
There are also small-oh (o) and small-omega (ω) notations, which mean loose upper and loose lower bounds.
To summarize:

f(x) = O(g(x)) (big-oh) means that the growth rate of f(x) is asymptotically less than or equal to the growth rate of g(x).

f(x) = Ω(g(x)) (big-omega) means that the growth rate of f(x) is asymptotically greater than or equal to the growth rate of g(x).

f(x) = o(g(x)) (small-oh) means that the growth rate of f(x) is asymptotically less than the growth rate of g(x).

f(x) = ω(g(x)) (small-omega) means that the growth rate of f(x) is asymptotically greater than the growth rate of g(x).

f(x) = Θ(g(x)) (theta) means that the growth rate of f(x) is asymptotically equal to the growth rate of g(x).
So we can reformulate f(x) = Θ(g(x)) as:

T = { g | f(x) = o(g(x)) }

f(x) = Θ(g(x)) if and only if g is a member of S - T.
I tried to avoid the mathematical definitions of these notations in this post (did I say I hate theoretical CS? :D). For a more detailed discussion, you can either use Google or read the classic Introduction to Algorithms (CLRS).
one is Big "O"
one is Big Theta
http://en.wikipedia.org/wiki/Big_O_notation
Big O means your algorithm will execute in no more steps than the given expression (n^2).

Big Omega means your algorithm will execute in no fewer steps than the given expression (n^2).

When both conditions are true for the same expression, you can use the big-Theta notation.
That character is Theta, by the way. I think this Wikipedia article will explain big-O, big-Theta, and the others quite well...
f(n) belongs to O(n) if there exists a positive k such that f(n) <= k*n (for all sufficiently large n).

f(n) belongs to Θ(n) if there exist positive k1, k2 such that k1*n <= f(n) <= k2*n.
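A quick numerical check of these definitions (a sketch with a made-up example function, not from the answer): for f(n) = 2n + 10, the constant k = 3 witnesses the O(n) bound once n >= 10, and k1 = 2, k2 = 3 witness the Θ(n) bound.

```python
# Checking the definitions numerically with f(n) = 2n + 10
# (an arbitrary example function, chosen for illustration).

def f(n):
    return 2 * n + 10

# Big-O: f(n) <= k*n with k = 3, for all n >= 10.
assert all(f(n) <= 3 * n for n in range(10, 10_000))
# Theta: k1*n <= f(n) <= k2*n with k1 = 2, k2 = 3, for all n >= 10.
assert all(2 * n <= f(n) <= 3 * n for n in range(10, 10_000))
```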
There's a simple way (a trick, I guess) to remember which notation means what.
All of the Big-O notations can be considered to have a bar.
When looking at an Ω, the bar is at the bottom, so it is an (asymptotic) lower bound.
When looking at a Θ, the bar is obviously in the middle. So it is an (asymptotic) tight bound.
When handwriting O, you usually finish at the top, and draw a squiggle. Therefore O(n) is the upper bound of the function. To be fair, this one doesn't work with most fonts, but it is the original justification of the names.
Nice discussion, guys; I like it very much. Think about this problem: I have an array A[1...n] (that is, it contains n elements), where the elements of A are integers, and I have an algorithm that tests each element in A: if the element is even, it leaves it as is, and if it is odd, it multiplies it by 2. The questions: Q1 - which of big-O or big-Theta is more appropriate to measure the number of multiplications, and why? Q2 - which of big-O or big-Theta is more appropriate to measure the number of element tests? :) I hope you will give me a hand.
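A hypothetical implementation of the algorithm described above (my sketch of what the question seems to mean), counting both kinds of operations:

```python
# Sketch of the algorithm in the question: test each element of A,
# double it if it is odd. Element tests happen exactly n times on
# every input (so Theta(n) is appropriate); multiplications happen
# between 0 and n times depending on the input (so only the O(n)
# upper bound is tight in general).

def double_odds(a):
    tests = 0
    mults = 0
    for i in range(len(a)):
        tests += 1              # every element is tested
        if a[i] % 2 == 1:       # element test: is it odd?
            a[i] *= 2           # multiplication, only for odd elements
            mults += 1
    return tests, mults

tests, mults = double_odds([1, 2, 3, 4])
assert tests == 4               # always exactly n tests
assert mults == 2               # here: elements 1 and 3 were doubled
```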