+7  Q: 

Order of growth

for

f = n(log(n))^5
g = n^1.01

is

f = O(g)
f = Theta(g)
f = Omega(g)?

I tried dividing both by n and I got

f = log(n)^5
g = n^0.01

But I am still clueless as to which one grows faster. Can someone help me with this and explain the reasoning behind the answer? I really want to know how (without a calculator) one can determine which one grows faster.

+7  A: 

Probably easiest to compare their logarithmic profiles:

If (for some C1, C2, a, k > 0)

f < C1 n log(n)^a
g < C2 n^(1+k)

Then (for large enough n)

log(f) < log(n) + a log(log(n)) + log(C1)
log(g) < log(n) + k log(n) + log(C2)

Both are dominated by log(n) growth, so the question is which residual grows faster. The k log(n) residual grows faster than the a log(log(n)) residual, no matter how small k is or how large a is, so g eventually grows faster than f.

So in terms of big-O notation: g grows faster than f, so f can (asymptotically) be bounded from above by a function like g:

f(n) < C3 g(n)

So f = O(g). Similarly, g can be bounded from below by f, so g = Omega(f). But f cannot be bounded from below by a function like g, since g will eventually outgrow it. So f != Omega(g) and f != Theta(g).

But aaa makes a very good point: g does not begin to dominate over f until n becomes obscenely large.
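
To get a feel for "obscenely large", one can estimate the crossover without ever handling n directly by working in log space. Here is a rough numerical sketch (Python, with an arbitrarily chosen bisection bracket); it lands near the same value as aaa's Mathematica result:

# Rough sketch: after dividing out the common factor n, f and g cross where
# 5*ln(ln n) = 0.01*ln n. Work with t = ln(n), since the crossover n itself
# is far too large to represent as a float.
import math

def diff(t):
    return 5 * math.log(t) - 0.01 * t

lo, hi = 10.0, 1e6            # bracket assumed to contain only the large root
while hi - lo > 1e-6:
    mid = (lo + hi) / 2
    if diff(mid) > 0:         # f is still larger here, so the root lies above mid
        lo = mid
    else:
        hi = mid

t = (lo + hi) / 2
print("crossover at ln(n) =", round(t, 1))                 # ~ 4167.6
print("i.e. n is roughly 10^", round(t / math.log(10)))    # ~ 10^1810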

I don't have a lot of experience with algorithm scaling, so corrections are welcome.

Marshall Ward
+2  A: 

How about checking their intersections?

Solve[Log[n] == n^(0.01/5), n]

{{n -> 2.72374}, {n -> 8.70811861815*10^1809}}

I cheated with Mathematica

You can also reason with derivatives:

In[71]:= D[Log[n], n]

Out[71]= 1/n

In[72]:= D[n^(0.01/5), n]

Out[72]= 0.002/n^0.998

Consider what happens as n gets really large: the change in the first function tends to zero, while the second function never loses its derivative (its exponent stays greater than 0).

This tells you which one is asymptotically larger. However, in the practical region, the first function is going to grow faster.
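
As a quick illustration of that practical region (a small Python sketch with arbitrarily chosen sample points, not something you would have on a test), log(n)^5 still dwarfs n^0.01 at every n you could plausibly encounter:

# Sample the reduced functions log(n)^5 and n^0.01 at a few points.
# Even at n = 10^100 the asymptotically "slower" one is vastly bigger.
import math

for n in (10**3, 10**6, 10**9, 10**100):
    f = math.log(n) ** 5      # reduced f, with the common factor n divided out
    g = n ** 0.01             # reduced g
    print(f"n = 10^{round(math.log10(n))}:  log(n)^5 = {f:.3g}   n^0.01 = {g:.3g}")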

aaa
Yeah, that's great, but I wouldn't have that on the test =(
denniss
@den Oftentimes curves will have "different" growth in the extremities. A quick and dirty way is just to take a few sample points and determine the growth from their sampled values.
aaa
I agree, and that's what I would do too if I had the resources, but oftentimes we are expected to answer these questions without the help of any electronics.
denniss
@den see my update
aaa
+3  A: 

I would break this up into several easy, reusable lemmas:

Lemma 1: For a positive constant k, f = O(g) if and only if f = O(k g).

Proof: Suppose f = O(g). Then there exist constants c and N such that |f(n)| < c |g(n)| for n > N. Thus |f(n)| < (c/k) (k |g(n)| ) for n > N and constant (c/k), so f = O (k g). The converse is trivially similar.


Lemma 2: If h is a positive monotonically increasing function with h(c x) = O(h(x)) for every constant c > 0 (powers h(x) = x^k have this property), and f and g are positive for sufficiently large n, then f = O(g) if and only if h(f) = O( h(g) ).

Proof: Suppose f = O(g). Then there exist constants c and N such that |f(n)| < c |g(n)| for n > N. Since f and g are positive for sufficiently large n, say for n > M, f(n) < c g(n) for n > max(N, M). Since h is monotonically increasing, h(f(n)) < h(c g(n)) for n > max(N, M), and by the extra hypothesis on h, h(c g(n)) < C h(g(n)) for some constant C and sufficiently large n. Since h is positive, this gives |h(f(n))| < C |h(g(n))| for sufficiently large n, so h(f) = O(h(g)). The converse follows similarly; the key fact being that if h is monotonically increasing, then h(a) < h(b) => a < b.


Lemma 3: If h is an invertible monotonically increasing function, then f = O(g) if and only if f(h) = O(g(h)).

Proof: Suppose f = O(g). Then there exist constants c, N such that |f(n)| < c |g(n)| for n > N. Thus |f(h(n))| < c |g(h(n))| for h(n) > N. Since h(n) is invertible and monotonically increasing, h(n) > N whenever n > h^-1(N). Thus h^-1(N) is the new constant we need, and f(h(n)) = O(g(h(n))). The converse follows similarly, using h's inverse.


Lemma 4: If h(n) is nonzero for n > M, f = O(g) if and only if f(n)h(n) = O(g(n)h(n)).

Proof: If f = O(g), then for constants c, N, |f(n)| < c |g(n)| for n > N. Since |h(n)| is positive for n > M, |f(n)h(n)| < c |g(n)h(n)| for n > max(N, M) and so f(n)h(n) = O(g(n)h(n)). The converse follows similarly by using 1/h(n).


Lemma 5a: log n = O(n).

Proof: Let f = log n, g = n. Then f' = 1/n and g' = 1, so for n > 1, g increases more quickly than f. Moreover g(1) = 1 > 0 = f(1), so |f(n)| < |g(n)| for n > 1 and f = O(g).

Lemma 5b: n != O(log n).

Proof: Suppose otherwise for contradiction, and let f = n and g = log n. Then for some constants c, N, |n| < c |log n| for n > N.

Let d = max(2, 2c, sqrt(N+1) ). By the calculation in lemma 5a, since d >= 2 > 1, log d < d. Thus |f(2d^2)| = 2d^2 > 2d(log d) >= d log d + d log 2 = d (log 2d) >= 2c log 2d > c log (2d^2) = c g(2d^2) = c |g(2d^2)| for 2d^2 > N, a contradiction. Thus f != O(g).


So now we can assemble the answer to the question you originally asked.

Step 1:

log n = O(n^a)
n^a != O(log n)

For any positive constant a.

Proof: log n = O(n) by Lemma 5a. Thus log n = 1/a log n^a = O(1/a n^a) = O(n^a) by Lemmas 3 (for h(n) = n^a), 4, and 1. The second fact follows similarly by using Lemma 5b.

Step 2:

log^5 n = O(n^0.01)
n^0.01 != O(log^5 n)

Proof: log n = O(n^0.002) by step 1. Then by Lemma 2 (with h(n) = n^5, which satisfies h(c n) = c^5 h(n)), log^5 n = O( (n^0.002)^5 ) = O(n^0.01). The second fact follows similarly.

Final answer:

n log^5 n = O(n^1.01)
n^1.01 != O(n log^5 n)

In other words,

f = O(g)
f != Theta(g)
f != Omega(g)

Proof: Apply Lemma 4 (using h(n) = n) to step 2.
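
As a numeric sanity check of that final answer (just a sketch, with a few arbitrarily chosen powers of ten), computing ln(f/g) in log space shows the ratio eventually turning negative and falling without bound, consistent with f = O(g) but f != Omega(g):

# ln(f/g) = 5*ln(ln n) - 0.01*ln n once the common factor n is cancelled.
# Positive values mean f is still larger; eventually it goes negative and
# keeps falling, so f/g -> 0.
import math

for k in (3, 100, 2000, 10000):        # n = 10^k
    ln_n = k * math.log(10)
    print(f"n = 10^{k}:  ln(f/g) = {5 * math.log(ln_n) - 0.01 * ln_n:.1f}")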

With practice these rules become "obvious" and second nature, and unless your test requires that you prove your answer, you'll find yourself whipping through these kinds of big-O problems.

n = O(log n) means that n can be bounded by a log(n), but surely that can't be true since n grows faster than log(n). The use of O() seems to be the opposite of how I've used it. Am I misunderstanding the notation?
Marshall Ward
No, I just goofed when I wrote up that second part of the post. Should be fixed now.
A: 

This is not 100% mathematically kosher without proving something about logs, but here it goes:

f = log(n)^5
g = n^0.01

We take logs of both:

log(f) = log(log(n)^5) = 5*log(log(n)) = O(log(log(n)))
log(g) = log(n^0.01) = 0.01*log(n) = O(log(n))

From this we see that the first one grows asymptotically slower, because it has a double log in it and logs grow slowly. A non-formal argument for why this reasoning by taking logs is valid is this: log(n) tells you roughly how many digits there are in the number n. So if the number of digits of g is growing asymptotically faster than the number of digits of f, then surely the actual number g is growing faster than the number f!
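
To make the digit-count picture concrete, here is a tiny sketch (sample exponents chosen arbitrarily): the ratio log(g)/log(f) keeps growing, i.e. g's digit count eventually outruns f's by any margin you like:

# log(g)/log(f) = 0.01*ln(n) / (5*ln(ln n)) grows without bound, so the
# digit count of g = n^0.01 eventually outgrows that of f = log(n)^5.
import math

for k in (10, 100, 1000, 10**6):       # n = 10^k
    ln_n = k * math.log(10)
    print(f"n = 10^{k}:  log(g)/log(f) = {0.01 * ln_n / (5 * math.log(ln_n)):.3f}")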

Jules