How to determine whether given f(n) and g(n) are related by Theta, Omega, Big-O, little-omega, or little-o? - I think one way of doing it is by plotting graphs of both functions f(n) and g(n). But even by plotting graphs, how do we tell whether f(n) is in Theta, Omega, Big-O, little-omega, or little-o of g(n)? I am not clear on that. Can someone give more details on that?

Also, I am not sure if this is the correct way. Can anybody tell me if there is a simpler way of doing this? For example: f(n) = sqrt(n) and g(n) = n^(sin n).

A: 

Here's how you compute Big-O given f(n) and g(n) using limits.

[image: limit-based definition of Big-O]

Plotting both f(x) and g(x) for large values of x on the same set of axes should help you quickly identify the asymptotic behavior.
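For instance, a minimal plotting sketch for the questioner's pair (assuming Python with numpy and matplotlib; nothing here is prescribed by the answer itself) could look like this:

```python
import numpy as np
import matplotlib.pyplot as plt

# Sample large values of n and evaluate both functions.
n = np.linspace(1, 10_000, 100_000)
f = np.sqrt(n)        # f(n) = sqrt(n)
g = n ** np.sin(n)    # g(n) = n^(sin n), which oscillates between 1/n and n

# A log scale on the y-axis makes the growth rates easier to compare by eye.
plt.plot(n, f, label="f(n) = sqrt(n)")
plt.plot(n, g, label="g(n) = n^(sin n)", alpha=0.5)
plt.yscale("log")
plt.xlabel("n")
plt.legend()
plt.show()
```

The plot makes the oscillation of g visible, but as the discussion below shows, a picture alone does not settle which relation holds.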

burkestar
I think you are referring to L'Hôpital's rule here. And I think for Big-O, the condition they mention is lim x -> infinity f(x)/g(x) = 0.
That's the definition for *little* o. Big-O requires only that the limit is finite -- it could be larger than zero.
Tyler McHenry
But if I want to avoid those mathematical calculations and prefer plotting graphs, can I do it for all 5 asymptotic notations: Theta, Big-O, Omega, little-o and little-omega? If yes, how can I figure this out from the graph? I am assuming that if f(n) is above g(n), then it should be Omega and little-omega; similarly, if f(n) is below g(n), then it is Big-O and little-o. Is that correct? I am not sure about Theta, though.
A: 

First, you need to know the complexity of the functions you're calling. This may be documented, but in the likely case that it is not, you'll have to look at the source of those functions.

In the simple case you can count nested loops to get an idea of how long things take. If you have a loop from 1 to n inside a loop from 1 to t, and the only functions called inside take constant time (or you just do arithmetic operations inside, say), that's Theta(nt). If you happen to know that t <= n, it's also O(n^2).
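As a toy illustration of counting the work done by two nested loops (a rough Python sketch, not taken from any particular codebase):

```python
def count_pairs(n, t):
    """Constant-time work inside a loop over t inside a loop over n."""
    ops = 0
    for i in range(1, n + 1):        # outer loop: n iterations
        for j in range(1, t + 1):    # inner loop: t iterations
            ops += 1                 # constant-time body
    return ops

# The body runs exactly n*t times, so the running time is Theta(n*t);
# if we additionally know t <= n, it is also O(n^2).
assert count_pairs(100, 50) == 100 * 50
```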

In more complicated cases you may need to use the Master Theorem or the like. In some cases it can be very hard (research-level) to determine the complexity!

Charles
That's not what was asked at all.
svick
@svick: No, I mean that if your function f() calls f1() and f2(), you need to know or determine the complexities of f1() and f2(). In the second paragraph I describe in very brief detail the basics of determining complexity -- for a question this broad, that's just about all I can do.
Charles
I am actually working with plain mathematical functions here. I am aware of how to do this for loops, but my question here is about calculating asymptotic notation for functions. Say, for example, f(n) = 5(n+1)^4 - n^2 and g(n) = (n+5)^4 + (n+10)^2. For such complex functions, how can we say whether it is Theta, Big-O, Omega, little-o, or little-omega?
With word-sized input, they're all O(1). With multiprecision input (where the numbers may get arbitrarily large), this depends on what algorithms you use for addition and powering. If you use standard "schoolbook" methods, multiplying n and m (where n and m are positive integers) takes Theta(log m * log n) and adding takes Theta(log m + log n). (For squaring and doubling, Theta(log^2 n) and Theta(log n).) So your examples would be Theta(log^2 n) in that case. If you use more advanced algorithms, it can be faster for large inputs.
Charles
@Charles: He's not talking about the complexity of computing those functions, but about the complexity of the functions themselves.
svick
@svick: In that case, the complexity of f(n) is just Theta(f(n)). I suppose we're to assume that the OP wants to simplify this to a more common name?
Charles
We're to assume he wants to compare `f` and `g`: Is it `f(n) = O(g(n))`? Or the other way around? Or something else?
svick
A: 

You don't determine asymptotic growth by eyeballing it. There are formal definitions for each type of asymptotic growth relation that explain exactly and precisely what they mean. You determine whether two functions fit a given relation by determining mathematically if they satisfy the formal definition for the relation.

For example, one of several equivalent formal definitions for Big-O is as follows:

f = O(g) if and only if lim sup [ n -> inf ] ( f(n) / g(n) ) < inf

So, for example, if f(n) = n, and g(n) = n^2, you can observe that the limit as n -> infinity of n/n^2 = 0, and so f = O(g).

Or, if f(n) = n and g(n) = 2n, you can observe that as n-> inf, n / 2n -> 1/2 < inf, so again f = O(g).

However, if f(n) = n and g(n) = sqrt(n), you can observe that the limit as n -> infinity of n / sqrt(n) = infinity, so f != O(g).

Your example f and g is tricky because sin n oscillates between -1 and 1 as n increases, so the plain limit of sqrt(n) / n^(sin n) does not exist. The ratio is unbounded, though (its lim sup is infinite; it equals sqrt(n) whenever sin n = 0), so sqrt(n) != O(n^(sin n)).
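If you want to check such limits mechanically, a small sketch using sympy (assuming it is available; the examples are the ones just above) looks like this:

```python
import sympy as sp

n = sp.symbols('n', positive=True)

# f(n) = n, g(n) = n^2: the limit is 0 (finite), so f = O(g) (in fact f = o(g)).
print(sp.limit(n / n**2, n, sp.oo))        # 0

# f(n) = n, g(n) = 2n: the limit is 1/2 (finite), so f = O(g).
print(sp.limit(n / (2 * n), n, sp.oo))     # 1/2

# f(n) = n, g(n) = sqrt(n): the limit is infinite, so f != O(g).
print(sp.limit(n / sp.sqrt(n), n, sp.oo))  # oo

# For f(n) = sqrt(n), g(n) = n^(sin n) the ratio oscillates and has no plain
# limit, which is why the proof-based answer further down is needed.
```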

Each other asymptotic growth relation has a similar formal definition that you can test to see if two functions satisfy it with respect to each other.

Edit:

It seems like you're looking for rules of thumb. Here's a quick and easy way to determine asymptotic order for relatively simple functions f and g. Consider the highest power of n in each function:

  • If the highest power is the same, then f = O(g), g = O(f), f = Omega(g), g = Omega(f), f = Theta(g), g = Theta(f)
  • If the highest power in f is smaller than the highest power in g, then f = O(g), f = o(g), g = Omega(f), g = omega(f)
  • If the highest power in f is larger than the highest power in g, then f = Omega(g), f = omega(g), g = O(f), g = o(f)

Of course, if the functions are not polynomials in n, or are more complex and it's not easy to determine what the highest powers are (e.g. your example using sine), you will still have to fall back on the formal definition.
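For purely polynomial f and g, the highest-power comparison can even be automated; a rough sketch (assuming sympy, and using the questioner's first example pair) might be:

```python
import sympy as sp

n = sp.symbols('n')

f = 5 * (n + 1)**4 - n**2         # questioner's f(n)
g = (n + 5)**4 + (n + 10)**2      # questioner's g(n)

deg_f = sp.degree(sp.expand(f), n)  # 4
deg_g = sp.degree(sp.expand(g), n)  # 4

if deg_f == deg_g:
    print("f = Theta(g) (and hence also O and Omega of g)")
elif deg_f < deg_g:
    print("f = o(g), hence f = O(g)")
else:
    print("f = omega(g), hence f = Omega(g)")
```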

Some things that are always true:

  • Theta is by definition the conjunction of O and Omega. f = O(g) ^ f = Omega(g) <=> f = Theta(g) [where ^ represents logical "and"]
  • The "little" relations are stronger than the "big" relations. This means that f = o(g) => f = O(g) and f = omega(g) => f = Omega(g), but the reverse directions are not true.
Tyler McHenry
I already went through the formal definitions of all 5 asymptotic notations, and I completely understand the simpler functions as you described above, e.g. f(n) = n, g(n) = n^2. But what I don't understand is complex functions, like say: 1) f(n) = 5(n+1)^4 - n^2, g(n) = (n+5)^4 + (n+10)^2; 2) n(log n^2), (n^2)log n; 3) 1/sqrt(n), 1/log n; 4) (3^n+4)(n - 17*(n/17)) + 6, 3^(n+3).
The first example is simple enough to use the rules of thumb that I gave. The highest power in f(n) is 4, and the highest power in g(n) is also 4, so f is O(g), Omega(g) and Theta(g). In the second and third examples, it is not obvious what the highest power involved is, so you will have to take the ratio of the two functions and compute their limit. Stop fighting against the inevitable: if you want to compute asymptotic growth relations on more complicated functions, you *will* have to compute limits. There's no way around it. If you don't know how to compute limits, find a Calculus text.
Tyler McHenry
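As an illustration of "take the ratio of the two functions and compute their limit", a hedged sympy sketch for the second and third pairs (reading the second example as n*log(n^2); that reading is an assumption) could be:

```python
import sympy as sp

n = sp.symbols('n', positive=True)

# Example 2: f(n) = n*log(n^2) versus g(n) = n^2 * log(n).
# The ratio simplifies to 2/n and tends to 0, so f = o(g) and hence f = O(g).
print(sp.limit((n * sp.log(n**2)) / (n**2 * sp.log(n)), n, sp.oo))  # 0

# Example 3: f(n) = 1/sqrt(n) versus g(n) = 1/log(n).
# The ratio is log(n)/sqrt(n), which also tends to 0, so again f = o(g).
print(sp.limit((1 / sp.sqrt(n)) / (1 / sp.log(n)), n, sp.oo))       # 0
```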
What if I tweak 1) above to f(n) = 5(n+1)^4.1 - n^2, g(n) = (n+5)^4 + (n+10)^2? And for 4), (3^n+4)(n - 17*floor(n/17)) + 6 versus 3^(n+3), it is clearly an exponential function. Also f(n) is in power n+4 and g(n) is in power n+3, so I think we can say that for any value of n, f(n) will increase faster than g(n). Hence f(n) = omega(g(n)) and f(n) = Omega(g(n)). Do you agree?
A: 

Plotting a graph of the functions is, strictly speaking, not enough. (Although plotting a graph and then reasoning from it might be. And in practice it is often good enough, if you plot for large enough values.)

The correct way is to construct a proof:

The function f(n) is O(g(n)) iff there exist positive K and n0 such that f(n) < K × g(n) for every n > n0.

Let's assume this is true for f(n) = sqrt(n) and g(n) = n^sin n and some K and n0. Now consider a number n1 that

  1. is bigger than this n0
  2. is bigger than 1
  3. is bigger than K²
  4. for which the equation g(n1) = 1 holds

There are infinitely many numbers that satisfy those conditions: g(n1) = 1 when sin n1 = 0, that is n1 = a × π for any integer a. Limiting them to n1 > max(n0, 1, K²) still leaves infinitely many possibilities, so we pick one.

So we've got g(n1) = 1. What's the value of f(n1)? That's sqrt(n1), which is bigger than K (because n1 > K²).

Let's put it together: K < f(n1) < K × g(n1) = K, in other words K < K, which can't be true. That means the original assumption is wrong, and so f(n) != O(g(n)).
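A numeric sanity check of this argument (a rough Python sketch; it mirrors the choice n1 = a × π > max(n0, 1, K²) made above, with g(n1) only approximately 1 in floating point) could look like this:

```python
import math

def counterexample(K, n0):
    """For proposed constants K and n0, find n1 > n0 with sqrt(n1) > K * n1**sin(n1)."""
    # Pick n1 = a*pi just beyond max(n0, 1, K^2), mirroring the proof above.
    a = math.ceil(max(n0, 1.0, K * K) / math.pi) + 1
    n1 = a * math.pi
    f = math.sqrt(n1)
    g = n1 ** math.sin(n1)   # numerically very close to 1, since sin(a*pi) ~ 0
    assert f > K * g, (f, K * g)
    return n1, f, K * g

# Whatever constants we propose, the bound sqrt(n) <= K * n^(sin n) fails somewhere:
for K, n0 in [(1, 1), (10, 100), (1000, 10**6)]:
    print(counterexample(K, n0))
```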

svick
For Big-O the condition should be f(n) <= k*g(n). So your proof should check whether sqrt(n) <= k * n^(sin n) ... (1). Now when we take n = n0 = 1, g(n0) becomes 1. Substituting into (1): sqrt(1) <= k*1, and sqrt(1) = 1, so this condition holds if k = 1. So I may say f(n) = O(g(n)).
It doesn't matter whether it is `<` or `<=`. And it has to hold for **all** `n > n0`, not just one, so your proof is wrong.
svick
OK, I see what you are saying. However, I don't think I clearly understand the proof explanation you have given above for f(n) != O(g(n)). Can you please elaborate on why f(n) != O(g(n))?
I'm sorry, there was a mistake in the proof: I forgot to account for `K`. I have also tried to make it clearer. Is it better now?
svick