I'm having trouble distinguishing between Big-O and little-o notation. Can someone please explain the difference to me? I'm a little slow, so examples would probably help.

Thanks

+20  A: 

f = O(g) says, essentially

For at least one choice of a constant k > 0, you can find a constant a such that the inequality f(x) < k g(x) holds for all x > a.

f = o(g) says, essentially

For every choice of a constant k > 0, you can find a constant a such that the inequality f(x) < k g(x) holds for all x > a.

In Big-O, it is only necessary that you find a particular multiplier k for which the inequality holds beyond some minimum x.

In Little-o, there must be a minimum x after which the inequality holds no matter how small you make k, as long as k is positive.
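Here is a minimal numeric sketch of those two definitions in Python (the helper holds_beyond and the example functions are my own illustration; spot-checking sample points is not a proof, just a way to see the quantifiers in action):

  # Spot-check "f(x) < k * g(x) for all x > a" at sample points.
  def holds_beyond(f, g, k, a, samples=1000, span=10**6):
      xs = (a + i * span / samples for i in range(1, samples + 1))
      return all(f(x) < k * g(x) for x in xs)

  f = lambda x: 2 * x**2   # f(x) = 2x^2
  g = lambda x: x**2       # g(x) = x^2

  # Big-O: ONE choice of k has to work. k = 3 (with a = 1) does, so f = O(g).
  print(holds_beyond(f, g, k=3, a=1))    # True

  # little-o: EVERY k > 0 has to work. k = 0.1 already fails, so f != o(g).
  print(holds_beyond(f, g, k=0.1, a=1))  # False

  # By contrast, f2(x) = x IS o(x^2): for any k, the choice a = 1/k works.
  f2 = lambda x: x
  for k in (1.0, 0.01, 0.0001):
      print(holds_beyond(f2, g, k, a=1/k))  # True each time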

These both describe upper bounds, although somewhat counter-intuitively, Little-o is the stronger statement. There is a much larger gap between the growth rates of f and g if f = o(g) than if f = O(g).

One illustration of the disparity is this: f = O(f) is true, but f = o(f) is false. Therefore, f = O(g) can be read as "f's asymptotic growth is no faster than g's", whereas f = o(g) can be read as "f's asymptotic growth is strictly slower than g's". It's like <= versus <.

More specifically, if the value of g(x) is a constant multiple of the value of f(x), then f = O(g) is true. This is why you can drop constants when working with big-O notation.

However, for f = o(g) to be true, g must strictly outgrow f: the ratio f(x)/g(x) must shrink toward zero, so the relative separation between f(x) and g(x) must actually get larger, without bound, as x gets larger. A higher power of x in g is one way to achieve this, but not the only one (ln(x) = o(x), for instance).

To use purely math examples (rather than referring to algorithms):

The following are true for Big-O, but would not be true if you used little-o:

  • x^2 = O(x^2)
  • x^2 = O(x^2 + x)
  • x^2 = O(200 * x^2)

The following are true for little-o:

  • x^2 = o(x^3)
  • x^2 = o(x!)
  • ln(x) = o(x)

Note that if f = o(g), this implies f = O(g). For example, x^2 = o(x^3), so it is also true that x^2 = O(x^3). (Again, think of O as <= and o as <.)
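If you want to sanity-check examples like these, one common test (valid when the limit exists) is the ratio f(x)/g(x): it tends to 0 exactly when f = o(g), and merely stays bounded when f = O(g). A quick sympy sketch:

  from sympy import symbols, limit, log, oo

  x = symbols('x', positive=True)

  print(limit(x**2 / x**3, x, oo))        # 0 -> x^2 = o(x^3)
  print(limit(log(x) / x, x, oo))         # 0 -> ln(x) = o(x)
  print(limit(x**2 / (x**2 + x), x, oo))  # 1 -> bounded but nonzero:
                                          #   x^2 = O(x^2 + x) but not o(x^2 + x)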

Tyler McHenry
Yes, the difference is in whether the two functions may be asymptotically the same. Intuitively I like to think of big-O meaning "grows no faster than" (i.e. grows at the same rate or slower) and little-o meaning "grows strictly slower than".
Phil
@Phil Good wording. I worked that into my answer.
Tyler McHenry
+4  A: 

Big-O is to little-o as <= is to <. little-o is a strict upper bound while Big-O is an inclusive upper bound.

For example, a function which grows linearly is O(n^2), o(n^2) and O(n). But it is not o(n), O(lg n) or o(lg n). Similarly, 1 is <= 2, < 2 and <= 1. But 1 is not < 1, <= 0 or < 0.

f = o(g): f grows* slower than g

f = O(g): f does not grow* faster than g

*: asymptotically
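A quick numeric sketch of that first example (taking f(n) = 5n as the linearly growing function, an arbitrary choice of mine): watch what each ratio does as n grows.

  import math

  f = lambda n: 5 * n
  for n in (10, 10**3, 10**6, 10**9):
      print(n, f(n) / n**2, f(n) / n, f(n) / math.log2(n))

  # f(n)/n^2  -> 0            so f = o(n^2), hence also O(n^2)
  # f(n)/n    -> 5 (bounded)  so f = O(n), but not o(n)
  # f(n)/lg n -> infinity     so f is neither O(lg n) nor o(lg n)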

Strilanc
+2  A: 

I find that when I can't conceptually grasp something, thinking about why one would use X is helpful for understanding X. (Not to say you haven't tried that; I'm just setting the stage.)

[stuff you know]A common way to classify algorithms is by runtime, and by citing the big-O complexity of an algorithm, you can get a pretty good estimate of which one is "better": whichever has the "smallest" function in the O! Even in the real world, O(N) is "better" than O(N^2), barring silly things like super-massive constants and the like.[/stuff you know]

Let's say there's some algorithm that runs in O(N). Pretty good, huh? But let's say you (you brilliant person, you) come up with an algorithm that runs in O(N / log log log log N). YAY! It's faster! But you'd feel silly writing that over and over again when you're writing your thesis. So you write it once, and you can say "In this paper, I have proven that algorithm X, previously computable in time O(N), is in fact computable in o(N)."

Thus, everyone knows that your algorithm is faster; by how much is unclear, but they know it's faster. Theoretically. :)
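(For a sense of just how unclear "by how much" is: here's a quick sketch, my own illustration, of how slowly a four-times-iterated log grows. The speedup over O(N) is real asymptotically, but barely visible at any input size you could ever run.)

  import math

  def log4(n):
      # natural log applied four times (needs n large enough to stay positive)
      for _ in range(4):
          n = math.log(n)
      return n

  for e in (10, 100, 1000):
      print(f"log4(10^{e}) = {log4(10**e):.2f}")

  # log4(10^10)   is about 0.13
  # log4(10^100)  is about 0.53
  # log4(10^1000) is about 0.72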

Agor