Hi,
I'm wondering what the difference between O(n^2) and O(n.log(n)) is?
Thanks~
You'll need to be a bit more specific about what you are asking, but in this case O(n log(n)) is faster.
http://www.cs.georgetown.edu/~alspaugh/cls/2009b-072/algorithmAnalysis.html
n log(n) grows significantly slower than n^2.
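To get a feel for how much slower it grows, here is a minimal Python sketch (my own addition, not from the linked page) that prints both functions for a few sizes:

    import math

    # Compare how n^2 and n*log2(n) grow as n increases.
    for n in (10, 100, 1_000, 10_000, 100_000):
        print(f"n={n:>7}  n^2={n*n:>15,}  n*log2(n)={n*math.log2(n):>15,.0f}")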
Big O describes an upper limit on running time relative to the size of the data set (n).
An O(n*log(n)) algorithm is not always faster than an O(n^2) algorithm, but when considering the worst case it probably is. An O(n^2) algorithm takes roughly 4 times longer when you double the working set (worst case); for an O(n*log(n)) algorithm it is less. The bigger your data set is, the more an O(n*log(n)) algorithm usually pays off.
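As a rough illustration of the "double the working set" point (a sketch I added, not part of the original answer), this computes the slowdown factor when n doubles for each growth rate:

    import math

    # Slowdown factor when the input size doubles, for n^2 vs n*log2(n).
    for n in (100, 10_000, 1_000_000):
        quad = (2 * n) ** 2 / n ** 2                              # always exactly 4
        nlogn = (2 * n * math.log2(2 * n)) / (n * math.log2(n))   # only a bit more than 2
        print(f"n={n:>9}  n^2 slowdown: {quad:.2f}x  n*log(n) slowdown: {nlogn:.2f}x")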
EDIT: Thanks to 'harms', I'll correct a wrong statement in my first answer: I said that, when considering the worst case, O(n^2) would always be slower than O(n*log(n)). That's wrong, since both bounds only hold up to a constant factor!
Example: Say we have the worst case and our data set has size 100.

    O(n^2)      --> 100*100 = 10000
    O(n*log(n)) --> 100*2   = 200    (using log_10)

The problem is that both can be multiplied by a constant factor; say we multiply the latter one by c. The result will be:

    O(n^2)      --> 100*100 = 10000
    O(n*log(n)) --> 100*2*c = 200*c  (using log_10)

So for c > 50 we get O(n*log(n)) > O(n^2), for n = 100.
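A quick sanity check of that arithmetic (a sketch I added; c is just the constant factor from the example above):

    import math

    n = 100
    c = 51  # any constant factor greater than 50 flips the comparison at n=100
    quadratic = n * n                    # 100*100 = 10000
    nlogn = c * n * math.log10(n)        # c*100*2 = 200*c
    print(quadratic, nlogn, nlogn > quadratic)  # 10000 10200.0 True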
I have to update my statement: for every problem, when considering the worst case, an O(n*log(n)) algorithm will be quicker than an O(n^2) algorithm for arbitrarily big data sets. The reason is: the choice of c is arbitrary but constant. If you make the data set large enough, the growth in n will dominate the effect of any constant choice of c, and when comparing two algorithms the c's for both are constant!
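To illustrate that the constant eventually gets dominated, here is a small sketch of mine (assuming log base 10 and a deliberately large, hypothetical constant c = 1000 on the n*log(n) side) that searches for the point past which the quadratic cost is larger anyway:

    import math

    c = 1000  # hypothetical constant factor penalising the n*log(n) algorithm

    # Find the first n where n^2 overtakes c*n*log10(n) (and stays above it).
    n = 2
    while n * n <= c * n * math.log10(n):
        n += 1
    print(f"for c={c}, n^2 exceeds c*n*log10(n) from roughly n={n} onwards")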
Algorithms that run in O(n log(n)) time are faster than those that run in O(n^2) for sufficiently large inputs.
Big-O defines an upper bound on performance: as the size of the data set (n) grows, so does the length of time it takes to perform the task, but no faster than the bound. You might be interested in the iTunes U algorithms course from MIT.
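As a concrete (hypothetical) illustration of the two bounds, here is a sketch of the same task, detecting duplicates in a list, done once with an O(n^2) pairwise scan and once with an O(n log(n)) sort-then-scan:

    def has_duplicates_quadratic(items):
        # O(n^2): compare every pair of elements.
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    def has_duplicates_nlogn(items):
        # O(n log n): sort, then check adjacent elements.
        ordered = sorted(items)
        return any(a == b for a, b in zip(ordered, ordered[1:]))

    print(has_duplicates_quadratic([3, 1, 4, 1]))  # True
    print(has_duplicates_nlogn([3, 1, 4, 5]))      # False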
"Big Oh" notation gives an estimated upper bound on the growth in the running time of an algorithm. If an algorithm is supposed to be O(n^2), in a naive way, it says that for n=1, it takes a max. time 1 units, for n=2 it takes max. time 4 units and so on. Similarly for O(n log(n)), it says the grown will be such that it obeys the upper bound of O(n log(n)). (If I am more than naive here, please correct me in a comment).
I hope that helps.