362 views · 12 answers
Consider two algorithms A and B which solve the same problem, and have time complexities (in terms of the number of elementary operations they perform) given respectively by

a(n) = 9n+6

b(n) = 2(n^2)+1

(i) Which algorithm is the best asymptotically?

(ii) Which is the best for small input sizes n, and for what values of n is this the case? (You may assume where necessary that n>0.)

I think it's 9n+6. Guys, could you please help me with whether that's right or wrong? And what's the answer for part (ii)? What exactly do they want?

+1  A: 

You should probably start by familiarizing yourself with asymptotics, big O notation, and the like. Asymptotically, a will be better. Why? Because it can be proven that there is some N such that a(n) < b(n) for all n > N.

Proof left as an exercise for the reader.
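As a quick numeric sanity check of that claim (a sketch, not the requested proof):

```python
# Compare a(n) = 9n + 6 and b(n) = 2n^2 + 1 over a range of n.
def a(n):
    return 9 * n + 6

def b(n):
    return 2 * n ** 2 + 1

# For every n above the crossover point N = 5, a(n) is cheaper.
assert all(a(n) < b(n) for n in range(6, 10_000))
```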

Rob Lachlan
A: 

Heuristically, timing with the shell builtin `time` is one measure.

time ./a.out < /live/memory/var/cache/man/whatis | sort | awk '{print $1}' | uniq -c | sort -rn > file.txt

In some cases you may prove both are asymptotically optimal and must consider the implementation instead.

LarsOn
Did you even read the question?
Derrick Turk
+1  A: 

By looking at the constants it's easy to see that a will be larger than b at the beginning. By looking at the occurrences of n (n in a, n^2 in b), you can see that b is larger asymptotically. So we only need to figure out from which point on b is larger than a. To do that we just need to solve the equation a(n) = b(n) for n.
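Solving that equation amounts to the quadratic formula; a short sketch in Python:

```python
import math

# Solve a(n) = b(n), i.e. 9n + 6 = 2n^2 + 1, rearranged to 2n^2 - 9n - 5 = 0.
A, B, C = 2, -9, -5
disc = B * B - 4 * A * C          # discriminant = 81 + 40 = 121
roots = ((-B + math.sqrt(disc)) / (2 * A),
         (-B - math.sqrt(disc)) / (2 * A))
print(roots)                      # (5.0, -0.5); only n = 5 is meaningful for n > 0
```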

sepp2k
+1  A: 

9n + 6 IS the best.

Take this example, if your n is 10, then

9n + 6 = 96
2(n^2) + 1 = 201

now, take n is 100

9n + 6 = 906
2(n^2) + 1 = 20001

and it goes on and on...

if n = 4 then

9n + 6 = 42
2(n^2) + 1 = 33

Conclusion: the second one is better if n <= 4, but worse with 5 or more.

BTW, when calculating the complexity of an algorithm, we usually end up dropping factors and constants because they do not affect the speed difference by much, so this simplifies to a(n) = n and b(n) = n^2, which gives you a clear answer.
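A short tabulation makes the crossover concrete (b(n) is cheaper only up to n = 4, with a tie at n = 5):

```python
# Tabulate both cost functions for small n to see where the crossover happens.
for n in range(1, 9):
    a, b = 9 * n + 6, 2 * n ** 2 + 1
    winner = "b" if b < a else ("a" if a < b else "tie")
    print(f"n={n}: a(n)={a:3d}  b(n)={b:3d}  cheaper: {winner}")
```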

David Brunelle
actually, identical performance where n=5
spender
Right... thanks so much... so for (i), b(n) is better, and for part (ii), n <= 4?
Lopa
Thing is, you usually don't choose an algorithm based on a small value of n, but on infinitely large n instead. When n is really small there is not much advantage in picking the winner by these numbers, since a small n usually means quick execution for any algorithm.
David Brunelle
Since this is homework, I think part of the goal is to demonstrate that an asymptotically 'better' algorithm might actually be worse for certain concrete input sizes. For algorithms where the cut-off size is in the thousands, or for people working in (near) real-time projects this is far from an academic question.
Lars
+1  A: 

Simple graphing software will show that 9n+6 performs better quite quickly, as will simple algebra. At sizes of 5 or more, 9n+6 will be faster.

Robert Davis
actually, identical performance where n=5
spender
+7  A: 

I think plotting those functions would be very helpful to understand what's going on.
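For instance, a quick sketch in Python (matplotlib is an assumption here; any plotting tool works, and the code falls back to printing the points if it is not installed):

```python
# Sample both cost functions over a small range and plot them.
ns = range(0, 11)
a_vals = [9 * n + 6 for n in ns]
b_vals = [2 * n ** 2 + 1 for n in ns]

try:
    import matplotlib.pyplot as plt
    plt.plot(ns, a_vals, label="a(n) = 9n + 6")
    plt.plot(ns, b_vals, label="b(n) = 2n^2 + 1")
    plt.axvline(5, linestyle="--", color="gray")  # crossover at n = 5
    plt.xlabel("n")
    plt.ylabel("operations")
    plt.legend()
    plt.show()
except ImportError:
    # No matplotlib: just print the sampled points.
    for n, av, bv in zip(ns, a_vals, b_vals):
        print(n, av, bv)
```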

Michael Haren
Nice link!!!!!!
Rev316
This is awesome...because as you extend these plots out it suddenly becomes obvious what is really going on.
Beska
+1  A: 

Asymptotically, O(n) is better (cheaper) than O(n^2).

For small values of n, this is a simple algebra problem:

Find 'n' for which 9n+6=2(n^2)+1: cleaning it up, we get the quadratic equation 2(n^2)-9n-5=0. This yields n=5, which means that, for n=5, both processes would have the same cost:

  • 9n+6 => n:5 => 9*5+6 = 45+6 = 51
  • 2(n^2)+1 => n:5 => 2(5*5)+1 = 2*25+1 = 50+1 = 51

This means that B is better for n<5, they are equal for n=5, and A is better for n>5. If you expect n to be smaller than 5 in the vast majority of cases, then B may be a better choice, but it will only be relevant if the algorithm is used a lot. If you implement it as a function, the minor benefits of B pale against the call overhead, so they will be unnoticeable.

In summary, unless you are very sure of what you're doing, go with A. In general, you always want the algorithm with the better (cheaper) asymptotic cost. Only when both have the same asymptotic order, or when you have reliable knowledge about the input data you will be getting, is deeper insight worth the effort; and even then the best approach is benchmarking both versions with realistic data rather than theoretical analysis.
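The call-overhead point can be illustrated with a rough micro-timing (timings are machine-dependent and `b_cost` is a hypothetical stand-in; this is a sketch, not a rigorous benchmark):

```python
# Compare an inline arithmetic expression against the same work behind a
# function call, to show how call overhead can swamp a few saved operations.
import timeit

def b_cost(n):
    return 2 * n * n + 1   # the "cheaper for n < 5" formula, behind a call

inline = timeit.timeit("2 * 3 * 3 + 1", number=1_000_000)
called = timeit.timeit("b_cost(3)", globals=globals(), number=1_000_000)
print(f"inline expression: {inline:.3f}s, function call: {called:.3f}s")
```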

herenvardo
+12  A: 

Which algorithm is the best asymptotically?

To answer this question, you just need to take a look at the exponents of n in both functions: asymptotically, n^2 grows faster than n. So A ∈ O(n) is asymptotically a better choice than B ∈ O(n^2).

Which is the best for small input sizes n, and for what values of n is this the case? (You may assume where necessary that n>0.)

To answer this question, you need to find the point of intersection where both functions have the same value. And for n=5 both functions evaluate to 51 (see 9n+6=2(n^2)+1 on Wolfram Alpha). And since A(4)=42 and B(4)=33, B is the better choice for n < 5.

Gumbo
I don't think this answer is right, Gumbo.
Lopa
I am sorry Gumbo... my mistake... you are right.
Lopa
Strictly speaking you cannot say "Which is the best for small input sizes" because the question gives no information about what the 'elementary operations' are in each case, nor how long each of those operations takes. Performance optimization without measurement is for the most part a pointless exercise and best discouraged IMHO. The 'best' algorithm for small input sizes will be the simplest one that's easiest to maintain!
Hightechrider
@Hightechrider: The Big O notation is not about performance.
Gumbo
@Gumbo, precisely, neither is 'best', they just have 'different' time complexities and at low values of n that doesn't convey any useful information about either algorithm.
Hightechrider
A: 

Asymptotically, a(n) is better since it's O(n) as opposed to O(n^2). Here is a plot of the running time as a function of n. As you can see, a(n) is only slower for small values of n.

Jacob
A: 

What I would do is run each algorithm several times for the cases you described (and on computers with different processors), record the time when you start and the time you finish, and then compare the differences between their times for each case.
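A minimal version of that approach (a sketch; `linear_work` and `quadratic_work` are hypothetical stand-ins that perform a(n) and b(n) elementary steps):

```python
# Run each function several times, record wall-clock time before and after,
# and keep the best of several runs to reduce timing noise.
import time

def linear_work(n):
    total = 0
    for _ in range(9 * n + 6):     # a(n) = 9n + 6 elementary steps
        total += 1
    return total

def quadratic_work(n):
    total = 0
    for _ in range(2 * n * n + 1):  # b(n) = 2n^2 + 1 elementary steps
        total += 1
    return total

def measure(fn, n, runs=5):
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(n)
        times.append(time.perf_counter() - start)
    return min(times)   # min is less noisy than mean for micro-timings

for n in (4, 100):
    print(n, measure(linear_work, n), measure(quadratic_work, n))
```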

John
A: 

It's obvious. If we have:

  • 9n+6 => n:5 => 9*5+6 = 45+6 = 51
  • 2(n^2)+1 => n:5 => 2(5*5)+1 = 2*25+1 = 50+1 = 51

then 9n+6 is much better.

Masoud iLDEREMi
A: 

The answer to the first part is obviously (a) because O(n*n) > O(n).

The answer to the second part is that you cannot say "Which is the best for small input sizes" because you give no information about what the 'elementary operations' are in each case, nor how long each of those operations takes. The compiler or the CPU could apply some optimization which makes the so-called slower algorithm perform MUCH faster than the 'better' algorithm at small input sizes.

And by 'small' input sizes that could mean 10, 100 or 1 million+! The crossover between a clever O(n) algorithm and a dumb O(n*n) algorithm can be huge, because compilers and CPUs are great at running dumb code really fast.

Many people make the mistake of 'optimizing' their code on the basis of O() without considering the size of the input, nor testing how well each performs with the data sets they will be using.

Of course that doesn't mean you should always write dumb code when there is a much better algorithm or data structure that can do it in O(n) time, but it does mean you should think carefully before investing work to create a much cleverer (but much harder to maintain) algorithm that you think is optimal because it has a better O().
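The only way to settle such questions is to measure. A minimal comparison with timeit (the collection size is a hypothetical choice, and the winner will vary by machine and runtime, which is exactly the point):

```python
# Time membership tests against a tiny list (O(n) scan) versus a tiny set
# (O(1) hash lookup) to see whether asymptotics matter at this size.
import timeit

small = list(range(5))
small_set = set(small)

scan = timeit.timeit("4 in small", globals=globals(), number=1_000_000)
hashed = timeit.timeit("4 in small_set", globals=globals(), number=1_000_000)
print(f"list scan: {scan:.3f}s, set lookup: {hashed:.3f}s")
```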

Hightechrider