views: 556, answers: 11

Hi,

I'm not a professional programmer and I don't study programming. I'm an aerospace student; I developed a numerical method for my diploma thesis and also wrote a program to show that it works.

I implemented several methods and algorithms and tried to show why different situations each needed their own algorithm to solve the task.

I did this proof with a mathematical approach, but some of the algorithms were so specific that I know what they do and that they do it correctly, yet it was very hard to find a mathematical function or anything similar to show how many iterations or loops they have to do until they finish.

So, I would like to know how you do this comparison. Do you also present a mathematical function, or do you just do a speed test of both algorithms? And if you do it mathematically, how do you do that? Do you learn this during your university studies, or somewhere else?

Thank you in advance, Andreas

+16  A: 

The standard way of comparing different algorithms is by comparing their complexity using Big O notation. In practice you would of course also benchmark the algorithms.

As an example, the sorting algorithms bubble sort and heap sort have complexity O(n^2) and O(n log n) respectively.
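For instance, here is a quick Python sketch (my own illustration, not part of the original comparison) that contrasts a hand-written bubble sort with a heap-based sort; on just a few thousand elements the O(n^2) algorithm is already far slower:

    import heapq
    import random
    import time

    def bubble_sort(a):
        # O(n^2): repeatedly swap adjacent out-of-order elements
        a = list(a)
        n = len(a)
        for i in range(n):
            for j in range(n - 1 - i):
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
        return a

    def heap_sort(a):
        # O(n log n): build a heap, then pop the minimum n times
        h = list(a)
        heapq.heapify(h)
        return [heapq.heappop(h) for _ in range(len(h))]

    data = [random.random() for _ in range(2000)]
    for f in (bubble_sort, heap_sort):
        start = time.perf_counter()
        f(data)
        print(f.__name__, time.perf_counter() - start)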

As a final note, it's very hard to construct representative benchmarks; see this interesting post from Christer Ericsson on the subject.

Andreas Brinck
Agreed. Just to add a bit: the benchmarks should use various datasets to show how the algorithm behaves in different situations.
Jay
Somehow I downvoted this answer, not intentionally, and SO won't let me undo it. Strange.
Mike Dunlavey
You cannot benchmark an algorithm. You can just benchmark an implementation of an algorithm.
Gumbo
Hi, but how exactly is "big O" calculated? That these algorithms are often exponential is for sure, but how do you say that one grows faster than another when you can't really set up a function for it?
Andreas Hornig
@Gumbo One definition of an algorithm is a sequence of operations that can be simulated by a Turing-complete system. This means that, for instance, all C programs fulfill the definition of an algorithm; in this case the implementation *is* the algorithm.
Andreas Brinck
+4  A: 

Firstly, one would need to define what "more efficient" means: does it mean faster, using less system resources (such as memory), etc.? (These factors are sometimes mutually exclusive.)

In terms of standard definitions of efficiency one would often use Big-O notation; however, in the "real world" outside academia one would normally profile/benchmark both implementations and then compare the results.

It's often difficult to draw general conclusions from Big-O notation alone, as it is primarily concerned with looping and assumes a fixed cost for the code within a loop, so benchmarking is often the better way to go.

One caveat to watch out for is that the results can sometimes vary significantly based on the dataset size you're working with: for small N in a loop you will sometimes not find much difference.
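For example, a small hypothetical Python benchmark comparing a linear scan with a binary search over a sorted list shows exactly this: at small N the timings are close, while at large N the gap is dramatic:

    import timeit

    def linear_search(xs, target):
        # O(n): scan every element until a match is found
        for i, x in enumerate(xs):
            if x == target:
                return i
        return -1

    def binary_search(xs, target):
        # O(log n): halve the search range of a sorted list each step
        lo, hi = 0, len(xs)
        while lo < hi:
            mid = (lo + hi) // 2
            if xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid
        return lo if lo < len(xs) and xs[lo] == target else -1

    for n in (10, 1_000, 100_000):
        xs = list(range(n))
        target = n - 1  # worst case for the linear scan
        t_lin = timeit.timeit(lambda: linear_search(xs, target), number=100)
        t_bin = timeit.timeit(lambda: binary_search(xs, target), number=100)
        print(f"n={n:>7}: linear {t_lin:.4f}s  binary {t_bin:.4f}s")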

saret
+1 for mentioning there usually is a trade off between performance and other resources like memory.
extraneon
+1  A: 

Hi Andreas

Running speed tests is not going to provide you with as good-quality an answer as mathematics will. I think your outlined approach is correct -- but perhaps your experience and breadth of knowledge let you down when analysing one of your algorithms. I recommend the book 'Concrete Mathematics' by Knuth and others, but there are a lot of other good (and even more not-so-good) books covering the topic of analysing algorithms. Yes, I learned this during my university studies.

Having written all that, most algorithmic complexity is analysed in terms of worst-case execution time (so-called big-O), and it is possible that your data sets do not approach worst cases, in which case the speed tests you run may illuminate your actual performance rather than the algorithm's theoretical performance. So tests are not without their value. I'd say, though, that the value is secondary to that of the mathematics, which shouldn't cause you any undue headaches.

Regards

Mark

High Performance Mark
A: 

This is usually expressed with big O notation. Basically you pick a simple function (like n^2, where n is the number of elements) that dominates the actual number of iterations.

Sionide21
A: 

That depends. At university you do learn to compare algorithms by calculating the number of operations an algorithm executes depending on the size/value of its arguments (compare analysis of algorithms and big O notation). I would require every decent programmer to at least understand the basics of that.
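As a rough illustration of that kind of operation counting (my own sketch, not from this answer), here is a small Python snippet that counts the comparisons of a selection-sort-style double loop and checks the count against the closed form n(n-1)/2, i.e. O(n^2):

    def count_comparisons(n):
        # count the comparisons a selection-sort-style double loop performs
        count = 0
        for i in range(n):
            for j in range(i + 1, n):
                count += 1  # one comparison per inner iteration
        return count

    for n in (10, 100, 1000):
        print(n, count_comparisons(n), n * (n - 1) // 2)  # the two counts match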

However, in practice this is useful only for small algorithms, or small parts of larger algorithms. You will have trouble calculating this for, say, a parsing algorithm for an XML document. But knowing the basics often keeps you from making braindead errors - see, for instance, Joel Spolsky's amusing blog entry "Back to the Basics".

If you have a larger system you will usually either compare algorithms by educated guessing, make time measurements, or find the troublesome spots in your system by using a profiling tool. In my experience this is rarely that important - fighting to reduce the complexity of the system helps more.

hstoerr
*"You will have trouble to calculate this for, say, a parsing algorithm for an XML Document."* Oh, I dunno. It might be hard to *prove*, but any decent parser will be O(n) on ordinary documents, and I'm pretty sure for XML it's possible to achieve O(n) expected time for all documents. The only real complexity comes when there's a DTD, and even there I don't see any real issues.
Jason Orendorff
DTDs are equivalent to context-free grammars, which are recognizable in O(n^3).
Eric Mickelsen
A: 

You might get off easy when there is a significant difference in the asymptotic Big-O complexity class for the worst case or for the expected case. Even then you'll need to show that the hidden constant factors don't make the "better" (from the asymptotic perspective) algorithm slower for reasonably sized inputs.

If the difference isn't large, then given the complexity of today's computers, benchmarking with various datasets is the only correct way. You cannot even begin to take into account all of the convoluted interplay that comes from branch prediction accuracy, data and code cache hit rates, lock contention and so on.
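The hardware-level effects mentioned here (branch prediction, caches, lock contention) are hard to demonstrate from a high-level language, but even a toy Python benchmark shows how strongly the same algorithm's runtime can depend on the dataset; the quadratic de-duplication below is a hypothetical stand-in, not code from this answer:

    import random
    import timeit

    def quadratic_dedup(xs):
        # Same O(n^2) worst-case algorithm regardless of input, but the real
        # timing depends on the data: how large `out` grows and how early
        # each membership scan finds a match.
        out = []
        for x in xs:
            if x not in out:  # linear scan over the output list
                out.append(x)
        return out

    n = 5000
    datasets = {
        "all distinct": list(range(n)),
        "many duplicates": [random.randrange(50) for _ in range(n)],
    }
    for name, data in datasets.items():
        t = timeit.timeit(lambda: quadratic_dedup(data), number=3)
        print(f"{name:>15}: {t:.3f}s")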

Ants Aasma
A: 

As others have rightfully pointed out, a common way is to use Big O notation.

But Big O is only good as long as you consider the processing performance of algorithms that are clearly defined and scoped (such as a bubble sort).

It's when other hardware resources, or other software running in parallel, come into play that the part called engineering kicks in. The hardware has its constraints: memory and disk are limited resources, and disk performance even depends on the mechanics involved.

An operating system scheduler will, for instance, differentiate between I/O-bound and CPU-bound work to improve the total performance for a given application. A DBMS will take into account disk reads and writes, memory and CPU usage, and even networking in the case of clusters.

These things are hard to prove mathematically but are often easily benchmarked against a set of usage patterns.

So I guess the answer is that developers use both theoretical methods, such as Big O, and benchmarking to determine the speed of algorithms and their implementations.

Christian Vik
+5  A: 

While big-O notation can provide you with a way of distinguishing an awful algorithm from a reasonable algorithm, it only tells you about a particular definition of computational complexity. In the real world, this won't actually allow you to choose between two algorithms, since:

1) Two algorithms of the same order of complexity (let's call them f and g, both with O(N^2) complexity) might differ in runtime by several orders of magnitude. Big-O notation does not measure the number of individual steps per iteration, so f might take 100 steps while g takes 10.

In addition, different compilers or programming languages might generate more or fewer instructions for each iteration of the algorithm, and subtle choices in the description of the algorithm can make the cache or CPU hardware perform tens to thousands of times worse, without changing either the big-O order or the number of steps!

2) An O(N) algorithm might outperform an O(log(N)) algorithm

Big-O notation does not measure the number of individual steps per iteration, so if the O(N) algorithm takes 100 steps per element but the O(log(N)) algorithm takes 1000 steps per iteration, then for data sets up to a certain size the O(N) algorithm will be better (see the sketch at the end of this answer).

The same issues apply to compilers as above.


The solution is to do an initial mathematical analysis using Big-O notation, followed by a benchmark-driven performance tuning cycle, using time and hardware performance counter data, as well as a good dollop of experience.
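To make point 2 concrete with made-up constants (100 primitive steps per element for the O(N) algorithm, 1000 steps per iteration for the O(log N) one), a tiny Python sketch can locate the input size where the asymptotically better algorithm actually starts to win:

    import math

    def cost_linear(n):
        # hypothetical O(n) algorithm: about 100 primitive steps per element
        return 100 * n

    def cost_log(n):
        # hypothetical O(log n) algorithm: about 1000 primitive steps per iteration
        return 1000 * math.log2(n)

    # find the first n at which the O(log n) algorithm becomes cheaper
    n = 2
    while cost_linear(n) <= cost_log(n):
        n += 1
    print(f"O(log n) only wins for n >= {n}")  # roughly n >= 59 with these constants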

Alex Brown
O(log(N)) is a lower complexity than O(N). I think you meant O(N log(N)).
Michael Madsen
@MichaelMadsen thanks, fixed.
Alex Brown
A: 

Assuming speed (not memory) is your primary concern, and assuming you want an empirical (not theoretical) way to compare algorithms, I would suggest you prepare several datasets differing in size by a wide margin, like 3 orders of magnitude. Then run each algorithm against every dataset, clock them, and plot the results. The shape of each algorithm's time vs. dataset size curve will give a good idea of its big-O performance.
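A minimal Python sketch of that measurement loop (plotting omitted, and using the built-in sort as a stand-in for the algorithm under test): for an O(n) algorithm each 10x larger input should take roughly 10x longer, for O(n^2) roughly 100x, and so on:

    import random
    import timeit

    def algo(xs):
        # stand-in for the algorithm under test: an O(n log n) comparison sort
        return sorted(xs)

    prev = None
    for n in (1_000, 10_000, 100_000, 1_000_000):  # sizes spanning three orders of magnitude
        data = [random.random() for _ in range(n)]
        t = timeit.timeit(lambda: algo(data), number=5) / 5
        note = f"  ({t / prev:.1f}x the previous size)" if prev else ""
        print(f"n={n:>9}: {t:.5f}s{note}")
        prev = t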

Now, if the sizes of your datasets in practice are pretty well known, an algorithm with better big-O performance is not necessarily faster. To determine which algorithm is faster for a given dataset size, you need to tune each one's performance until it is "as fast as possible" and then see which one wins. Performance tuning requires profiling, or single-stepping at the instruction level, or my favorite technique, stackshots.

Mike Dunlavey
A: 

To answer your question: "Do you also present a mathematical function, or do you just do a speed test of both algorithms?"

Yes to both - let's summarize.

The "Big O" method discussed above refers to the worst case performance as Mark mentioned above. The "speedtest" you mention would be a way to estimate "average case performance". In practice, there may be a BIG difference between worst case performance and average case performance. This is why your question is interesting and useful.

Worst-case performance has been the classical way of defining and classifying algorithm performance. More recently, research has been more concerned with average-case performance or, more precisely, with performance bounds like: 99% of the problems will require fewer than N operations. You can imagine why the second case is far more practical for most problems.

Depending on the application, you might have very different requirements. One application may require response time to be less than 3 seconds 95% of the time - this would lead to defining performance bounds. Another might require performance to NEVER exceed 5 seconds - this would lead to analyzing worst-case performance.
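For instance, one rough way to estimate such a bound empirically (my own sketch; the operation under test is just a hypothetical linear scan) is to time many runs on random inputs and report the worst case alongside the 95th percentile:

    import random
    import statistics
    import time

    def run_once(xs, target):
        # hypothetical operation under test: a linear membership scan
        return target in xs

    samples = []
    for _ in range(500):
        xs = [random.randrange(10_000) for _ in range(5_000)]
        start = time.perf_counter()
        run_once(xs, random.randrange(10_000))
        samples.append(time.perf_counter() - start)

    samples.sort()
    p95 = samples[int(0.95 * len(samples)) - 1]  # approximate 95th percentile
    print(f"worst case:      {max(samples) * 1e3:.2f} ms")
    print(f"95th percentile: {p95 * 1e3:.2f} ms")
    print(f"mean:            {statistics.mean(samples) * 1e3:.2f} ms")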

In both cases this is taught at the university or grad-school level. Anyone developing new algorithms used in real-time applications should learn about the difference between average- and worst-case performance, and should also be prepared to develop simulations and analyses of algorithm performance as part of the implementation process.

Hope this helps.

Grembo
A: 

Big O notation gives you the complexity of an algorithm in the worst case, and is mainly useful for knowing how the algorithm's execution time will grow as the amount of data it has to process grows. For example (C-style syntax, this is not important):

List<int> ls = new List<int>();           (1) O(1)
for (int i = 0; i < ls.count; i++)        (2) O(1)                                     
   foo(i);                                (3) O(log n) (efficient function)

Cost analysis:
    (1)  cost: O(1), constant cost
    (2)  cost: O(1), (int i = 0;)
               O(1), (i < ls.count)
               O(1), (i++)
               ----  total: O(1) (constant cost), but it repeats n times (ls.count)
    (3)  cost: O(log n) (assume it, just an example), 
                        but it repeats n times as it is inside the loop

So, in asymptotic notation, it will have a cost of O(n log n) (not as efficient), which in this example is a reasonable result. But take this example:

List<int> ls = new List<int>();           (1) O(1)
for (int i = 0; i < ls.count; i++)        (2) O(1)                                     
  if ( (i mod 2) == 0) )                  (*) O(1)  (constant cost)
    foo(i);                               (3) O(log n)

Same algorithm, but with a little new line containing a condition. In this case asymptotic notation will choose the worst case and conclude the same result as above, O(n log n), even though it is easily seen that step (3) will execute only half the time.

The data and so on are only examples and may not be exact; I'm just trying to illustrate the behaviour of Big O notation. It mainly gives you the behaviour of your algorithm as the data grows (whether your algorithm is linear, exponential, logarithmic, ...), but this is not what everybody knows as "efficiency", or rather, it is not the only meaning of "efficiency".

However, this method can detect algorithms that are "impossible to process" (sorry, I don't know the exact English word), that is, algorithms that will need a gigantic amount of time even in their early steps (think of factorials, for example, or very big matrices).

If you want a real-world efficiency study, maybe you would prefer to gather some real-world data and do a real-world benchmark of the behaviour of your algorithm with this data. It is not a mathematical style, but it will be more precise in the majority of cases (but not in the worst case! ;) ).

Hope this helps.

j.a.estevan