views: 1516
answers: 29
What algorithm taught you the most about programming or a specific language feature?

We have all had those moments where all of a sudden we know, just know, we have learned an important lesson for the future based on finally understanding an algorithm written by a programmer a couple of steps up the evolutionary ladder. Whose ideas and code had the magic touch on you?

+8  A: 

Quicksort. It showed me that recursion can be powerful and useful.

mk
+1  A: 

This is a slow one :)

I learned lots about both C and computers in general by understanding Duff's Device and XOR swaps.
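
For reference, Duff's Device interleaves a switch with the body of a do-while so the copy loop is unrolled by 8, with the switch jumping into the middle of the loop to handle the remainder on the first pass. A minimal sketch (valid as both C and C++; this is the memory-to-memory variant, not Duff's original register-output version, and the function name is mine):

// copies count bytes (count must be > 0); the case labels fall
// through into the unrolled do-while body
void duff_copy(char *to, const char *from, int count) {
    int n = (count + 7) / 8;
    switch (count % 8) {
    case 0: do { *to++ = *from++;
    case 7:      *to++ = *from++;
    case 6:      *to++ = *from++;
    case 5:      *to++ = *from++;
    case 4:      *to++ = *from++;
    case 3:      *to++ = *from++;
    case 2:      *to++ = *from++;
    case 1:      *to++ = *from++;
            } while (--n > 0);
    }
}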

EDIT:

@Jason Z, that's my XOR swap :) Cool, isn't it?

sparkes
+9  A: 

"To iterate is human, to recurse divine" - quoted in 1989 at college.

P.S. Posted by Woodgnome while waiting for invite to join

Annan
+2  A: 

I don't know if this qualifies as an algorithm, or just a classic hack. In either case, it helped to get me to start thinking outside the box.

Swap 2 integers without using an intermediate variable (in C++)

void InPlaceSwap(int& a, int& b) {
    // XOR trick: each step folds one value into the other,
    // so no temporary variable is needed
    a ^= b;
    b ^= a;
    a ^= b;
}
Jason Z
Just out of sheer curiosity, is there any reason to do things that way instead of using std::swap?
Jason Baker
In very low memory environments (e.g. - embedded systems), this can save the allocation of additional memory. Kinda weak, I know. Like I said, the emphasis was on thinking outside the box. At first glance you would think it is impossible to swap without an intermediate variable.
Jason Z
This code would fail if it is passed the same variable in both parameters. Not as unlikely as it sounds!
Dean Svendsen
A: 

For me, the simple swap in Kelly & Pohl's A Book on C to demonstrate call-by-reference flipped me out when I first saw it. I looked at that, and pointers snapped into place. Verbatim...

void swap(int *p, int *q)
{
   int temp;

   temp = *p;
   *p = *q;
   *q = temp;
}
Baltimark
A: 

The Towers of Hanoi algorithm is one of the most beautiful algorithms. It shows how you can use recursion to solve a problem in a much more elegant fashion than the iterative method.
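
The whole solution is only a few lines; a sketch in C++ (the function and peg names are just illustrative):

#include <iostream>

// to move n disks: park n-1 disks on the spare peg, move the largest
// disk, then move the n-1 parked disks on top of it
void hanoi(int n, char from, char to, char spare) {
    if (n == 0) return;
    hanoi(n - 1, from, spare, to);
    std::cout << "move disk " << n << ": " << from << " -> " << to << "\n";
    hanoi(n - 1, spare, to, from);
}

int main() { hanoi(3, 'A', 'C', 'B'); }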

Alternatively, the recursive algorithms for the Fibonacci series and for calculating powers of a number demonstrate the reverse situation: recursion used for its own sake rather than providing good value.

Krishna Kumar
+19  A: 

General algorithms:

  • Quicksort (and its average complexity analysis), shows that randomizing your input can be a good thing!;
  • balanced trees (AVL trees for example), a neat way to balance search/insertion costs;
  • Dijkstra and Ford-Fulkerson algorithms on graphs (I like the fact that the second one has many applications);
  • the LZ* family of compression algorithms (LZW for example), data compression sounded kind of magic to me until I discovered it (a long time ago :) );
  • the FFT, ubiquitous (re-used in so many other algorithms);
  • the simplex algorithm, ubiquitous as well.

Numerical related:

  • Euclid's algorithm to compute the gcd of two integers: one of the first algorithms, simple and elegant, powerful, has lots of generalizations (see the sketch just after this list);
  • fast multiplication of integers (the FFT-based Schönhage-Strassen algorithm, for example);
  • Newton iterations to invert / find a root, a very powerful meta-algorithm.
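
To show just how simple Euclid's algorithm really is, here it is in full (a quick C++ sketch):

// gcd(a, b) = gcd(b, a mod b); repeat until the remainder hits 0
unsigned gcd(unsigned a, unsigned b) {
    while (b != 0) {
        unsigned r = a % b;
        a = b;
        b = r;
    }
    return a;
}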

Number theory-related:

  • AGM-related algorithms (examples): leads to very simple and elegant algorithms to compute pi (and much more!), though the theory is quite profound (Gauss introduced elliptic functions and modular forms from it, so you can say that it gave birth to algebraic geometry...);
  • the number field sieve (for integer factorization): very complicated, but quite a nice theoretical result (this also goes for the AKS algorithm, which proved that PRIMES is in P).

I also enjoyed studying quantum computing (the Shor and Deutsch-Jozsa algorithms, for example): this teaches you to think out of the box.

As you can see, I'm a bit biased towards maths-oriented algorithms :)

OysterD
A: 

For some reason Bubble Sort has always stood out to me. Not because it's elegant or good, just because it had/has a goofy name, I suppose.

Owen
A: 

The iterative algorithm for Fibonacci, because for me it nailed down the fact that the most elegant code (in this case, the recursive version) is not necessarily the most efficient.

To elaborate: the "fib(10) = fib(9) + fib(8)" approach means that fib(9) will be evaluated as fib(8) + fib(7). So fib(8) (and therefore fib(7), fib(6), and so on) will be evaluated twice.

The iterative method (curr = prev1 + prev2 in a for loop) does not tree out this way, nor does it take as much memory, since it uses only 3 transient variables instead of n frames on the recursion stack.
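
In code, the iterative version is something like this (a C++ sketch):

// O(n) time, O(1) space: each step needs only the previous two values
unsigned long long fib(unsigned n) {
    unsigned long long prev = 0, curr = 1;
    for (unsigned i = 0; i < n; ++i) {
        unsigned long long next = prev + curr;
        prev = curr;
        curr = next;
    }
    return prev;  // fib(0) = 0, fib(1) = 1, fib(2) = 1, ...
}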

I tend to strive for simple, elegant code when I'm programming, but this is the algorithm that helped me realize that this isn't the end-all-be-all for writing good software, and that ultimately the end users don't care how your code looks.

callingshotgun
+6  A: 

This one might sound trivial, but it was a revelation for me at the time. I was in my very first programming class (VB6), and the prof had just taught us about random numbers. He gave the following instructions: "Create a virtual lottery machine. Imagine a glass ball full of 100 ping pong balls marked 0 to 99. Pick them randomly and display their number until they have all been selected, no duplicates."

Everyone else wrote their program like this: pick a ball, put its number into an "already selected" list, and then pick another ball. Check to see if it's already selected; if so, pick another ball; if not, put its number on the "already selected" list, and so on...

Of course by the end they were making hundreds of comparisons to find the few balls that had not already been picked. It was like throwing the balls back into the jar after selecting them. My revelation was to throw balls away after picking.

I know this sounds mind-numbingly obvious but this was the moment that the "programming switch" got flipped in my head. This was the moment that programming went from trying to learn a strange foreign language to trying to figure out an enjoyable puzzle. And once I made that mental connection between programming and fun there was really no stopping me.
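
"Throwing the balls away" looks something like this (a C++ sketch; the class was VB6, so this is a translation):

#include <iostream>
#include <random>
#include <utility>
#include <vector>

int main() {
    std::vector<int> balls(100);
    for (int i = 0; i < 100; ++i) balls[i] = i;   // balls marked 0 to 99

    std::mt19937 rng(std::random_device{}());
    for (int remaining = 100; remaining > 0; --remaining) {
        std::uniform_int_distribution<int> pick(0, remaining - 1);
        int i = pick(rng);
        std::cout << balls[i] << "\n";
        // the picked ball leaves the jar: swap it out of the live range
        std::swap(balls[i], balls[remaining - 1]);
    }
}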

Autodidact
I had that exercise in school as well. My solution was to shuffle the balls by swapping the value in each index of an array with the value at a random index, then print out the array values.
Dour High Arch
@Dour: I did the same, but we were wrong - http://www.codinghorror.com/blog/archives/001015.html :)
JoeBloggs
+1  A: 

Quicksort: Until I got to college, I had never questioned whether brute force Bubble Sort was the most efficient way to sort. It just seemed intuitively obvious. But being exposed to non-obvious solutions like Quicksort taught me to look past the obvious solutions to see if something better is available.

Bruce Alderman
A: 

@callingshotgun wrote:

"The iterative algorithm for Fibonacci, because for me it nailed down the fact that the most elegant code (in this case, the recursive version) is not necessarily the most efficient. The iterative method (curr = prev1 + prev2 in a for loop) does not tree out this way, nor does it take as much memory, since it uses only 3 transient variables instead of n frames on the recursion stack."

You know that Fibonacci has a closed-form solution that allows direct computation of the result in a fixed number of steps, right? Namely, (phi^n - (1 - phi)^n) / sqrt(5). It always strikes me as somewhat remarkable that this should yield an integer, but it does.

phi is the golden ratio, of course; (1 + sqrt(5)) / 2.
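
A quick C++ sketch of the formula; note that with plain doubles the rounding only stays exact up to roughly fib(70):

#include <cmath>
#include <cstdio>

long long fib_closed(int n) {
    const double phi = (1.0 + std::sqrt(5.0)) / 2.0;
    // the (1 - phi)^n term shrinks toward 0, which is why rounding works
    return std::llround((std::pow(phi, n) - std::pow(1.0 - phi, n)) / std::sqrt(5.0));
}

int main() {
    for (int n = 0; n <= 10; ++n)
        std::printf("fib(%d) = %lld\n", n, fib_closed(n));
}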

DrPizza
+1  A: 

Minimax taught me that chess programs aren't smart; they can just think more moves ahead than you can.

Maudite
A: 

@Krishna Kumar

The bitwise solution is even more fun than the recursive solution.

MSN

Mat Noguchi
+3  A: 

Huffman coding would be mine. I had originally made my own dumb version that reduced the number of bits used to encode text from 8 down to fewer, but I had not thought about using a variable number of bits depending on symbol frequency. Then I found Huffman coding described in a magazine article, and it opened up lots of new possibilities.
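
The construction itself is just a priority queue of nodes; a C++ sketch with made-up frequencies (it leaks the nodes, being a sketch):

#include <iostream>
#include <queue>
#include <string>
#include <utility>
#include <vector>

// repeatedly merge the two least-frequent nodes: frequent symbols end
// up near the root (short codes), rare ones deeper (long codes)
struct Node {
    char symbol;
    long freq;
    Node *left = nullptr, *right = nullptr;
};

void print_codes(const Node* n, const std::string& prefix) {
    if (!n->left) {                        // leaves carry the symbols
        std::cout << n->symbol << ": " << prefix << "\n";
        return;
    }
    print_codes(n->left, prefix + "0");
    print_codes(n->right, prefix + "1");
}

int main() {
    std::vector<std::pair<char, long>> freqs = {
        {'a', 45}, {'b', 13}, {'c', 12}, {'d', 16}, {'e', 9}, {'f', 5}};

    auto cmp = [](const Node* a, const Node* b) { return a->freq > b->freq; };
    std::priority_queue<Node*, std::vector<Node*>, decltype(cmp)> pq(cmp);
    for (const auto& p : freqs) pq.push(new Node{p.first, p.second});

    while (pq.size() > 1) {
        Node* a = pq.top(); pq.pop();
        Node* b = pq.top(); pq.pop();
        pq.push(new Node{'\0', a->freq + b->freq, a, b});
    }
    print_codes(pq.top(), "");
}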

Lasse V. Karlsen
+2  A: 

For some reason I like the Schwartzian transform:

@sorted = map  { $_->[0] }                # 3. strip the key, keep the item
          sort { $a->[1] cmp $b->[1] }    # 2. sort on the cached key
          map  { [$_, foo($_)] }          # 1. pair each item with foo($_)
                @unsorted;

Where foo($_) represents a compute-intensive expression that takes $_ (each item of the list in turn) and produces the corresponding value to sort by, computed only once per item.

Pat
Ok, I *think* this could be interesting. The language you're using is a bit obscure though :/. Could you provide a bit of commentary?
Edmund
+3  A: 

Quicksort in Haskell:

qsort []     = []
qsort (x:xs) = qsort (filter (< x) xs) ++ [x] ++ qsort (filter (>= x) xs)

Although I couldn't write Haskell at the time, I did understand this code, and with it recursion and the quicksort algorithm. It just clicked, and there it was...

Alexander Stolz
Yes, I also remember that one - very cool; it is not an in-place algorithm, but anyway...
blabla999
A: 

@DrPizza wrote: "You know that Fibonacci has a closed-form solution that allows direct computation of the result in a fixed number of steps, right? Namely, (phi^n - (1 - phi)^n) / sqrt(5). It always strikes me as somewhat remarkable that this should yield an integer, but it does. phi is the golden ratio, of course; (1 + sqrt(5)) / 2."

Honestly didn't know that. Thanks for the tip:P

callingshotgun
+5  A: 

Bresenham's line drawing algorithm got me interested in realtime graphics rendering. This can be used to render filled polygons, like triangles, for things like 3D model rendering.
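
The inner loop is surprisingly small and uses only integer arithmetic; a C++ sketch of the all-octant version (the pixel write is faked with a printf):

#include <cstdio>
#include <cstdlib>

void draw_line(int x0, int y0, int x1, int y1) {
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;   // error term tracks distance from the true line
    for (;;) {
        std::printf("plot(%d, %d)\n", x0, y0);   // stand-in for a pixel write
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }   // step in x
        if (e2 <= dx) { err += dx; y0 += sy; }   // step in y
    }
}

int main() { draw_line(0, 0, 6, 4); }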

spoulson
+4  A: 

Recursive Descent Parsing - I remember being very impressed how such simple code could do something so seemingly complex.
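
A toy example of the idea: a complete parser-evaluator for single-digit arithmetic, one function per grammar rule (a C++ sketch with no error handling; it assumes well-formed input):

#include <iostream>
#include <string>

// grammar: expr := term ('+' term)* ; term := factor ('*' factor)* ;
//          factor := digit | '(' expr ')'
struct Parser {
    std::string src;
    size_t pos = 0;

    char peek() { return pos < src.size() ? src[pos] : '\0'; }

    long factor() {
        if (peek() == '(') {
            ++pos;                // consume '('
            long v = expr();
            ++pos;                // consume ')'
            return v;
        }
        return src[pos++] - '0';  // single digit
    }
    long term() {
        long v = factor();
        while (peek() == '*') { ++pos; v *= factor(); }
        return v;
    }
    long expr() {
        long v = term();
        while (peek() == '+') { ++pos; v += term(); }
        return v;
    }
};

int main() {
    Parser p{"(1+2)*3+4"};
    std::cout << p.expr() << "\n";   // prints 13
}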

Henry
+1  A: 

For me it's the weak-heapsort algorithm, because it shows (1) how much a wisely chosen data structure (and the algorithms that work on it) can influence performance, and (2) that fascinating things can be discovered even in old, well-known things. (Weak-heapsort is the best variant of all heap sorts, which was proven eight years later.)

DR
A: 

An algorithm that generates a list of primes by comparing each number to the current list of primes, adding it if it's not found, and returning the list of primes at the end. Mind-bending in several ways, not the least of which is the idea of using the partially completed output as the primary search criterion.
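
In code it is something like this (a C++ sketch):

#include <iostream>
#include <vector>

// a number is prime iff no previously found prime divides it, so the
// partially built output doubles as the test set
std::vector<int> primes_up_to(int limit) {
    std::vector<int> primes;
    for (int n = 2; n <= limit; ++n) {
        bool divisible = false;
        for (int p : primes) {
            if (p * p > n) break;   // only primes up to sqrt(n) matter
            if (n % p == 0) { divisible = true; break; }
        }
        if (!divisible) primes.push_back(n);
    }
    return primes;
}

int main() {
    for (int p : primes_up_to(50)) std::cout << p << " ";
    std::cout << "\n";
}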

J.T. Hurley
A: 

Storing two pointers in a single word for a doubly linked list taught me the lesson that you can do very bad things in C indeed (things a conservative GC will have lots of trouble with).
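
One way to do it (I assume this is the trick meant) is the XOR linked list; a C++ sketch. Round-tripping pointers through uintptr_t is exactly the kind of thing a conservative GC cannot see through:

#include <cstdint>
#include <iostream>
#include <vector>

// each node stores prev XOR next in a single word; traversal recovers
// the next address from the previous node's address
struct Node {
    int value;
    std::uintptr_t link;   // (uintptr_t)prev ^ (uintptr_t)next
};

int main() {
    std::vector<Node> nodes = {{1, 0}, {2, 0}, {3, 0}, {4, 0}};
    for (std::size_t i = 0; i < nodes.size(); ++i) {
        std::uintptr_t prev = i > 0 ? (std::uintptr_t)&nodes[i - 1] : 0;
        std::uintptr_t next = i + 1 < nodes.size() ? (std::uintptr_t)&nodes[i + 1] : 0;
        nodes[i].link = prev ^ next;
    }

    // walk forward: next = link ^ prev
    std::uintptr_t prev = 0;
    Node* cur = &nodes[0];
    while (cur) {
        std::cout << cur->value << " ";
        std::uintptr_t next = cur->link ^ prev;
        prev = (std::uintptr_t)cur;
        cur = (Node*)next;
    }
    std::cout << "\n";
}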

blabla999
A: 

The solution I've been most proud of was writing something very similar to the DisplayTag package. It taught me a lot about code design, maintainability, and reuse. I wrote it well before DisplayTag existed, but it was sunk under an NDA, so I couldn't open-source it. I can still speak gushingly about it in job interviews, though.

Dean J
A: 

Not my favorite, but the Miller-Rabin algorithm for testing primality showed me that being right almost all the time is good enough almost all the time. (i.e., don't mistrust a probabilistic algorithm just because it has a probability of being wrong.)
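
A C++ sketch of the test (it assumes a compiler with __uint128_t, such as GCC or Clang; the round count and RNG seed are arbitrary):

#include <cstdint>
#include <iostream>
#include <random>

using u64 = std::uint64_t;

u64 mul_mod(u64 a, u64 b, u64 m) { return (__uint128_t)a * b % m; }

u64 pow_mod(u64 a, u64 e, u64 m) {
    u64 r = 1;
    for (a %= m; e; e >>= 1, a = mul_mod(a, a, m))
        if (e & 1) r = mul_mod(r, a, m);
    return r;
}

// write n-1 = d * 2^r, then ask random bases to "witness" that n is
// composite; each round that finds no witness leaves at most a 1/4
// chance that a composite n slipped through
bool probably_prime(u64 n, int rounds = 20) {
    if (n < 4) return n == 2 || n == 3;
    if (n % 2 == 0) return false;
    u64 d = n - 1;
    int r = 0;
    while (d % 2 == 0) { d /= 2; ++r; }

    std::mt19937_64 rng(12345);
    std::uniform_int_distribution<u64> dist(2, n - 2);
    for (int i = 0; i < rounds; ++i) {
        u64 x = pow_mod(dist(rng), d, n);
        if (x == 1 || x == n - 1) continue;
        bool witness = true;
        for (int j = 1; j < r && witness; ++j) {
            x = mul_mod(x, x, n);
            if (x == n - 1) witness = false;
        }
        if (witness) return false;   // definitely composite
    }
    return true;   // prime with probability >= 1 - 4^(-rounds)
}

int main() {
    std::cout << probably_prime(1000000007ULL) << "\n";   // 1: prime
    std::cout << probably_prime(1000000011ULL) << "\n";   // 0: 3 * 333333337
}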

sykora
A: 

Map/Reduce. Two simple concepts that fit together to make a load of data-processing tasks easier to parallelize.

Oh... and it's only the basis of massively-parallel indexing:

http://labs.google.com/papers/mapreduce.html
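
Squeezed into one process, the shape looks like this (a C++ word-count sketch; the whole point, of course, is that the map and reduce phases parallelize across machines):

#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> docs = {"the quick brown fox", "the lazy dog"};

    // map + shuffle: emit (word, 1) for every word, grouped by key
    std::map<std::string, std::vector<int>> groups;
    for (const auto& doc : docs) {
        std::istringstream in(doc);
        std::string word;
        while (in >> word) groups[word].push_back(1);
    }

    // reduce: each group is summed independently of the others
    for (const auto& kv : groups) {
        int total = 0;
        for (int c : kv.second) total += c;
        std::cout << kv.first << ": " << total << "\n";
    }
}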

stusmith
A: 

I don't have a favourite -- there are so many beautiful ones to pick from -- but one I've always found intriguing is the Bailey–Borwein–Plouffe (BBP) formula, which enables you to calculate an arbitrary digit of pi without knowledge about the preceding digits.
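
Summing the series just gives pi back; the digit-extraction magic comes from evaluating the same terms with modular arithmetic so the k-th hex digit pops out on its own. A C++ sketch of the plain sum:

#include <cmath>
#include <cstdio>

int main() {
    // BBP: pi = sum over k of (1/16^k) *
    //           (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6))
    double pi = 0.0;
    for (int k = 0; k < 12; ++k) {
        pi += (4.0 / (8 * k + 1) - 2.0 / (8 * k + 4)
             - 1.0 / (8 * k + 5) - 1.0 / (8 * k + 6)) / std::pow(16.0, k);
    }
    std::printf("%.15f\n", pi);   // 3.141592653589793
}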

csl
+1  A: 

Floyd-Warshall all-pairs shortest paths algorithm

procedure FloydWarshall()
    for k := 1 to n
        for i := 1 to n
            for j := 1 to n
                path[i][j] = min(path[i][j], path[i][k] + path[k][j])

Here's why it's cool: when you first learn about the shortest-path problem in your graph theory course, you probably start with Dijkstra's algorithm, which solves single-source shortest path. It's quite complicated at first, but then you get over it, and you fully understand it.

Then the teacher says "Now we want to solve the same problem but for ALL sources". You think to yourself, "Oh god, this is going to be a much harder problem! It's going to be at least N times more complicated than Dijkstra's algorithm!!!".

Then the teacher gives you Floyd-Warshall. And your mind explodes. Then you start to tear up at how beautifully simple the algorithm is. It's just a triply-nested loop. It only uses a simple array for its data structure.


The most eye-opening part for me is the following realization: say you have a solution for problem A. Then you have a bigger "superproblem" B which contains problem A. The solution to problem B may in fact be simpler than the solution to problem A.

polygenelubricants
A: 

RSA introduced me to the world of modular arithmetic, which can be used to solve a surprising number of interesting problems!
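
A toy C++ sketch with the classic textbook numbers (p = 61, q = 53; real RSA needs enormous primes and proper padding, this only shows the modular arithmetic):

#include <cstdint>
#include <iostream>

std::uint64_t pow_mod(std::uint64_t a, std::uint64_t e, std::uint64_t m) {
    std::uint64_t r = 1;
    for (a %= m; e; e >>= 1, a = a * a % m)
        if (e & 1) r = r * a % m;
    return r;
}

int main() {
    // n = 61 * 53 = 3233, e = 17, d = 2753 (e * d = 1 mod phi(n) = 3120)
    const std::uint64_t n = 3233, e = 17, d = 2753;
    std::uint64_t msg = 65;
    std::uint64_t c = pow_mod(msg, e, n);    // encrypt: c = msg^e mod n
    std::uint64_t back = pow_mod(c, d, n);   // decrypt: msg = c^d mod n
    std::cout << c << " " << back << "\n";   // 2790 65
}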

BlueRaja - Danny Pflughoeft