views: 471
answers: 8

Hi All,

I have a fairly general question. Have you ever had to actually compute (e.g. on paper) the complexity of an algorithm as a programmer, outside of school? And if so, can you give me an example, please?

thank you :)

A: 

I don't know whether you consider programming contests to be school or not, but to see if you can solve a contest problem (with the specified problem-size constraints) within the time limit, you have to roughly estimate the number of operations by considering the complexity of the algorithm you're using.
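
For example, assuming a judge that executes on the order of 1e8 simple operations per second (a ballpark assumption, not any real judge's spec), the estimate is just a multiplication:

import math

def fits(n, ops_for_n, time_limit_s=1.0, ops_per_sec=1e8):
    # Rough back-of-the-envelope check: assumed ~1e8 simple ops/sec
    return ops_for_n(n) / ops_per_sec <= time_limit_s

print(fits(100_000, lambda n: n * n))             # False: ~1e10 ops, ~100 s
print(fits(100_000, lambda n: n * math.log2(n)))  # True: ~1.7e6 ops

So with n = 100,000, an O(n^2) solution is hopeless and you go looking for something like O(n log n).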

Mehrdad Afshari
+4  A: 

If you're writing a piece of software and you can think of multiple ways to implement it, often one of the deciding factors (in addition to conceptual complexity and time to implement) is going to be the algorithmic complexity. Thus, when your boss wants justification for your decision, figuring out the complexity of each is necessary. While some may consider this a form of premature optimization, I think the consensus is that choosing a design appropriate for your problem is just good software engineering.

rmeador
+2  A: 

At work, we casually discuss various algorithms for solving a problem, and complexity plays its role. It's never something where we have to do rigorous proofs of complexity; it's more a general "we could do X, but that'd be O(N^2), which is too much because we could be iterating over millions of rows."

Over-optimization can lead to bad code, but knowing the complexity of your basic algorithm goes a long way in determining the best way to solve a programming problem.

Daniel Lew
A: 

Of course! Any time you're doing something over a million times, you might want to check your algorithm.

One instance was when I was generating millions of images that had to fit into a grid pattern. Sorry -- I can't get much more specific than that.

Eric Wendelin
+3  A: 

Absolutely - we just recently had an issue with our app where it all of a sudden started hitting some major slowdowns. It turned out we had a cubic (O(n^3)) algorithm right in the middle of a very major function. It had been hidden away beneath layers of abstraction, and figuring out what had happened required mapping out the function call graph and looking at the details.

Admittedly, once we had done that, it's not like I had to apply any math to notice an O(n^3) algorithm, but that's mostly because 3 years of Analysis of Algorithms in university have given me a general feel for what a cubic algorithm looks like.

Anyway, it turns out that N had increased just a little bit, but it was right at the cusp of going from taking a few hundred milliseconds to a couple of seconds and then up into minutes - so the issue hadn't shown up until just recently.

For the most part, you're going to be using pre-packaged algorithms that have defined complexities. Quicksort is O(n^2) worst case and O(n*log(n)) average case, binary search is O(log(n)), etc. Libraries will generally specify what their performance characteristics are, and you only need to worry about how they compose.
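
As a rough sketch of what composing those documented costs looks like (my own illustration, in Python): sorting is O(n log n) and each binary search is O(log n), so the whole pass below is O((n + m) log n) without doing any analysis by hand.

import bisect

def contains_all(haystack, needles):
    ordered = sorted(haystack)              # O(n log n), per the library docs
    for x in needles:                       # m iterations
        i = bisect.bisect_left(ordered, x)  # O(log n) binary search
        if i == len(ordered) or ordered[i] != x:
            return False
    return True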

Eclipse
A: 

Yes.

Generally, complexity is obvious enough for napkin guesses, and that's fine for development until I get to the point where I need to measure performance. In many cases the section I was worried about is fine (i.e., the napkin guess was good enough) and something else is slowing the software down. In almost all cases it pays to go with my base assumptions and measure performance later.

However, when I'm doing very time critical code, especially in graphics rendering, I do sit down and determine algorithm complexity and trade-offs associated with doing it in an alternate manner.

Today I'm working with someone else's code, and it's horrendously slow. I don't really care - it can take 10 minutes for all it matters to the overall process it enhances. But I had to look at the code to fix a bug, and this person has loops within loops within loops, most of which search the same list the same way for a different element each time. In essence, he's changed a nice array access, say func(i) { return records[i]; }, into a horrible search routine:

func(i)
{
   for each index in records
      if i == index return records[index]
   next
}

The reality is much worse, but you get the idea.
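
To make the cost concrete - this is my own sketch, not his actual code - call a scan like that from inside another loop and an O(n) pass quietly becomes O(n^2), while direct indexing keeps it at O(n):

def lookup(records, i):
    # the scan above: O(n) to do what records[i] does in O(1)
    for index in range(len(records)):
        if index == i:
            return records[index]

def process_all_slow(records):
    return [lookup(records, i) for i in range(len(records))]   # O(n^2) overall

def process_all_fast(records):
    return [records[i] for i in range(len(records))]           # O(n) overall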

The reason you're learning this in school now is so you can see these structures and automatically categorize them. You likely won't need to actually compute or reduce it to a nice, neat complexity number, but if you don't do it by hand now and see a lot of it, you too will be producing code like that, and you'll simply have no clue.

Adam Davis
A: 

I don't know that I actually write things down, but all the time I evaluate how I'm constructing my algorithms to see if I can improve their efficiency. For code, I ask myself whether there is a way to turn nested loops into a single loop, or to go from a plain loop to a divide-and-conquer approach. That would be the equivalent of going from O(N^2) to O(N), or from O(N) to O(log2(N)). The same is true of SQL -- can I remove a join and do a subquery using an index instead -- perhaps going from O(N^2) to O(N) (or even O(1) if it lets me use indexed queries on both tables).
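
As a rough sketch of the nested-loop rewrite (illustrative only): building a set turns an O(N^2) membership check into a single O(N) pass.

def common_ids_slow(a, b):
    return [x for x in a if any(x == y for y in b)]   # nested loops: O(N^2)

def common_ids_fast(a, b):
    seen = set(b)                                     # one O(N) pass to build
    return [x for x in a if x in seen]                # O(1) average lookups: O(N)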

tvanfosson
do you have an example of that SQL you mention? 'cause joins with indexes and subqueries are roughly equivalent (often one is implemented using the other)
Javier
I wasn't really thinking of a particular example, just the general process.
tvanfosson
With SQL it might be as simple as noticing that adding an index would allow it to use a different algorithm that would be more efficient.
tvanfosson
A: 

Yes and no.

I was writing a tool to flatten the RPM dependencies of a set of packages into a single chain. The obvious solution ran too slowly, so I dug through my memory a bit to remember an O(n+m) algorithm from graph theory class. I did a little back-of-the-envelope calculation to make sure that it really was O(n+m), then wrote it and put it into production :)
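
For a rough idea of what an O(n+m) flattening looks like - this is a generic sketch of Kahn's topological sort, not necessarily the exact algorithm I used - every package and every dependency edge gets touched exactly once:

from collections import deque

def install_order(deps):
    # deps: {package: [packages it depends on]} -- hypothetical input shape
    dependents = {}
    indegree = {pkg: len(reqs) for pkg, reqs in deps.items()}
    for pkg, reqs in deps.items():
        for r in reqs:
            dependents.setdefault(r, []).append(pkg)
            indegree.setdefault(r, 0)
    ready = deque(p for p, d in indegree.items() if d == 0)
    order = []
    while ready:                        # each package and edge handled once: O(n+m)
        p = ready.popleft()
        order.append(p)
        for q in dependents.get(p, []):
            indegree[q] -= 1
            if indegree[q] == 0:
                ready.append(q)
    return order                        # dependencies come before their dependents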

ephemient