views: 1536
answers: 7

Is there any good resource (book, reference, web site, application...) that explains how to compute the time complexity of an algorithm?

It is hard to make these things concrete in my mind. Sometimes a text says a loop has time complexity lg n; then, because of another enclosing loop, it becomes n*lg n; sometimes big-Omega notation is used to express it, sometimes big-O, and so on.
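For example, here is a rough sketch of the kind of loops I mean (the code and function names are just made up to illustrate):

    def halving_loop(n):
        # i is halved on every pass, so the loop runs about lg n times: O(log n).
        i = n
        steps = 0
        while i > 1:
            i //= 2
            steps += 1
        return steps

    def nested_loop(n):
        # Running the halving loop n times gives roughly n * lg n steps: O(n log n).
        total = 0
        for _ in range(n):
            total += halving_loop(n)
        return total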

These things are quite abstract to me. So, I need a resource with good explanations and plenty of examples that help me see what's going on.

Hopefully, I have explained my problem clearly. I am quite sure that everyone who has just started studying algorithms has the same problem as me, so maybe the experienced guys here can share their experience with us. Thanks...

A: 

The classic set of books on the subject is Knuth's Art of Computer Programming series. They're heavy on theory and formal proofs, so brush up on your calculus before you tackle them.

Sherm Pendley
+7  A: 

I think the best book is Introduction to Algorithms by Cormen, Leiserson, Rivest and Stein.

S.Lott
The provided link does not work.
lmsasu
+2  A: 

I read Christos Papadimitriou's Computational Complexity at university and really enjoyed it. It's not easy material and the book is long, but it's well written (i.e., understandable and suitable for self-teaching) and it contains lots of useful knowledge, much more than just "how do I figure out the time complexity of algorithm x".

Simon
+1  A: 

The Wikipedia article on computational complexity theory has a list of references, including a link to a draft of Computational Complexity: A Modern Approach, a textbook by Sanjeev Arora and Boaz Barak (Cambridge University Press).

gimel
+1  A: 

Guys, you're all recommending true complexity-theory books -- Arora and Barak covers things like PCPs, interactive proofs, quantum computing, and expander graphs -- topics that most programmers/software developers have never heard of and will never use. Papadimitriou is in the same category. Knuth is freaking impossible to read (and I was a CS/Math major) and gives zero intuition on how things work.

If you want a simple way to compute running times and to get the flavour of the analysis, try the first chapter or so of Kleinberg and Tardos' Algorithm Design: it holds your hand through the fundamentals, and then you can work up to much harder problems.

Ying Xiao
A: 

A course in Discrete Mathematics is sometimes given before Introduction to Algorithms.

hishadow
A: 

I agree that Introduction to Algorithms is a good book. For more detail on, e.g., how to solve recurrence relations, see Concrete Mathematics by Graham, Knuth, and Patashnik. A good book on computational complexity itself is the one by Papadimitriou, but that last book might be a bit too thorough if you only want to calculate the complexity of given algorithms.

A short overview of big-O/big-Omega notation:

  • If you can give an algorithm that solves a problem in time c*(n log n) (c being a constant), then the time complexity of that problem is O(n log n). The big-O gets rid of the c, that is, any constant factors not depending on the input size n. Big-O gives an upper bound on the running time, because you have shown (by giving an algorithm) that you can solve the problem in that amount of time.
  • If you can give a proof that any algorithm solving the problem takes at least time c*(n log n), then the problem is in Omega(n log n). Big-Omega gives a lower bound on the problem. In most cases lower bounds are much harder to prove than upper bounds, because you have to show that every possible algorithm for the problem takes at least c*(n log n) time. Knowing a lower bound does not mean knowing an algorithm that reaches that lower bound.
  • If you have a lower bound, e.g. Omega(n log n), and an algorithm that solves the problem in that time (an upper bound), then the problem is in Theta(n log n). This means an "optimal" algorithm for this problem is known. Of course this notation hides the constant factor c, which can be quite big (that's why I wrote optimal in quotes).

Note that when using these notations you are usually talking about the worst-case running time of an algorithm.
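To make this concrete, here is a small sketch of my own (not from any of the books above): merge sort gives an O(n log n) upper bound for sorting, and the well-known Omega(n log n) lower bound for comparison-based sorting then puts sorting in Theta(n log n) in that model.

    def merge_sort(a):
        # Recurrence: T(n) = 2*T(n/2) + c*n for the split plus the merge,
        # which solves to T(n) = O(n log n).
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left = merge_sort(a[:mid])
        right = merge_sort(a[mid:])
        # Merge the two sorted halves in linear time.
        merged = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]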

mdm