views:

1102

answers:

11

It is said that true mastery of a subject is achieved when you can explain it simply. Unfortunately, as an Electronics Engineer I lack some of the more formal background of computer science.

Assuming some background in math, how would you explain computational complexity theory to a newcomer? Where should one start to get into this very important topic of CS? I understand some of the concepts involved, but I lack the general overview that would let me dive into the details.

Edit: Big O notation is clear to me. What I am looking for is more an explanation of the P = NP question. What is a problem in P? What is NP? What is NP-hard?

Edit 2: Some really great answers! Now a lot of things make more sense. The problem is that Wikipedia is often written as if the reader already understands all the concepts involved. With this quick overview, many of those articles make a lot more sense.

A: 

The simplest answer: Wikipedia.

Gilles
I have actually found most CS Wikipedia articles to be horribly written. However, you can find some good links in the reference/citation sections.
James McMahon
+5  A: 

Unfortunately, the best two books I am aware of (Garey and Johnson and Hopcroft and Ullman) both start at the level of graduate proof-oriented mathematics. This is almost certainly necessary, as the whole issue is very easy to misunderstand or mischaracterize. Jeff nearly got his ears chewed off when he attempted to approach the matter in too folksy/jokey a tone.

Perhaps the best way is to simply do a lot of hands-on work with big-O notation using lots of examples and exercises. See also this answer. Note, however, that this is not quite the same thing: individual algorithms can be given asymptotic bounds, but saying that a problem is of a certain complexity is a statement about every possible algorithm for it. This is why the proofs are so complicated!
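For instance, a couple of lines of Python make the asymptotic gap tangible by counting operations directly (the function names and examples here are my own illustration):

```python
# Count basic operations for a linear scan vs. a quadratic pairwise
# comparison, to see the big-O gap concretely as n grows.

def linear_scan(items, target):
    """O(n): at most one comparison per element."""
    ops = 0
    for x in items:
        ops += 1
        if x == target:
            break
    return ops

def all_pairs(items):
    """O(n^2): one comparison per ordered pair of elements."""
    ops = 0
    for a in items:
        for b in items:
            ops += 1
    return ops

for n in (10, 100, 1000):
    data = list(range(n))
    # worst case for the scan: target absent
    print(n, linear_scan(data, -1), all_pairs(data))
```

Doubling n doubles the scan's work but quadruples the pairwise count, which is the whole point of the notation.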

jonrock
I actually like Jeff's write up for NP complete.
James McMahon
@James McMahon: It's a fun post, but it also happens to (a) not explain what NP-complete is at all and (b) say things that are startlingly, unimaginably wrong, like "Nobody can define what makes a problem NP-complete, exactly"! When this was pointed out, he added an update mumbling something about the P=NP problem, when in fact the only reason we can talk of the P=NP problem is because we *can* define what an NP-complete problem is. If you read the comments, he says further scary rubbish like "Apparently we can't *prove* anything is NP-complete"(!).
ShreevatsaR
Even his only description of NP-completeness — that it's "fancy computer science jargon shorthand for 'incredibly hard'" — is wrong: there exist problems arbitrarily harder than NP-complete. It's clear he doesn't understand either the class NP, or completeness, let alone NP-completeness, and hasn't read even the first chapter of the book he recommends (which he admits). Really, if you read his comments on the second page, it's very mysterious whether he's failing to understand the idea, or intentionally trolling.
ShreevatsaR
+2  A: 

I remember "Computational Complexity" by Papadimitriou (I hope I spelled the name right) as a good book.

flolo
Thank you - you saved me looking it up. I remember that as being a great book, if a little heavy on the maths.
Unsliced
+1  A: 

Very much simplified: a problem is NP-hard if the only way to solve it is by enumerating all possible answers and checking each one.
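For what it's worth, here is a Python sketch of the enumeration picture described above, using boolean satisfiability as the example problem (the function and the clause encoding are my own illustration):

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Try every possible truth assignment (2**n_vars of them).

    clauses: list of clauses; each clause is a list of ints, where
    i means "variable i is true" and -i means "variable i is false".
    Returns a satisfying assignment, or None if there is none.
    """
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment  # satisfying assignment found
    return None

# (x1 or x2) and (not x1 or x2): satisfied by x2 = True
print(brute_force_sat([[1, 2], [-1, 2]], 2))  # (False, True)
```

The loop is exponential in the number of variables, which is exactly the "enumerate everything" cost the answer alludes to.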

mfx
Wrong. If we knew that "the only way to solve SAT is by enumerating all possible answers", we would have proved P≠NP already.
ShreevatsaR
There are better ways of exactly-solving many NP-Hard problems than the trial-and-error approach you describe. P != NP, only implies that there are no polynomial-time algorithms.
David Nehme
@shreevatsa: Oh please, this is a *simplification*. Correctly spoken, NP-hard means "you need a nondeterministic polynomial-time algorithm"; but since "nondeterministic" just means "the machine guesses the correct answer", that's about it.
mfx
@david: sure there are better ways (heuristics) for most NP-hard problems; but these heuristics will only work for a subset of these problems, never for all of them -- otherwise you would have reduced the problem to something easier than NP.
mfx
+4  A: 

Michael Sipser's Introduction to the Theory of Computation is a great book, and is very readable. Another great resource is Scott Aaronson's Great Ideas in Theoretical Computer Science course.

The formalism that is used is to look at decision problems (problems with a Yes/No answer, e.g. "does this graph have a Hamiltonian cycle") as "languages" -- sets of strings -- inputs for which the answer is Yes. There is a formal notion of what a "computer" is (Turing machine), and a problem is in P if there is a polynomial time algorithm for deciding that problem (given an input string, say Yes or No) on a Turing machine.

A problem is in NP if it is checkable in polynomial time, i.e. if, for inputs where the answer is Yes, there is a (polynomial-size) certificate given which you can check that the answer is Yes in polynomial time. [E.g. given a Hamiltonian cycle as certificate, you can obviously check that it is one.]
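That checking step really is easy to implement; here is a Python sketch (the adjacency-set representation and function name are my own):

```python
def is_hamiltonian_cycle(graph, cycle):
    """Verify a claimed Hamiltonian-cycle certificate in polynomial time.

    graph: dict mapping each vertex to a set of its neighbours.
    cycle: proposed ordering of the vertices, e.g. [0, 1, 2, 3].
    """
    # Must visit every vertex exactly once.
    if sorted(cycle) != sorted(graph):
        return False
    # Every consecutive pair (wrapping around) must be an edge.
    return all(cycle[(i + 1) % len(cycle)] in graph[cycle[i]]
               for i in range(len(cycle)))

# A 4-cycle: 0-1-2-3-0
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_hamiltonian_cycle(square, [0, 1, 2, 3]))  # True
print(is_hamiltonian_cycle(square, [0, 2, 1, 3]))  # False: 0-2 is not an edge
```

Note the check is linear in the size of the certificate; it is *finding* the cycle that nobody knows how to do in polynomial time.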

It doesn't say anything about how to find that certificate. Obviously, you can try "all possible certificates" but that can take exponential time; it is not clear whether you will always have to take more than polynomial time to decide Yes or No; this is the P vs NP question.

A problem is NP-hard if being able to solve that problem means being able to solve all problems in NP.

Also see this question: http://stackoverflow.com/questions/210829/what-is-an-np-complete-problem

But really, all these are probably only vague to you; it is worth taking the time to read e.g. Sipser's book. It is a beautiful theory.

ShreevatsaR
A: 

My simplified answer would be: "Computational complexity is the analysis of how much harder a problem becomes when you add more elements."

In that sentence, the word "harder" is deliberately vague because it could refer either to processing time or to memory usage.

benjismith
A: 

In computer science it is not enough to be able to solve a problem; it has to be solvable in a reasonable amount of time. So while in pure mathematics it may be enough to show that a solution exists, in CS you have to find a procedure that actually produces the solution in reasonable time.

That is the simplest way I can think to put it, that may be too simple for your purposes.

James McMahon
+18  A: 

Hoooo, doctoral comp flashback. Okay, here goes.

We start with the idea of a decision problem, a problem for which an algorithm can always answer "yes" or "no." We also need the idea of two models of computer (Turing machine, really): deterministic and non-deterministic. A deterministic computer is the regular computer we always think of; a non-deterministic computer is one that is just like the one we're used to, except that it has unlimited parallelism, so that any time you come to a branch, you spawn a new "process" and examine both sides. As Yogi Berra said, when you come to a fork in the road, take it.

A decision problem is in P if there is a known polynomial-time algorithm to get that answer. A decision problem is in NP if there is a known polynomial-time algorithm for a non-deterministic machine to get the answer.

Problems known to be in P are trivially in NP --- the nondeterministic machine just never troubles itself to fork another process, and acts just like a deterministic one. There are problems that are known to be in neither P nor NP; a simple example is to enumerate all the bit vectors of length n. No matter what, that takes 2^n steps.

(Strictly, a decision problem is in NP if a nondeterministic machine can arrive at an answer in poly-time, and a deterministic machine can verify that the solution is correct in poly-time.)

But there are some problems which are known to be in NP for which no poly-time deterministic algorithm is known; in other words, we know they're in NP, but don't know if they're in P. The traditional example is the decision-problem version of the Traveling Salesman Problem (decision-TSP): given the cities and distances, is there a route that covers all the cities, returning to the starting point, in less than x distance? It's easy in a nondeterministic machine, because every time the nondeterministic traveling salesman comes to a fork in the road, he takes it: his clones head on to the next city they haven't visited, and at the end they compare notes and see if any of the clones took less than x distance.

(Then, the exponentially many clones get to fight it out for which ones must be killed.)

It's not known whether decision-TSP is in P: there's no known poly-time solution, but there's no proof such a solution doesn't exist.
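For a concrete picture, here is a brute-force Python sketch of decision-TSP on a deterministic machine: it plays all the clones one at a time, which is why it takes factorial rather than polynomial time (the distance-matrix encoding is my own illustration):

```python
from itertools import permutations

def decision_tsp(dist, x):
    """Is there a tour visiting every city, returning home, of length < x?

    dist: square matrix of pairwise distances. Brute force tries all
    (n-1)! orderings, standing in for the nondeterministic clones.
    """
    cities = range(1, len(dist))  # fix city 0 as the starting point
    for order in permutations(cities):
        tour = (0,) + order + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < x:
            return True
    return False

d = [[0, 1, 4],
     [1, 0, 2],
     [4, 2, 0]]
print(decision_tsp(d, 8))  # True: tour 0-1-2-0 has length 1 + 2 + 4 = 7
print(decision_tsp(d, 7))  # False: no tour is strictly shorter than 7
```

Each clone's work (summing one tour and comparing to x) is polynomial; it is the number of clones that explodes.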

Now, one more concept: given decision problems A and B, if an algorithm can transform a solution for A into a solution for B in polynomial time, it's said that B is poly-time reducible (or just reducible) to A.

A problem is NP-complete if you can prove that (1) it's in NP, and (2) it's poly-time reducible to a problem already known to be NP-complete. (The hard part was proving the first example of an NP-complete problem: that was done by Stephen Cook in Cook's theorem.)

So really, what it says is that if anyone ever finds a poly-time solution to one NP-complete problem, they've automatically got one for all the NP-complete problems; that will also mean that P=NP.

A problem is NP-hard if and only if it's "at least as" hard as an NP-complete problem. The more conventional Traveling Salesman Problem of finding the shortest route is NP-hard, not strictly NP-complete.

Charlie Martin
Wow, great answer! I've always kinda-sorta understood the concept of NP, but this explanation really cleared up a lot for me -- everything makes more sense now!
mipadi
Thanks! With your cloning example I finally understood what the hell a nondeterministic Turing machine is :D
Mario Ortegón
A: 

Depending on how long you have, maybe it would be best to start at DFA, NDFA, and then show that they are equivalent. Then they understand ND vs. D, and will understand regular expressions a lot better as a nice side effect.
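The subset construction that proves the NDFA-to-DFA equivalence is short enough to sketch in Python (the function, its signature, and the example automaton are all my own illustration):

```python
def nfa_to_dfa(states, alphabet, delta, start, accepting):
    """Subset construction: each DFA state is a frozenset of NFA states.

    delta: dict mapping (state, symbol) -> set of possible next states.
    Returns the DFA transition table, its start state, and its
    accepting states.
    """
    start_set = frozenset([start])
    dfa_delta = {}
    todo = [start_set]
    seen = {start_set}
    while todo:
        current = todo.pop()
        for sym in alphabet:
            nxt = frozenset(s for q in current
                            for s in delta.get((q, sym), set()))
            dfa_delta[(current, sym)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    # A subset-state accepts if it contains any accepting NFA state.
    dfa_accepting = {S for S in seen if S & set(accepting)}
    return dfa_delta, start_set, dfa_accepting

# NFA for strings over {0,1} ending in "01": it "guesses" where the
# suffix starts, which the DFA simulates by tracking all possibilities.
delta = {('a', '0'): {'a', 'b'}, ('a', '1'): {'a'}, ('b', '1'): {'c'}}
dfa, s0, acc = nfa_to_dfa({'a', 'b', 'c'}, '01', delta, 'a', {'c'})

state = s0
for ch in '101':
    state = dfa[(state, ch)]
print(state in acc)  # True: "101" ends in "01"
```

The deterministic machine tracks the *set* of states the nondeterministic one could be in, which is the same "do both branches" idea in miniature.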

John the Statistician
+1  A: 

If you are familiar with the idea of set cardinality, that is, the number of elements in a set, then you could view the question this way: P is like the cardinality of the integers, while NP is a mystery. Is it the same, or is it strictly larger, like the cardinality of all real numbers?

JB King
+1  A: 

This is a comment on Charlie's post.

A problem is NP-complete if you can prove that (1) it's in NP, and (2) show that it's poly-time reducible to a problem already known to be NP-complete.

There is a subtle error with the second condition. Actually, what you need to prove is that a known NP-complete problem (say Y) is polynomial-time reducible to this problem (let's call it problem X).

The reasoning behind this manner of proof is that if you could reduce a known NP-complete problem to this problem and then somehow manage to solve this problem in poly-time, you'd also have a poly-time solution to an NP-complete problem, which would be remarkable (if not impossible), as you'd have resolved the long-standing P = NP problem.

Another way to look at this proof is to see it as using the contrapositive proof technique: from A --> B you may conclude ~B --> ~A. Here, since Y reduces to X, a poly-time algorithm for X would yield a poly-time algorithm for Y; the contrapositive says that if Y cannot be solved in poly-time, then X cannot be solved in poly-time either. On the other hand, if you could solve X in poly-time, then you could solve Y in poly-time as well, and by transitivity you could also solve, in poly-time, every problem that reduces to Y.
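To make the direction of a reduction concrete, here is a toy Python sketch (my own example): a graph on n vertices has an independent set of size k exactly when it has a vertex cover of size n - k, so a solver for Vertex Cover (X) yields a solver for Independent Set (Y):

```python
from itertools import combinations

def has_vertex_cover(edges, vertices, size):
    """Brute-force decision: is there a vertex cover of the given size?"""
    for cover in combinations(vertices, size):
        if all(u in cover or v in cover for u, v in edges):
            return True
    return False

def has_independent_set(edges, vertices, k):
    """Reduce Independent Set to Vertex Cover: a set S is independent
    exactly when its complement touches every edge."""
    return has_vertex_cover(edges, vertices, len(vertices) - k)

# Triangle: largest independent set has size 1, smallest cover size 2.
edges = [(0, 1), (1, 2), (0, 2)]
print(has_independent_set(edges, [0, 1, 2], 1))  # True
print(has_independent_set(edges, [0, 1, 2], 2))  # False
```

The transformation itself is trivially polynomial; only the solver being called is expensive, which is exactly what a poly-time reduction requires.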

I hope my explanation above is clear enough. A good source is Chapter 8 of Algorithm Design by Kleinberg and Tardos, or Chapter 34 of Cormen et al.

Regards,

Raheem