I was reading about the subset-sum problem when I came up with what appears to be a general-purpose algorithm for solving it:

(defun subset-contains-sum (set sum)
  (let ((subsets) (new-subset) (new-sum))
    (dolist (element set)
      ;; Extend every subset seen so far with this element. New conses
      ;; are pushed onto the front of "subsets", so this dolist only
      ;; visits the subsets that existed before this pass.
      (dolist (subset-sum subsets)
        (setf new-subset (cons element (car subset-sum)))
        (setf new-sum (+ element (cdr subset-sum)))
        (when (= new-sum sum)
          (return-from subset-contains-sum new-subset))
        (setf subsets (cons (cons new-subset new-sum) subsets)))
      ;; Also test and record the singleton subset for this element.
      (when (= element sum)
        (return-from subset-contains-sum (list element)))
      (setf subsets (cons (cons (list element) element) subsets)))))

"set" is a list not containing duplicates and "sum" is the sum to search subsets for. "subsets" is a list of cons cells where the "car" is a subset list and the "cdr" is the sum of that subset. New subsets are created from old ones in O(1) time by just cons'ing the element to the front.

I am not sure what its runtime complexity is, but it appears that with each element added to "set", the size of "subsets" doubles, plus one, so it looks to me to be at least quadratic.

I am posting this because my impression had been that NP-complete problems tend to be intractable and that the best one can usually hope for is a heuristic, but this appears to be a general-purpose solution that will, assuming you have the CPU cycles, always give you the correct answer. How many other NP-complete problems can be solved like this one?

+6  A: 

NP-complete problems are solvable, just not in polynomial time (as far as we know). That is, an NP-complete problem may have an O(n*2^n) algorithm that solves it, but, as far as we know, it won't have, for example, an O(n^3) algorithm.

Interestingly, if a quick (polynomial) algorithm were found for any NP-complete problem, then every problem in NP could be solved in polynomial time. This is what the P = NP question is about.

If I understand your algorithm correctly (and this is based more on your comments than on the code), then it is equivalent to the brute-force O(n*2^n) algorithm: there are 2^n subsets, and since you also need to sum each subset, you get O(n*2^n).
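
For concreteness, a minimal sketch of that brute-force enumeration (illustrative names, not from the original post):

(defun all-subsets (set)
  ;; Each subset of the tail appears once without and once with the
  ;; first element, so there are 2^n subsets in total.
  (if (null set)
      (list '())
      (let ((rest (all-subsets (cdr set))))
        (append rest
                (mapcar (lambda (s) (cons (car set) s)) rest)))))

(defun brute-force-subset-sum (set target)
  ;; Summing each of the 2^n non-empty subsets separately costs up to
  ;; n additions per subset, hence the O(n*2^n) bound.
  (find-if (lambda (s) (and s (= (reduce #'+ s) target)))
           (all-subsets set)))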

One more thing about complexity - the O(whatever) only indicates how a particular algorithm scales. You cannot compare two algorithms and say that one is faster than the other based on it alone. Big-O notation doesn't care about implementation details and optimisations - it is possible to write two implementations of the same algorithm, one of which is much faster than the other, even though both are O(n^2). One woman making babies is an O(n) operation, but the chances are that it is going to take a lot longer than most of the O(n*log(n)) sorts you perform. All you can say based on this is that the sort will be slower for very large values of n.

David Johnstone
It is an exhaustive search of all subsets until one with the right sum is found. The list of subsets and the subsets themselves are all linked lists, so they can be created from one another and added to "subsets" in O(1) time.
G.E.M.
Correction: "if a quick (polynomial) algorithm was found for any NP-complete problem, then every NP-complete problem could be solved in polynomial time" should read "then *every* problem in NP could be solved in polynomial time".
Porges
@Porges - thanks for the correction :-)
David Johnstone
+5  A: 

All of the NP-complete problems have solutions. As long as you're willing to spend the time to compute the answer, that is. Just because there's no efficient algorithm doesn't mean there isn't one at all - for example, you could just iterate over every potential solution, and you'll eventually get one. These problems are used all over the place in real-world computing. You just need to be careful about how big a problem you set for yourself if you're going to need exponential time (or worse!) to solve it.
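
To get a feel for how big is too big, a rough back-of-the-envelope loop (assuming, purely for illustration, 10^9 candidate checks per second):

(loop for n in '(20 30 40 50 60)
      ;; 2^n candidates at an assumed 10^9 checks per second.
      do (format t "n = ~2D: 2^n = ~D candidates, ~,1E seconds~%"
                 n (expt 2 n) (/ (expt 2 n) 1e9)))

At n = 60 that comes to over 10^9 seconds - decades - even at a billion checks per second.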

Carl Norum
Yes, my solution is basically an exhaustive search of all possible subsets of "set" until one with the right sum is found, so I guess that isn't close to being an efficient algorithm.
G.E.M.
+3  A: 

I am not sure what its runtime complexity is, but it appears that with each element added to "set", the size of "subsets" doubles, plus one, so it looks to me to be at least quadratic.

If the run-time doubles with each increase in N, you're looking at an O(2^N) algorithm. That's also what I'd expect from visiting all subsets of a set (or all members of the powerset of a set), as that's exactly 2^N members (if you include the empty set).

The fact that adding or not adding an element to all hitherto-seen sets is fast doesn't mean that the total processing is fast.
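
To see the doubling concretely: the "doubles, plus one" recurrence from the question gives 2^n - 1 stored subsets after n elements, as a quick illustrative loop shows:

;; k stored subsets become 2k + 1 after each new element (every old
;; subset extended, plus one new singleton), i.e. 2^n - 1 after n elements.
(loop for n from 1 to 5
      for count = 1 then (+ (* 2 count) 1)
      do (format t "after element ~D: ~D subsets~%" n count))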

Vatine
+2  A: 
huaiyuan
A: 

It's Karp-reducible to a decision problem that can be solved in pseudo-polynomial O(nM) time, where M is the target sum. Using a heap or binary search, the upper bound is log(M*2^M) = log M + log(2^M) = log M + M*log 2. Ergo, time: O(nM).
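
For reference, the standard O(nM) dynamic program looks like this - a minimal sketch, assuming positive integer elements (the function name is illustrative):

(defun subset-sum-p (set target)
  ;; reachable[s] is true iff some subset of the elements processed so
  ;; far sums to s. Table of size M+1, one pass per element: O(n*M).
  (let ((reachable (make-array (1+ target) :initial-element nil)))
    (setf (aref reachable 0) t)
    (dolist (element set)
      ;; Walk downwards so each element is used at most once.
      (loop for s from target downto element
            when (aref reachable (- s element))
              do (setf (aref reachable s) t)))
    (aref reachable target)))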

LarsOn