This great SO answer points to a good sparse solver for Ax=b, but I've got constraints on x such that each element of x is >= 0 and <= N.

Also, A is huge (around 2e6x2e6) but very sparse with <=4 elements per row.

Any ideas/recommendations? I'm looking for something like MATLAB's lsqlin but with huge sparse matrices.

I'm essentially trying to solve the large scale bounded variable least squares problem on sparse matrices:

$$\min_x ||Ax-b||_2^2 \text{ subject to } 0 \le x \le N$$

EDIT: In CVX:

cvx_begin
    variable x(n)
    minimize( norm(A*x-b) );
    subject to 
        x <= N;
        x >= 0;
cvx_end
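(As an aside, the same box-constrained problem can be sketched in Python; this assumes SciPy >= 0.17, whose `scipy.optimize.lsq_linear` accepts sparse matrices and box bounds directly, needing only matrix-vector products. The tiny matrix below is made up for illustration.)

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.optimize import lsq_linear

# Toy instance (made up for illustration) of min ||Ax-b|| s.t. 0 <= x <= N
N = 2.0
A = csr_matrix(np.array([[1.0, 0.0],
                         [0.0, 1.0],
                         [1.0, 1.0]]))
b = np.array([3.0, -1.0, 2.0])

# method='trf' works with sparse A (it only needs matvecs with A and A.T)
res = lsq_linear(A, b, bounds=(0.0, N), method='trf')
print(res.x)  # every component lies in [0, N]
```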
+1  A: 

Your matrix A^T A is positive semi-definite, so your problem is convex; be sure to take advantage of that when setting up your solver.

Most go-to QP solvers are in Fortran and/or non-free; however I've heard good things about OOQP (http://www.mcs.anl.gov/research/projects/otc/Tools/OOQP/OoqpRequestForm.html), though it's a bit of a pain getting a copy.

Yes, I've tried to solve the quadratic form as well (`quadprog`) but it's too big. I'll take a look at OOQP and hope it works for sparse matrices.
Jacob
+1  A: 

Your problem is similar to a nonnegative least-squares problem (NNLS), which can be formulated as

$$\min_x ||Ax-b||_2^2 \text{ subject to } x \ge 0$$,

for which many algorithms exist.
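(For instance, SciPy exposes the classic Lawson-Hanson NNLS algorithm as `scipy.optimize.nnls`; the toy data below is made up for illustration.)

```python
import numpy as np
from scipy.optimize import nnls

# Toy data: the unconstrained least-squares solution has a negative
# component, which NNLS pins to the boundary x >= 0.
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
b = np.array([2.0, -1.0])

x, rnorm = nnls(A, b)
print(x)  # -> [2. 0.]
```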

Actually, your problem can be more or less converted into an NNLS problem: in addition to your original nonnegative variables $x$, create additional variables $x'$ and link them with the linear constraints $x_i+x_i'=N$. The catch is that these additional linear constraints might not be satisfied exactly in the least-squares solution; it may then be appropriate to weight them with a large number.
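(A sketch of that conversion; the weight `w` and the toy data are illustrative assumptions, not part of the answer above. The linking rows $x_i+x_i'=N$ are appended with weight $\sqrt{w}$ and the augmented system handed to NNLS.)

```python
import numpy as np
from scipy.optimize import nnls

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([3.0, -1.0, 2.0])
m, n = A.shape
N, w = 2.0, 1e4  # w is a large penalty weight (an assumption; tune as needed)

# Variables z = [x, x'] >= 0; weighted rows enforce x + x' ~ N.
A_aug = np.block([
    [A,                      np.zeros((m, n))],
    [np.sqrt(w) * np.eye(n), np.sqrt(w) * np.eye(n)],
])
b_aug = np.concatenate([b, np.sqrt(w) * N * np.ones(n)])

z, _ = nnls(A_aug, b_aug)
x = z[:n]   # original variables; z[n:] are the slacks x'
print(x)    # approximately within [0, N]
```

As the answer notes, the bound is only enforced approximately: components at the upper bound overshoot N by roughly O(1/w).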

Fanfan
Could you expand on this?
Jacob
The NNLS problem, which is close to but not exactly your problem, can for example be solved with the MATLAB routine lsqnonneg.m.
Fanfan
Actually, the Optimization Toolbox features the lsqlin.m routine, which can solve your problem exactly and even offers a "large-scale algorithm"; maybe you could test it on your matrix?
Fanfan
Other toolboxes worth trying: TSNNLS (Cantarella et al) for NNLS and SLS (by Adlers et al) for your problem
Fanfan
+1  A: 

You are trying to solve least squares with box constraints. Standard sparse least-squares algorithms include LSQR and, more recently, LSMR. These only require you to apply matrix-vector products. To add in the constraints, note that if you are in the interior of the box (none of the constraints are "active"), you proceed with whatever method you chose. For each active constraint, the next iteration will either deactivate the constraint or constrain you to move along the constraint hyperplane. With some (conceptually fairly simple) suitable modifications to the algorithm you choose, you can implement these constraints.
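(A minimal sketch in that spirit, using projected gradient descent rather than the active-set LSQR modification described above; like LSQR/LSMR it needs only matrix-vector products, so sparse matrices work unchanged. The toy data is made up for illustration.)

```python
import numpy as np
from scipy.sparse import csr_matrix

def box_ls_projected_gradient(A, b, N, iters=500, seed=0):
    """Minimize ||A @ x - b||^2 subject to 0 <= x <= N (a sketch)."""
    n = A.shape[1]
    # Estimate L = lambda_max(A^T A) by power iteration, for the step size.
    v = np.random.default_rng(seed).standard_normal(n)
    for _ in range(50):
        v = A.T @ (A @ v)
        v /= np.linalg.norm(v)
    step = 1.0 / np.linalg.norm(A.T @ (A @ v))
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (A @ x - b)              # gradient of (1/2)||Ax-b||^2
        x = np.clip(x - step * grad, 0.0, N)  # gradient step, then project
    return x

# Toy instance (made up for illustration):
A = csr_matrix(np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]))
b = np.array([3.0, -1.0, 2.0])
print(box_ls_projected_gradient(A, b, N=2.0))  # -> [2. 0.]
```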

Generally however, you can use any convex optimization package. I have personally solved this exact type of problem using the Matlab package CVX, which uses SDPT3/SeDuMi for a backend. CVX is merely a very convenient wrapper around these semidefinite program solvers.

Victor Liu
Yes, I realize it's convex. I've tried CVX but it runs out of memory. Since CVX has been recommended I've posted my CVX solution.
Jacob
How large were your matrices?
Jacob
I used small dense matrices (about 100x100). You are using the sparse matrices in Matlab right? I believe that you should not run out of memory if you use spconvert() to construct the matrix.
Victor Liu
Yes, but some MATLAB functions do not support sparse matrices, and CVX seems to fail for such matrices as well.
Jacob
A: 

How about CVXOPT? It works with sparse matrices, and it seems that some of the cone programming solvers may help:

http://abel.ee.ucla.edu/cvxopt/userguide/coneprog.html#quadratic-cone-programs

This is a simple modification of the code in the doc above, to solve your problem:

from cvxopt import matrix, solvers

# Toy data from the CVXOPT documentation example
A = matrix([ [ .3, -.4,  -.2,  -.4,  1.3 ],
             [ .6, 1.2, -1.7,   .3,  -.3 ],
             [-.3,  .0,   .6, -1.2, -2.0 ] ])
b = matrix([ 1.5, .0, -1.2, -.7, .0])
N = 2.0
m, n = A.size

# Build the n x n identity, then stack -I over I so that
# G*x <= h encodes 0 <= x <= N
I = matrix(0.0, (n, n))
I[::n+1] = 1.0
G = matrix([-I, I])
h = matrix(n*[0.0] + n*[N])

# ||Ax-b||^2 as a QP: minimize (1/2) x'(A'A)x - (A'b)'x
x = solvers.coneqp(A.T*A, -A.T*b, G, h)['x']
print(x)

CVXOPT supports sparse matrices, so it may be useful for you.

Alejandro