views: 1728
answers: 7

Hi, I'm trying to calculate the inverse matrix in Java.

I'm following the adjoint method: first compute the adjoint (cofactor) matrix, then transpose it, and finally multiply it by the reciprocal of the determinant.

It works when the matrix is not too big. I've checked that for matrices up to 12x12 the result is produced quickly. However, when the matrix is bigger than 12x12, the time needed to complete the calculation increases exponentially.

The matrix I need to invert is 19x19, and it takes too much time. The method that consumes the most time is the one used to calculate the determinant.

The code I'm using is:

public static double determinant(double[][] input)
{
    int rows = nRows(input);       // number of rows in the matrix
    int columns = nColumns(input); // number of columns in the matrix
    double determinant = 0;

    if ((rows == 1) && (columns == 1))
    {
        return input[0][0];
    }

    int sign = 1;

    for (int column = 0; column < columns; column++)
    {
        double[][] submatrix = getSubmatrix(input, rows, columns, column);
        determinant = determinant + sign * input[0][column] * determinant(submatrix);
        sign *= -1;
    }

    return determinant;
}

Does anybody know how to calculate the determinant of a large matrix more efficiently? If not, does anyone know how to calculate the inverse of a large matrix using another algorithm?

Thanks

+7  A: 

Exponentially? No, I believe matrix inversion is O(N^3).

I would recommend using LU decomposition to solve a matrix equation. You don't have to solve for the determinant when you use it.

Better yet, look into a package to help you. JAMA comes to mind.

12x12 or 19x19 are not large matrices. It's common to solve problems with tens or hundreds of thousands of degrees of freedom.

Here's a working example of how to use JAMA. You have to have the JAMA JAR in your CLASSPATH when you compile and run:

package linearalgebra;

import Jama.LUDecomposition;
import Jama.Matrix;

public class JamaDemo
{
    public static void main(String[] args)
    {
        double [][] values = {{1, 1, 2}, {2, 4, -3}, {3, 6, -5}};  // each array is a row in the matrix
        double [] rhs = { 9, 1, 0 }; // rhs vector
        double [] answer = { 1, 2, 3 }; // this is the answer that you should get.

        Matrix a = new Matrix(values);
        a.print(10, 2);
        LUDecomposition luDecomposition = new LUDecomposition(a);
        luDecomposition.getL().print(10, 2); // lower matrix
        luDecomposition.getU().print(10, 2); // upper matrix

        Matrix b = new Matrix(rhs, rhs.length);
        Matrix x = luDecomposition.solve(b); // solve Ax = b for the unknown vector x
        x.print(10, 2); // print the solution
        Matrix residual = a.times(x).minus(b); // calculate the residual error
        double rnorm = residual.normInf(); // get the max error (yes, it's very small)
        System.out.println("residual: " + rnorm);
    }
}

Here's the same problem solved using Apache Commons Math, per quant_dev's recommendation:

package linearalgebra;

import org.apache.commons.math.linear.Array2DRowRealMatrix;
import org.apache.commons.math.linear.ArrayRealVector;
import org.apache.commons.math.linear.DecompositionSolver;
import org.apache.commons.math.linear.LUDecompositionImpl;
import org.apache.commons.math.linear.RealMatrix;
import org.apache.commons.math.linear.RealVector;

public class LinearAlgebraDemo
{
    public static void main(String[] args)
    {
        double [][] values = {{1, 1, 2}, {2, 4, -3}, {3, 6, -5}};
        double [] rhs = { 9, 1, 0 };

        RealMatrix a = new Array2DRowRealMatrix(values);
        System.out.println("a matrix: " + a);
        DecompositionSolver solver = new LUDecompositionImpl(a).getSolver();

        RealVector b = new ArrayRealVector(rhs);
        RealVector x = solver.solve(b);
        System.out.println("solution x: " + x);;
        RealVector residual = a.operate(x).subtract(b);
        double rnorm = residual.getLInfNorm();
        System.out.println("residual: " + rnorm);
    }
}

Adapt these to your situation.

duffymo
Thanks for your answer. You're right, not exponentially; what I meant is that from size 12x12 onwards the time it takes to compute the determinant increases dramatically. I've tried JAMA, but I can't get it to work (I'm pretty new to Java). I'll also look into LU decomposition. Thanks
dedalo
A lab I worked in a while back solved matrices with *billions* of degrees of freedom. On hundreds or thousands of processors, naturally.
notJim
I ran one problem for a whole calendar month. It was on a Unix workstation, before multiprocessors were common. We had to checkpoint the result once a week to make sure we didn't lose too much time if it crashed and had to be restarted.
duffymo
+2  A: 

Matrix inversion is computationally very intensive. As duffymo answered LU is a good algorithm, and there are other variants (QR, for instance).

Unfortunately you can't get rid of the heavy calculations... and maybe the bottleneck is the getSubmatrix method if you are not using an optimized library.

Also, special matrix structures (bandedness, symmetry, diagonality, sparsity) have a great impact on performance if taken into account in the calculations. Your mileage may vary...
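
As a concrete illustration of how much structure can buy you (a sketch only, under the assumption of a well-conditioned tridiagonal system that needs no pivoting), a banded system with just three diagonals can be solved in O(n) with the Thomas algorithm instead of O(n^3):

// Solves a tridiagonal system in O(n) with the Thomas algorithm.
// Conventions: lower[i] is the entry below the diagonal in row i (lower[0] unused),
// diag[i] is the diagonal entry, upper[i] is the entry above the diagonal (upper[n-1] unused).
// No pivoting is done, so the system is assumed to be diagonally dominant or otherwise safe.
public static double[] solveTridiagonal(double[] lower, double[] diag, double[] upper, double[] rhs)
{
    int n = diag.length;
    double[] c = new double[n]; // modified super-diagonal
    double[] d = new double[n]; // modified right-hand side

    c[0] = upper[0] / diag[0];
    d[0] = rhs[0] / diag[0];
    for (int i = 1; i < n; i++)
    {
        double denominator = diag[i] - lower[i] * c[i - 1];
        c[i] = (i < n - 1) ? upper[i] / denominator : 0.0;
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / denominator;
    }

    // Back substitution.
    double[] x = new double[n];
    x[n - 1] = d[n - 1];
    for (int i = n - 2; i >= 0; i--)
    {
        x[i] = d[i] - c[i] * x[i + 1];
    }
    return x;
}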

Pablo Rodriguez
+2  A: 

You NEVER want to compute an inverse matrix this way. In fact, computing the inverse at all is usually to be avoided, as it is almost always better to use a factorization such as LU.

Computation of the determinant using recursive computations is a numerically obscene thing to do. It turns out that a better choice is to use an LU factorization to compute a determinant. But, if you are going to bother to compute LU factors, then why would you possibly want to compute the inverse? You have already done the difficult work by computing the LU factors.

Once you have LU factors, you can use them to do back and forward substitution.
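
For instance, with the Apache Commons Math classes shown elsewhere in this thread, a rough sketch (reusing the same 3x3 example data, purely for illustration) of getting both the determinant and the solution out of a single LU factorization, without ever forming the inverse:

package linearalgebra;

import org.apache.commons.math.linear.Array2DRowRealMatrix;
import org.apache.commons.math.linear.ArrayRealVector;
import org.apache.commons.math.linear.LUDecomposition;
import org.apache.commons.math.linear.LUDecompositionImpl;
import org.apache.commons.math.linear.RealMatrix;
import org.apache.commons.math.linear.RealVector;

public class LUDeterminantDemo
{
    public static void main(String[] args)
    {
        RealMatrix a = new Array2DRowRealMatrix(
                new double[][] {{1, 1, 2}, {2, 4, -3}, {3, 6, -5}});
        LUDecomposition lu = new LUDecompositionImpl(a);

        // The determinant is a by-product of the factorization: the product
        // of U's diagonal entries, times the sign of the row permutation.
        System.out.println("det(A) = " + lu.getDeterminant());

        // The same factors solve Ax = b by forward and back substitution,
        // so the explicit inverse is never needed.
        RealVector b = new ArrayRealVector(new double[] { 9, 1, 0 });
        RealVector x = lu.getSolver().solve(b);
        System.out.println("x = " + x);
    }
}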

As far as a 19x19 matrix being big, it is not even close to what I'd think of as big.

woodchips
+1  A: 

It is tough to beat Matlab at their game. They are also cautious about precision. If you have 2.0 and 2.00001 as pivots - watch out! Your answer might end up being very imprecise. Also, check out Python's implementation (it is in numpy / scipy somewhere ...)

Hamish Grubijan
+2  A: 

Your algorithm to compute a determinant is indeed exponential. The basic problem is that you are computing straight from the definition, which leads to an exponential number of subdeterminants to compute. You really need to transform the matrix first, before computing either its determinant or its inverse. (I thought of explaining dynamic programming, but this problem cannot be solved by dynamic programming, as the number of subproblems is exponential too.)

LU decomposition, as recommended by others, is a good choice. If you are new to matrix calculation, you might also want to look at Gaussian elimination to compute determinants and inverses, as that might be a bit easier to comprehend at first.

And one thing to remember in matrix inversion is numerical stability, since you are dealing with floating-point numbers. All the good algorithms include permutations of rows and/or columns to choose suitable pivots, as they are called. In Gaussian elimination, at each step you want to permute the rows so that the element with the largest absolute value is chosen as the pivot, since that is the most stable choice.
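
To make the O(n^3) route concrete, here is a rough, self-contained sketch (not production code) of computing a determinant by Gaussian elimination with partial pivoting: reduce to upper-triangular form, multiply the diagonal, and flip the sign on every row swap. It keeps the same signature as the method in the question:

public static double determinant(double[][] input)
{
    int n = input.length;
    // Work on a copy so the caller's matrix is left untouched.
    double[][] a = new double[n][];
    for (int i = 0; i < n; i++)
    {
        a[i] = input[i].clone();
    }

    double det = 1.0;
    for (int col = 0; col < n; col++)
    {
        // Partial pivoting: move the row with the largest absolute value
        // in this column into the pivot position for numerical stability.
        int pivotRow = col;
        for (int row = col + 1; row < n; row++)
        {
            if (Math.abs(a[row][col]) > Math.abs(a[pivotRow][col]))
            {
                pivotRow = row;
            }
        }
        if (a[pivotRow][col] == 0.0)
        {
            return 0.0; // singular matrix
        }
        if (pivotRow != col)
        {
            double[] tmp = a[pivotRow];
            a[pivotRow] = a[col];
            a[col] = tmp;
            det = -det; // each row swap flips the sign of the determinant
        }

        det *= a[col][col];

        // Eliminate the entries below the pivot.
        for (int row = col + 1; row < n; row++)
        {
            double factor = a[row][col] / a[col][col];
            for (int k = col; k < n; k++)
            {
                a[row][k] -= factor * a[col][k];
            }
        }
    }
    return det;
}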

jk
+3  A: 

I would recommend using Apache Commons Math 2.0 for this. JAMA is a dead project; Commons Math 2.0 actually took JAMA's linear algebra code and developed it further.

quant_dev
Didn't know that, quant_dev. Thanks for the info.
duffymo
+1  A: 

Do you have to have an exact solution? An approximate solver (Gauss-Seidel is very performant and easy to implement) will likely work for you, and will converge very quickly. 19x19 is quite a small matrix. I think the Gauss-Seidel code that I used could solve a 128x128 matrix in the blink of an eye (but don't quote me on that, it's been a while).
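
For what it's worth, a bare-bones Gauss-Seidel iteration looks roughly like this (a sketch only: it assumes the matrix is diagonally dominant or otherwise well enough behaved for the iteration to converge, and the tolerance and iteration cap are arbitrary choices):

// Sketch of Gauss-Seidel for Ax = b; convergence is only guaranteed for
// well-behaved matrices such as strictly diagonally dominant ones.
public static double[] gaussSeidel(double[][] a, double[] b, int maxIterations, double tolerance)
{
    int n = b.length;
    double[] x = new double[n]; // start from the zero vector

    for (int iteration = 0; iteration < maxIterations; iteration++)
    {
        double maxChange = 0.0;
        for (int i = 0; i < n; i++)
        {
            double sum = b[i];
            for (int j = 0; j < n; j++)
            {
                if (j != i)
                {
                    sum -= a[i][j] * x[j]; // uses already-updated values for j < i
                }
            }
            double updated = sum / a[i][i];
            maxChange = Math.max(maxChange, Math.abs(updated - x[i]));
            x[i] = updated;
        }
        if (maxChange < tolerance)
        {
            break; // converged
        }
    }
    return x;
}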

notJim