views: 220

answers: 4

Hi, I've been doing some research on matrix inversion (linear algebra), and I wanted to use C++ template programming for the algorithm. What I found out is that there are a number of methods, like Gauss-Jordan elimination or LU decomposition, and I found the function lu_factorize in the C++ Boost library.

  1. I want to know whether there are other methods, and which one is better (advantages and disadvantages), from a programmer's or a mathematician's perspective?
  2. If there are no faster methods, is there a ready-made inverse function in the Boost library? I searched a lot and didn't find one.
+1  A: 

I'd suggest you take a look at the Eigen source code.

Vitor Py
+3  A: 

As you mention, the standard approach is to perform an LU factorization and then solve for the identity. This can be implemented using the LAPACK library, for example, with dgetrf (factor) and dgetri (compute inverse). Most other linear algebra libraries have roughly equivalent functions.

There are some slower methods that degrade more gracefully when the matrix is singular or nearly singular, and are used for that reason. For example, the Moore-Penrose pseudoinverse is equal to the inverse if the matrix is invertible, and often useful even if the matrix is not invertible; it can be calculated using a Singular Value Decomposition.

Stephen Canon
yes man, but isn't LAPACK a Fortran lib?
ismail marmoush
@ismail: sure, but that doesn't actually matter. It's a *library*; it doesn't matter whether it's implemented in C or Fortran or Prolog or Scheme, you can call it just fine from C or C++ code.
Stephen Canon
A: 

You might want to use a C++ wrapper around LAPACK. LAPACK is very mature code: well-tested, optimized, etc.

One such wrapper is the Intel Math Kernel Library.

John D. Cook
IMKL is a LAPACK implementation (and more), not a wrapper.
Staffan
I didn't mean to belittle LAPACK. I meant that it is a C++ interface to code that I presume is written in Fortran, unless Intel rewrote everything.
John D. Cook
yes, LAPACK is written in Fortran, but man, IMKL is $399!! :D
ismail marmoush
[ATLAS](http://math-atlas.sourceforge.net/) is open source and pretty good. [ACML](http://developer.amd.com/cpu/Libraries/acml/Pages/default.aspx) is free as far as I can tell, but requires an additional redistribution agreement if you want to distribute it.
Staffan
thanks Staffan for your help. I'm now looking at all of the libs and learning
ismail marmoush
+1  A: 

Please Google or check Wikipedia for the buzzwords below.

First, make sure you really want the inverse. Solving a system does not require inverting a matrix. Matrix inversion can be performed by solving n systems, with unit basis vectors as right-hand sides. So I'll focus on solving systems, because that is usually what you want.

It depends on what "large" means. Methods based on decomposition must generally store the entire matrix. Once you have decomposed the matrix, you can solve for multiple right-hand sides at once (and thus invert the matrix easily). I won't discuss factorization methods here, as you're likely to know them already.

Please note that when a matrix is large, its condition number is very likely to be huge (equivalently, its reciprocal condition number is close to zero), which means that the matrix is "numerically non-invertible". Remedy: preconditioning. Check Wikipedia for this; the article is well written.

If the matrix is large, you don't want to store it. If it has a lot of zeros, it is a sparse matrix. Either it has structure (e.g. band diagonal, block matrix, ...), in which case there are specialized methods for solving systems involving such matrices, or it has no such structure.

When you're faced with a sparse matrix with no obvious structure, or with a matrix you don't want to store, you must use iterative methods. They only involve matrix-vector multiplications, which don't require a particular form of storage: you can compute the coefficients when you need them, or store non-zero coefficients the way you want, etc.

The methods are:

  • For symmetric positive definite matrices: the conjugate gradient method. In short, solving Ax = b amounts to minimizing 1/2 x^T A x - x^T b.
  • The biconjugate gradient method for general matrices. It can be unstable, though.
  • Minimum residual methods, or better, GMRES. Please check the Wikipedia articles for details. You may want to experiment with the number of iterations before restarting the algorithm.

And finally, you can perform some sort of factorization of sparse matrices, using specially designed algorithms that minimize the number of non-zero elements to store.

Alexandre C.