views: 83
answers: 3
How is R able to find eigenvectors for the following matrix? The eigenvalues are 2, 2, so finding the eigenvectors requires solving (A - lambda*I)x = 0, i.e. solve(matrix(c(0,1,0,0),2,2)), which fails because the matrix is singular and has no inverse.

> eigen(matrix(c(2,1,0,2),2,2))
$values
[1] 2 2

$vectors
     [,1]          [,2]
[1,]    0  4.440892e-16
[2,]    1 -1.000000e+00

> solve(matrix(c(0,1,0,0),2,2))
Error in solve.default(matrix(c(0, 1, 0, 0), 2, 2)) : 
Lapack routine dgesv: system is exactly singular

Both routines essentially do the same thing: they find x such that (A - lambda*I)x = 0 without computing the inverse of A - lambda*I. Clearly (0, 1) is a solution, but I can't understand why solve did not come up with it, nor how to solve it manually.
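
For reference, a quick check confirms that (0, 1) is indeed an eigenvector for lambda = 2:

> A <- matrix(c(2,1,0,2), 2, 2)
> (A - 2 * diag(2)) %*% c(0, 1)
     [,1]
[1,]    0
[2,]    0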

A: 

Maybe it's using one of the algorithms listed here?

http://en.wikipedia.org/wiki/List_of_numerical_analysis_topics#Eigenvalue_algorithms

According to http://stat.ethz.ch/R-manual/R-devel/library/base/html/eigen.html, eigen seems to use the LAPACK routine at http://netlib.org/lapack/double/dgeev.f (if you have a square matrix which is not symmetric).

Note: you're right that A - lambda * I is singular if lambda is an eigenvalue, but in order to find eigenvectors, one does not need to invert A - lambda * I or to solve an equation y = (A - lambda * I) * x (with y not being the null vector). It is sufficient to find non-zero vectors x which satisfy

(A - lambda * I) * x = 0

One strategy is to find a non-singular transformation matrix T such that (A - lambda * I) * T is an upper triangular matrix (i.e. all elements below the diagonal are zero). Because A - lambda * I is singular, T can be constructed such that the last element on the diagonal is zero (or even more diagonal elements, if the multiplicity of the eigenvalue is larger than one).

A vector z whose only non-zero element is the last one (i.e. z = (0, ..., 0, 1)) will then give the zero vector when multiplied with (A - lambda * I) * T. So one has:

0 = ((A - lambda * I) * T) * z

or in other words, T*z is an eigenvector of A.
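
In practice, one convenient way to compute such a null-space vector numerically (a sketch of one possible approach, not necessarily what dgeev does internally) is the singular value decomposition: the right singular vectors belonging to (near-)zero singular values of A - lambda * I span its null space. In R:

A <- matrix(c(2,1,0,2), 2, 2)
lambda <- 2
M <- A - lambda * diag(2)

s <- svd(M)
tol <- max(dim(M)) * max(s$d) * .Machine$double.eps
v <- s$v[, s$d < tol, drop = FALSE]  # right singular vectors with ~zero singular value

v        # proportional to (0, 1), i.e. the eigenvector
M %*% v  # numerically the zero vector, so (A - lambda*I) * v = 0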

Andre Holzner
It's still not clear to me. That is what solve does: it finds non-zero vectors x satisfying (A - lambda*I)*x = 0 without computing an inverse (using LU/QR decomposition methods). If solve fails for A - lambda*I, then how is eigen able to find possible values of x? They are doing the same thing, aren't they?
What I can see from the help text: if you're doing `solve(<matrix>)`, this is asking `solve` to invert the matrix (for which there is no solution if `<matrix>` is singular, as `A - lambda*I` is if `lambda` is an eigenvalue of `A`). Let's assume that `solve` and `eigen` decompose `A` and `A - lambda * I` into a suitable product of matrices. It is after this decomposition that the two algorithms do different things: `solve` must try to invert the two parts of the product separately, while `eigen` tries to find non-zero vectors which give zero when multiplied with one of the product members.
Andre Holzner
(disclaimer on my previous comment: I don't know exactly what R does internally; the comment just tries to explain a possible way of solving the problem)
Andre Holzner
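
To illustrate the distinction in the comment above (a sketch of the calling conventions only, not of the internals): solve(M) asks for the inverse of M and solve(M, b) solves M x = b; both go through LAPACK's dgesv and fail for an exactly singular M, even though non-trivial solutions of M x = 0 exist:

M <- matrix(c(0,1,0,0), 2, 2)
try(solve(M))           # inversion fails: system is exactly singular
try(solve(M, c(0, 0)))  # solving M x = b fails the same way for a singular M
M %*% c(0, 1)           # yet (0, 1) solves M x = 0 exactly
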
+1  A: 

You asked for an eigen decomposition, you got an eigen decomposition.

Had you asked for rcond(), the reciprocal condition number of the matrix, or for kappa(), the condition number itself, you would have gotten the appropriate response.

For your second example:

> mat <- matrix(c(0,1,0,0), 2, 2)
> kappa(mat)
[1] Inf
> 
> rcond(mat)
[1] 0
>

For your first example, there is actually no problem:

> mat <- matrix(c(2,1,0,2), 2, 2)
> kappa(mat)
[1] 1.772727
> 
> rcond(mat)
[1] 0.5714286
> 

See e.g. this previous question on SO for more.

Dirk Eddelbuettel
A: 

They are not doing the same thing. Just take a look at the Wikipedia link, and look at the code of dgeev.f: you will see that a simple solve of (A - lambda*I)x = 0 is not performed.

Hendrik Pon