Hi, I was hoping someone could point me to an efficient formula for inverting a 4x4 affine transform matrix. Currently my code uses cofactor expansion and allocates a temporary array for each cofactor. It's easy to read, but it's slower than it should be.

Note, this isn't homework and I know how to work it out manually using 4x4 co-factor expansion, it's just a pain and not really an interesting problem for me. Also I've googled and come up with a few sites that give you the formula already (http://www.euclideanspace.com/maths/algebra/matrix/functions/inverse/fourD/index.htm). However this one could probably be optimized further by pre-computing some of the products. I'm sure someone has come up with the "best" formula for this at one point or another?

Thanks.

A: 

I believe the only way to compute an inverse is to solve n times the equation: A x = y, where y spans the unit vectors, i.e., the first one is (1,0,0,0), the second is (0,1,0,0), etc.

(Using the cofactors (Cramer's rule) is a bad idea, unless you want a symbolic formula for the inverse.)

Most linear algebra libraries will allow you to solve those linear systems, and even to compute an inverse. Example in python (using numpy):

import numpy as np
from numpy.linalg import inv

A = np.random.rand(4, 4)  # your 4x4 matrix
inv(A)  # here you go
Olivier
+2  A: 

IIRC you can greatly shrink the code and running time by precomputing a bunch (12?) of 2x2 determinants. Split the matrix in half vertically and compute every 2x2 determinant in both the upper and lower halves. Each of these smaller determinants appears in a term of the bigger computation, and they each get reused.

Also, don't use a separate determinant function - reuse the sub-determinants you computed for the adjoint to get the determinant.

Oh, just found this.

There are some improvements you can make knowing it's a certain kind of transform too.
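As a sketch of this idea (the function name is mine, and the exact grouping of the twelve 2x2 sub-determinants follows one common layout -- six from the top two rows, six from the bottom two; treat it as illustrative, not the only way to organize the terms):

```python
def inverse4x4(m):
    # m is a flat list of 16 numbers in row-major order.
    (a00, a01, a02, a03,
     a10, a11, a12, a13,
     a20, a21, a22, a23,
     a30, a31, a32, a33) = m

    # Twelve 2x2 determinants: six from the top two rows (s*),
    # six from the bottom two rows (c*).
    s0 = a00*a11 - a10*a01
    s1 = a00*a12 - a10*a02
    s2 = a00*a13 - a10*a03
    s3 = a01*a12 - a11*a02
    s4 = a01*a13 - a11*a03
    s5 = a02*a13 - a12*a03

    c5 = a22*a33 - a32*a23
    c4 = a21*a33 - a31*a23
    c3 = a21*a32 - a31*a22
    c2 = a20*a33 - a30*a23
    c1 = a20*a32 - a30*a22
    c0 = a20*a31 - a30*a21

    # The determinant reuses the same sub-determinants -- no separate pass.
    det = s0*c5 - s1*c4 + s2*c3 + s3*c2 - s4*c1 + s5*c0
    if det == 0:
        raise ValueError("matrix is singular")
    r = 1.0 / det

    # Adjugate divided by the determinant, row-major.
    return [
        ( a11*c5 - a12*c4 + a13*c3) * r,
        (-a01*c5 + a02*c4 - a03*c3) * r,
        ( a31*s5 - a32*s4 + a33*s3) * r,
        (-a21*s5 + a22*s4 - a23*s3) * r,

        (-a10*c5 + a12*c2 - a13*c1) * r,
        ( a00*c5 - a02*c2 + a03*c1) * r,
        (-a30*s5 + a32*s2 - a33*s1) * r,
        ( a20*s5 - a22*s2 + a23*s1) * r,

        ( a10*c4 - a11*c2 + a13*c0) * r,
        (-a00*c4 + a01*c2 - a03*c0) * r,
        ( a30*s4 - a31*s2 + a33*s0) * r,
        (-a20*s4 + a21*s2 - a23*s0) * r,

        (-a10*c3 + a11*c1 - a12*c0) * r,
        ( a00*c3 - a01*c1 + a02*c0) * r,
        (-a30*s3 + a31*s1 - a32*s0) * r,
        ( a20*s3 - a21*s1 + a22*s0) * r,
    ]
```

Every one of the 12 sub-determinants is used in the determinant and again in several adjugate entries, which is where the savings over naive cofactor expansion come from.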

phkahler
Thanks, this saves me a lot of time!
Budric
Very fast, good explanation. Appears to work (haven't run it against a full regression test). Thanks again.
Budric
+1 for the link; however, I think it's a mistake to compute those inverses symbolically... you must realize how many unnecessary multiplications/additions you are performing. It's probably ok as long as this part of the code is not the bottleneck.
Olivier
I use 4x4s for a lot of things, so I prefer the generalized inverse. Like I said, you can do better with specific types of transform. The linked paper is still useful for doing the 3x3 inverse the questioner seems to be using. And you can do even better still if you know the 3x3 is a pure rotation - IIRC its inverse is the transpose.
phkahler
+3  A: 

You should be able to exploit the fact that the matrix is affine to speed things up over a full inverse. Namely, if your matrix looks like this

A = [ M   b  ]
    [ 0   1  ]

where A is 4x4, M is 3x3, b is 3x1, and the bottom row is (0,0,0,1), then

inv(A) = [ inv(M)   -inv(M) * b ]
         [   0            1     ]
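A minimal numpy sketch of that block formula (the function name and the assumption that the last row is exactly (0,0,0,1) are mine):

```python
import numpy as np

def affine_inverse(A):
    # Assumes A is 4x4 with last row (0, 0, 0, 1).
    M = A[:3, :3]
    b = A[:3, 3]
    Minv = np.linalg.inv(M)      # only a 3x3 inverse is needed
    out = np.eye(4)
    out[:3, :3] = Minv
    out[:3, 3] = -Minv @ b
    return out
```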

Depending on your situation, it may be faster to compute the result of inv(A) * x instead of actually forming inv(A). In that case, things simplify to

inv(A) * [x] = [ inv(M) * (x - b) ]
         [1]   [        1         ]

where x is a 3x1 vector (usually a 3D point).
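In code, that amounts to solving one 3x3 system per point instead of ever forming the inverse (a sketch, with names of my choosing):

```python
import numpy as np

def apply_inverse(A, x):
    # Computes inv(A) applied to the 3D point x without forming inv(A):
    # inv(A) @ [x; 1] = [ inv(M) @ (x - b); 1 ]
    M = A[:3, :3]
    b = A[:3, 3]
    return np.linalg.solve(M, x - b)
```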

Lastly, if M represents a rotation (i.e. its columns are orthonormal), then you can use the fact that inv(M) = transpose(M). Then computing the inverse of A is just a matter of subtracting the translation component, and multiplying by the transpose of the 3x3 part.
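For that rotation-plus-translation case, the whole inverse reduces to a transpose and one matrix-vector product (again a sketch; the function name and the orthonormality assumption on the 3x3 block are mine):

```python
import numpy as np

def rigid_inverse(A):
    # Assumes the 3x3 block R is a pure rotation (orthonormal columns),
    # so inv(R) == R.T -- no linear solve needed at all.
    R = A[:3, :3]
    b = A[:3, 3]
    out = np.eye(4)
    out[:3, :3] = R.T
    out[:3, 3] = -R.T @ b
    return out
```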

Note that whether or not the matrix is orthonormal is something that you should know from the analysis of the problem. Checking it during runtime would be fairly expensive; although you might want to do it in debug builds to check that your assumptions hold.

Hope all of that is clear...

celion
So the first formula you got from "blockwise inversion" (http://en.wikipedia.org/wiki/Invertible_matrix)? Or is there another mental trick? I'm not quite sure about the inv(A) * x = inv(M) * (x - b). First, they're different sizes - do you remove a row from A on the left or add a row on the right? Second, I'm not sure how that equation comes about. Third, I'm not sure what you're solving for in that equation. Olivier keeps mentioning not computing inverses symbolically, but I don't know what that means - I need the inverse to do inverse transform. If you have time I'd like to know.
Budric
I edited the inv(A) * x formula to make the dimensions clearer. The first formula was from http://en.wikipedia.org/wiki/Affine_transformation. Forgetting about affine transforms for a minute: in general, when you're solving A*x = b, you want the solution inv(A)*b, but often you don't need to actually form inv(A), just compute what the product *would* be. Back to affine transforms: in 3D applications, you might not actually need the inverse of the matrix, you just want the inverse transform *acting on* (multiplying) a vector. If that's the case, then using the formula might be faster.
celion
Even if you do need to store the matrix inverse, you can use the fact that it's affine to reduce the work computing the inverse, since you only need to invert a 3x3 matrix instead of 4x4. And if you know that it's a rotation, computing the transpose is *much* faster than computing the inverse, and in this case, they're equivalent.
celion
And here's a better explanation of what I meant by computing inv(A) * x: http://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/ (the author is a frequent poster on SO).
celion
Clear now. Very interesting.
Budric