views: 1194

answers: 7
It seems that many projects slowly come upon a need to do matrix math, and fall into the trap of first building some vector classes and slowly adding in functionality until they get caught building a half-assed custom linear algebra library, and depending on it.

I'd like to avoid that while not building in a dependence on some tangentially related library (e.g. OpenCV, OpenSceneGraph).

What are the commonly used matrix math/linear algebra libraries out there, and why would one decide to use one over another? Are there any that would be advised against using for some reason? I am specifically using this in a geometric/time context *(2, 3, 4 dim)*, but may be using higher-dimensional data in the future.

I'm looking for differences with respect to any of: API, speed, memory use, breadth/completeness, narrowness/specificness, extensibility, and/or maturity/stability.

(Edit/note: There's a bit of information in the answers, but it's scattered and without context. I'm unsure if I should pull it together in another answer, but I'm still not particularly clear on the benefits or downsides to any of these choices over another)

+4  A: 

I've heard good things about Eigen and NT2, but haven't personally used either. There's also Boost.UBLAS, which I believe is getting a bit long in the tooth. The developers of NT2 are building the next version with the intention of getting it into Boost, so that might count for something.

My linear algebra needs don't extend beyond the 4x4 matrix case, so I can't comment on advanced functionality; I'm just pointing out some options.

Jeff Hardy
In my experience (larger matrices), Boost.UBLAS is used more. However, when I looked into it, I didn't like it (mainly because of the documentation) so I concentrated on Eigen. Eigen has a [geometry module](http://eigen.tuxfamily.org/dox-devel/group__Geometry__Module.html), but I haven't used it myself.
Jitse Niesen
Eigen is apparently used by ROS (Willow Garage), Celestia, KOffice, and libmv. I see some chatter about UBLAS, but had a hard time coming across projects that advertise using it. Ditto for NT2. Can you elaborate on what good things you've heard?
Catskul
It was in a discussion on the Boost mailing list about adding a modern LinAlg library to Boost - Eigen and NT2 were both mentioned as possible candidates, but only the NT2 developers expressed interest in pursuing it. Both libraries seemed decent; as you said, Eigen is a little more popular, and also more C++-ish; NT2 is designed to mimic MATLAB as much as possible.
Jeff Hardy
+12  A: 

There are quite a few projects that have settled on the Generic Graphics Toolkit for this. The GMTL in there is nice - it's quite small, very functional, and been used widely enough to be very reliable. OpenSG, VRJuggler, and other projects have all switched to using this instead of their own hand-rolled vector/matrix math.

I've found it quite nice - it does everything via templates, so it's very flexible, and very fast.


Edit:

After the comments discussion, and edits, I thought I'd throw out some more information about the benefits and downsides to specific implementations, and why you might choose one over the other, given your situation.

GMTL -

Benefits: Simple API, specifically designed for graphics engines. Includes many primitive types geared towards rendering (such as planes, AABBs, quaternions with multiple interpolation methods, etc.) that aren't in any other packages. Very low memory overhead, quite fast, easy to use.

Downsides: API is very focused specifically on rendering and graphics. Doesn't include general purpose (NxM) matrices, matrix decomposition and solving, etc, since these are outside the realm of traditional graphics/geometry applications.
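
For flavor, here's a minimal sketch of the GMTL style, written from memory of its documentation - treat the exact generator names (identity, setTrans, xform) as assumptions to verify against the GMTL headers:

```cpp
#include <gmtl/gmtl.h>

int main() {
    // Build a transform in place; GMTL mutates via free functions.
    gmtl::Matrix44f xform;
    gmtl::identity(xform);
    gmtl::setTrans(xform, gmtl::Vec3f(1.0f, 2.0f, 3.0f));

    // Apply the transform to a point: result = xform * p
    gmtl::Point3f p(0.0f, 0.0f, 0.0f), result;
    gmtl::xform(result, xform, p);
    return 0;
}
```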

Eigen -

Benefits: Clean API, fairly easy to use. Includes a Geometry module with quaternions and geometric transforms. Low memory overhead. Full, highly performant solving of large NxN matrices and other general purpose mathematical routines.

Downsides: May be a bit larger scope than you are wanting (?). Fewer geometric/rendering specific routines when compared to GMTL (ie: Euler angle definitions, etc).
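
As a rough illustration (a sketch assuming Eigen 3's current API, not code from the original discussion), both the general-purpose and geometry sides look like this:

```cpp
#include <Eigen/Dense>
#include <Eigen/Geometry>
#include <iostream>

int main() {
    // General-purpose side: solve A x = b for a small dense system.
    Eigen::Matrix3d A;
    A <<  2, -1,  0,
         -1,  2, -1,
          0, -1,  2;
    Eigen::Vector3d b(1, 0, 1);
    Eigen::Vector3d x = A.colPivHouseholderQr().solve(b);

    // Geometry module: rotate a point 90 degrees about the z-axis.
    Eigen::Quaterniond q(Eigen::AngleAxisd(1.5707963, Eigen::Vector3d::UnitZ()));
    Eigen::Vector3d p = q * Eigen::Vector3d(1, 0, 0);

    std::cout << x.transpose() << "\n" << p.transpose() << "\n";
    return 0;
}
```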

IMSL -

Benefits: Very complete numeric library. Very, very fast (supposedly the fastest solver). By far the largest, most complete mathematical API. Commercially supported, mature, and stable.

Downsides: Cost - not inexpensive. Very few geometric/rendering specific methods, so you'll need to roll your own on top of their linear algebra classes.

NT2 -

Benefits: Provides syntax that is more familiar if you're used to MATLAB. Provides full decomposition and solving for large matrices, etc.

Downsides: Mathematical, not rendering focused. Probably not as performant as Eigen.

LAPACK -

Benefits: Very stable, proven algorithms. Been around for a long time. Complete matrix solving, etc. Many options for obscure mathematics.

Downsides: Not as highly performant in some cases. Ported from Fortran, with an odd API.
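
To make the "odd API" concrete, here's a hedged sketch of calling LAPACK's dgesv directly from C++ - everything is column-major and passed by pointer, and the trailing-underscore symbol name is a common but platform-dependent convention:

```cpp
#include <iostream>

// Fortran-style LAPACK entry point (name mangling may differ by platform).
extern "C" void dgesv_(int* n, int* nrhs, double* a, int* lda,
                       int* ipiv, double* b, int* ldb, int* info);

int main() {
    int n = 2, nrhs = 1, lda = 2, ldb = 2, info = 0;
    int ipiv[2];
    double a[4] = {3, 1, 1, 2};  // column-major 2x2: [3 1; 1 2]
    double b[2] = {9, 8};        // right-hand side, overwritten with the solution
    dgesv_(&n, &nrhs, a, &lda, ipiv, b, &ldb, &info);
    if (info == 0)
        std::cout << "x = (" << b[0] << ", " << b[1] << ")\n";  // (2, 3)
    return 0;
}
```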

Personally, for me, it comes down to a single question - how are you planning to use this? If your focus is just on rendering and graphics, I like Generic Graphics Toolkit, since it performs well and supports many useful rendering operations out of the box without having to implement your own. If you need general purpose matrix solving (ie: SVD or LU decomposition of large matrices), I'd go with Eigen, since it handles that, provides some geometric operations, and is very performant with large matrix solutions. You may need to write more of your own graphics/geometric operations (on top of their matrices/vectors), but that's not horrible.

Reed Copsey
Did you evaluate other libraries before deciding on GMTL? A superficial comparison led me to believe that Eigen was better supported, but that's on the basis of reviewing the respective websites. Are you aware of any specific advantages of one over the other?
Catskul
Eigen works well, too. It was not as mature at the time I did my investigation, but I believe it would be a good option at this point. GMTL has been used fairly widely, and was very mature and solid when I decided to use it.
Reed Copsey
I guess to pare down my question to the very crux: did you make your choice subjectively, like "this looks better", or were there specific features (API, speed, memory use, breadth, narrowness, extensibility) that made the difference? I suppose maturity falls under these criteria, but if maturity were the only metric, I imagine you would have selected a BLAS- or LAPACK-based option.
Catskul
I chose this after trying multiple options, and based it off: performance, usability, and low runtime/compile time overhead. Eigen looks much better now than it did at that point, so I can't judge between them. However, I've been very happy with GMTL for our uses.
Reed Copsey
That's part of why I like GMTL, and used it. It just felt very natural to use, and was very, very easy to work with. It also supported everything I needed, in this case, since I was just worried about directly handling geometric transformation and quaternion rotations.
Reed Copsey
Oops, accidentally deleted my comment: I've started using Eigen and come across a compelling reason not to use it. In their quest to be efficient they have taken some pretty extreme steps, such that if you don't follow them, you will crash. You can't even pass by value without crashing. To me this implies that the API is not clean. eigen.tuxfamily.org/dox/UnalignedArrayAssert.html/
Catskul
+3  A: 

I'm new to this topic, so I can't say a whole lot, but BLAS is pretty much the standard in scientific computing. BLAS is actually an API standard with many implementations. I'm honestly not sure which implementations are most popular or why.

If you want to also be able to do common linear algebra operations (solving systems, least squares regression, decomposition, etc.) look into LAPACK.
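
For a taste of what the BLAS interface looks like in practice, here's a small sketch using the C bindings (cblas) - any implementation's cblas.h (reference BLAS, ATLAS, GotoBLAS, MKL) should accept it:

```cpp
#include <cblas.h>
#include <iostream>

int main() {
    // C = alpha * A * B + beta * C, with 2x2 row-major matrices.
    double A[4] = {1, 2, 3, 4};
    double B[4] = {5, 6, 7, 8};
    double C[4] = {0, 0, 0, 0};
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2,     // m, n, k
                1.0, A, 2,   // alpha, A, lda
                B, 2,        // B, ldb
                0.0, C, 2);  // beta, C, ldc
    std::cout << C[0] << " " << C[1] << "\n"
              << C[2] << " " << C[3] << "\n";  // 19 22 / 43 50
    return 0;
}
```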

notJim
+1  A: 

Okay, I think I know what you're looking for. It appears that GGT is a pretty good solution, as Reed Copsey suggested.

Personally, we rolled our own little library, because we deal with rational points a lot - lots of rational NURBS and Beziers.

It turns out that most 3D graphics libraries do computations with projective points that have no basis in projective math, because that's what gets you the answer you want. We ended up using Grassmann points, which have a solid theoretical underpinning and decreased the number of point types. Grassmann points are basically the same computations people are using now, with the benefit of a robust theory. Most importantly, it makes things clearer in our minds, so we have fewer bugs. Ron Goldman wrote a paper on Grassmann points in computer graphics called "On the Algebraic and Geometric Foundations of Computer Graphics".

Not directly related to your question, but an interesting read.
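
To sketch the underlying idea (a hypothetical illustration of homogeneous/weighted points, not tfinniga's actual library): a point carries a weight, a vector is the w == 0 case, and weighted combinations work uniformly on both:

```cpp
// Hypothetical weighted-point type: w != 0 is a point, w == 0 a vector.
struct HPoint {
    double x, y, z, w;
};

// Weighted combination; weights add, so point - point yields a vector.
HPoint combine(const HPoint& a, double s, const HPoint& b, double t) {
    return {s * a.x + t * b.x, s * a.y + t * b.y,
            s * a.z + t * b.z, s * a.w + t * b.w};
}

// Project a weighted point back to Cartesian coordinates.
HPoint toCartesian(const HPoint& p) {
    return {p.x / p.w, p.y / p.w, p.z / p.w, 1.0};
}
```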

tfinniga
It's intentionally open-ended in that I'm unaware of what the trade-offs are. It's probably fair to say that geometry is our main concern, but the dimensionality of the geometry is not clear. Currently it is 2/3 (2 + time) and could hypothetically be quite high (3 dims + time + multi-dim costmaps).
Catskul
I'm in agreement with the question. For instance, a lot of applications of this sort need realtime (consistent time behavior) performance, while many others are just fine giving up consistency and/or speed for accuracy.
T.E.D.
So are you saying that of the libraries you investigated, none took care of NURBS and Beziers? Any particular reason for not taking one of the existing libraries and building the NURBS and Bezier support in alongside it?
Catskul
What I was trying to say is that rational NURBS and Beziers use rational control points much more than most 3d applications, so we were making more mistakes. Typically most 3d apps only have vanilla 3d points and vectors until after going through the perspective transform. Many of our algorithms have to be able to correctly handle weighted/rational/projective and cartesian points, going back and forth, etc.
tfinniga
A: 

I'll add a vote for Eigen: I ported a lot of code (3D geometry, linear algebra, and differential equations) from different libraries to this one - improving both performance and code readability in almost all cases.

One advantage that wasn't mentioned: it's very easy to use SSE with Eigen, which significantly improves performance of 2D-3D operations (where everything can be padded to 128 bits).
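
A minimal sketch of the padding idea: hold 3D data in a fixed-size Vector4f (16 bytes, naturally aligned), leaving the fourth component as padding so that componentwise arithmetic maps onto SSE registers:

```cpp
#include <Eigen/Dense>

// Linear interpolation over 3D data stored as 4-float vectors;
// Eigen can vectorize this fixed-size expression with SSE.
Eigen::Vector4f lerp(const Eigen::Vector4f& a,
                     const Eigen::Vector4f& b, float t) {
    return a + t * (b - a);
}
```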

ima
The whole "if you do this then make sure to..." thing strikes me as a bit of a red flag. So far I've twice run into these issues, and I just started using it. I really was hoping not to burden future developers with knowing all kinds of idiosyncrasies of each included library: specifically the alignment issues where it crashes if you don't use certain macros whenever you have members, and the fact that they've spread functionality for individual classes across multiple headers. Alone it might not prevent me from choosing it, but it's sent up a bit of a red flag.
Catskul
Alignment and that macro only matter if you use SSE, which is by no means required. And if you do use SIMD, those issues will arise whatever library you use. At least Eigen doesn't just crash, but provides meaningful error messages which point directly to the problem.
ima
And there is an easy way to avoid alignment macro - use pointers or references as members.
ima
I don't think that is true. I used no special SSE options and got several crashes after using it with STL containers. Yes, I know it gives you helpful messages, and yes, I know there are special instructions, but that's my point: I don't want to burden other developers with special instructions for each included library. The "don't pass by value" thing, for example, is just too much.
Catskul
I just found out that the latest development branch has some defines you can use to turn off the alignment and avoid the related issues.
Catskul
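
For readers landing here, a short sketch of the rules being debated (these follow Eigen's documented guidance for fixed-size vectorizable types; see the UnalignedArrayAssert page linked above):

```cpp
#include <Eigen/Dense>
#include <Eigen/StdVector>
#include <vector>

struct Body {
    Eigen::Vector4f position;        // fixed-size, 16-byte-aligned member...
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW  // ...so `new Body` must stay aligned
};

// STL containers holding such types need the aligned allocator:
std::vector<Eigen::Vector4f, Eigen::aligned_allocator<Eigen::Vector4f> > points;

// And fixed-size Eigen objects should be passed by (const) reference,
// never by value, to avoid the unaligned-array assertion:
float firstCoord(const Eigen::Vector4f& v) { return v.x(); }
```
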
+3  A: 
Catskul
Regarding the Eigen aligned asserts: to get high performance out of SSE (1, 2, 3, or 4) operations for small sets of data, you absolutely need aligned data. The unaligned load/store operations are much slower. The decision between aligned or unaligned load/store also takes time. Any "general purpose" implementation would have a really tough time doing the right thing for everyone, unless they separated the interface into "aligned" and "unaligned" operations as well - and then it's again simply not very general purpose.
MadKeithV
+2  A: 

If you are looking for high performance matrix/linear algebra/optimization on Intel processors, I'd look at Intel's MKL library.

MKL is carefully optimized for fast run-time performance - much of it based on the very mature BLAS/LAPACK Fortran standards. And its performance scales with the number of cores available. Hands-free scalability with available cores is the future of computing, and I wouldn't use any math library for a new project that doesn't support multi-core processors.

Very briefly, it includes:

  1. Basic vector-vector, vector-matrix, and matrix-matrix operations
  2. Matrix factorization (LU decomposition, Hermitian, sparse)
  3. Least squares fitting and eigenvalue problems
  4. Sparse linear system solvers
  5. Non-linear least squares solver (trust regions)
  6. Plus signal processing routines such as FFT and convolution
  7. Very fast random number generators (Mersenne Twister)
  8. Much more...

A downside is that the MKL API can be quite complex depending on the routines that you need. You could also take a look at their IPP (Integrated Performance Primitives) library which is geared toward high performance image processing operations, but is nevertheless quite broad.
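
Since MKL exposes the standard BLAS/LAPACK interfaces, the call sites look like any other implementation; only the threading control shown here (mkl_set_num_threads) is MKL-specific. A hedged sketch:

```cpp
#include <mkl.h>

// C = A * B for n x n row-major matrices, using however many
// cores MKL is told to use for subsequent calls.
void multiply(const double* A, const double* B, double* C, int n) {
    mkl_set_num_threads(4);
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n, 1.0, A, n, B, n, 0.0, C, n);
}
```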

Paul

CenterSpace Software, .NET math libraries, centerspace.net

Paul