uBLAS vs. Matrix Template Library (MTL4)

dodol · Jul 1, 2009 · Viewed 11.1k times

I'm writing software for hyperbolic partial differential equations in C++. Almost all of the notation is in terms of vectors and matrices. On top of that, I need a linear algebra solver. And yes, the vector and matrix sizes can vary considerably (from, say, 1000 up to sizes that can only be handled by distributed-memory computing, e.g. clusters or similar architectures). If I lived in utopia, I'd have a linear solver that scales well across clusters, GPUs, and multicores.

When thinking about the data structure that should represent the variables, I came across Boost.uBLAS and MTL4. Both libraries are BLAS level 3 compatible; MTL4 implements sparse solvers and is much faster than uBLAS. Neither has built-in support for multicore processors, let alone parallelization for distributed-memory computation. On the other hand, the development of MTL4 rests on the sole effort of two developers (at least as I understand it), and I'm sure there is a reason uBLAS made it into the Boost library. Furthermore, Intel's MKL includes an example of binding their structures to uBLAS. I'd like to tie my data and software to a data structure that will be rock solid, developed, and maintained for a long period of time.

Finally, the question: what is your experience with uBLAS and/or MTL4, and what would you recommend?

thanx, mightydodol

Answer

stephan · Jul 1, 2009

With your requirements, I would probably go for Boost.uBLAS. Indeed, a good deployment of uBLAS should be roughly on par with MTL4 in terms of speed.
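For a sense of what the asker's vector-and-matrix code would look like in plain uBLAS, here is a minimal sketch (not from the original answer; the system and its values are made up, and the LU routines used are uBLAS's own reference implementation, suitable only for small problems):

```cpp
#include <iostream>
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/vector.hpp>
#include <boost/numeric/ublas/lu.hpp>
#include <boost/numeric/ublas/io.hpp>

namespace ublas = boost::numeric::ublas;

int main() {
    const std::size_t n = 4;
    ublas::matrix<double> A(n, n);
    ublas::vector<double> b(n);

    // Fill a small diagonally dominant system (placeholder data).
    for (std::size_t i = 0; i < n; ++i) {
        b(i) = 1.0;
        for (std::size_t j = 0; j < n; ++j)
            A(i, j) = (i == j) ? 4.0 : 1.0;
    }

    // Matrix-vector product via uBLAS expression templates.
    ublas::vector<double> y = ublas::prod(A, b);

    // Solve A x = b with uBLAS's reference LU (fine for small systems only).
    ublas::matrix<double> LU(A);                   // lu_factorize works in place
    ublas::permutation_matrix<std::size_t> pm(n);
    ublas::vector<double> x(b);
    if (ublas::lu_factorize(LU, pm) == 0)          // 0 means the factorization succeeded
        ublas::lu_substitute(LU, pm, x);           // x now holds the solution

    std::cout << y << "\n" << x << std::endl;
    return 0;
}
```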

The reason uBLAS can keep up is that bindings exist for ATLAS (and hence shared-memory parallelization that you can tune efficiently for your machine), as well as for vendor-tuned implementations such as the Intel Math Kernel Library or HP MLIB.

With these bindings, uBLAS with a well-tuned ATLAS / BLAS library doing the math should be fast enough. If you link against a given BLAS / ATLAS, you should be roughly on par with MTL4 linked against the same BLAS / ATLAS via the compiler flag -DMTL_HAS_BLAS, and most likely faster than MTL4 without BLAS according to their own observations (see here, for example, where GotoBLAS outperforms MTL4).
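The uBLAS bindings themselves live in a separate Boost sandbox project, so purely as an illustration of the idea, here is a sketch that hands uBLAS's contiguous row-major storage directly to a tuned CBLAS dgemm (the header name and link flags depend on whether you use ATLAS, MKL, or GotoBLAS, so treat those as assumptions):

```cpp
#include <boost/numeric/ublas/matrix.hpp>
#include <cblas.h>   // ATLAS-style C interface; MKL ships mkl_cblas.h instead

namespace ublas = boost::numeric::ublas;

// C = A * B, with uBLAS owning the storage and the tuned BLAS doing the work.
void blas_gemm(const ublas::matrix<double>& A,
               const ublas::matrix<double>& B,
               ublas::matrix<double>& C)
{
    const int m = static_cast<int>(A.size1());
    const int k = static_cast<int>(A.size2());
    const int n = static_cast<int>(B.size2());
    C.resize(m, n, false);

    // ublas::matrix<double> is row-major and contiguous by default,
    // so its raw storage can be passed straight to cblas_dgemm.
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k,
                1.0, &A.data()[0], k,
                     &B.data()[0], n,
                0.0, &C.data()[0], n);
}
```

The real bindings essentially automate this kind of plumbing (with type checking on top); you just link against whichever BLAS you picked, e.g. -lcblas -latlas or the MKL link line.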

To sum up, speed should not be your deciding factor as long as you are willing to use some BLAS library. Usability and support are more important. You have to decide whether MTL4 or uBLAS is better suited for you. I tend towards uBLAS, given that it is part of Boost, whereas MTL4 currently only supports BLAS selectively. You might also find this slightly dated comparison of scientific C++ packages interesting.

One big BUT: given your requirements (extremely big matrices), I would probably skip the "syntactic sugar" of uBLAS or MTL4 and call the "metal" C interface of BLAS / LAPACK directly. Another advantage is that it should then be easier to switch to ScaLAPACK (distributed-memory LAPACK, which I have never used) for bigger problems. But that's just me... Just to be clear: for household-sized problems, I would not suggest calling a BLAS library directly.
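For the "metal" route, a direct LAPACK call looks roughly like this (a sketch assuming the classic Fortran dgesv_ symbol with 32-bit integers; the exact prototype, name mangling, and link line depend on the LAPACK distribution you use):

```cpp
#include <iostream>

// Classic Fortran LAPACK entry point: solves A * X = B for a general dense A.
// The integer width (e.g. MKL ILP64 builds) can differ per library, so check
// this prototype against the LAPACK you actually link.
extern "C" void dgesv_(const int* n, const int* nrhs, double* a, const int* lda,
                       int* ipiv, double* b, const int* ldb, int* info);

int main() {
    const int n = 3, nrhs = 1;
    // LAPACK expects column-major storage: a[i + j*n] holds A(i, j).
    double a[9] = { 4, 1, 1,    // column 0
                    1, 4, 1,    // column 1
                    1, 1, 4 };  // column 2
    double b[3] = { 1, 1, 1 };  // right-hand side, overwritten with the solution
    int ipiv[3];
    int info = 0;

    dgesv_(&n, &nrhs, a, &n, ipiv, b, &n, &info);   // a and b are overwritten

    if (info == 0)
        std::cout << "x = " << b[0] << " " << b[1] << " " << b[2] << std::endl;
    else
        std::cout << "dgesv failed, info = " << info << std::endl;
    return 0;
}
```

Compile with something like g++ solve.cpp -llapack -lblas; ScaLAPACK's pdgesv follows the same calling style, with array descriptors for the distributed blocks added on top.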