Performance of Java matrix math libraries?

We are computing something whose runtime is bound by matrix operations. (Some details below, if you're interested.) That experience prompted the following question:

Do folks have experience with the performance of Java libraries for matrix math (e.g., multiply, inverse, etc.)? For example:

  • JAMA
  • COLT
  • Apache commons math

I searched and found nothing.


    Details of our speed comparison:

    We are using Intel FORTRAN (ifort (IFORT) 10.1 20070913). We have reimplemented the computation in Java (1.6) using Apache commons math 1.2 matrix ops, and it agrees with the Fortran results to all digits of accuracy. (We have reasons for wanting it in Java.) (Java doubles, Fortran real*8.) Fortran: 6 minutes; Java: 33 minutes, on the same machine. jvisualvm profiling shows much time spent in RealMatrixImpl.{getEntry,isValidCoordinate} (methods which appear to be gone in the unreleased Apache commons math 2.0, but 2.0 is no faster). The Fortran version uses Atlas BLAS routines (dpotrf, etc.).

    Obviously this could depend on our code in each language, but we believe most of the time is in equivalent matrix operations.

    In several other computations that do not involve libraries, Java has not been much slower, and sometimes much faster.
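    The profiler finding above (time spent in RealMatrixImpl.getEntry and isValidCoordinate) can be illustrated with a small pure-Java sketch. This is not Commons Math code; the class and method names are hypothetical. It contrasts a multiply that goes through a bounds-checked per-element accessor (mimicking getEntry/isValidCoordinate) with one that works on raw double[][] arrays directly:

```java
// Hypothetical sketch: shows how a bounds-checked per-element accessor
// can dominate a matrix multiply compared with raw double[][] access.
public class AccessorOverheadDemo {

    // Mimics an isValidCoordinate-style check on every single access.
    public static double getEntry(double[][] m, int row, int col) {
        if (row < 0 || row >= m.length || col < 0 || col >= m[0].length) {
            throw new IllegalArgumentException("bad coordinate");
        }
        return m[row][col];
    }

    // Multiply where every element read goes through the checked accessor.
    public static double[][] multiplyChecked(double[][] a, double[][] b) {
        int n = a.length, m = b[0].length, k = b.length;
        double[][] c = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++) {
                double sum = 0.0;
                for (int p = 0; p < k; p++)
                    sum += getEntry(a, i, p) * getEntry(b, p, j);
                c[i][j] = sum;
            }
        return c;
    }

    // Multiply on raw arrays, ikj loop order (row-major friendly).
    public static double[][] multiplyRaw(double[][] a, double[][] b) {
        int n = a.length, m = b[0].length, k = b.length;
        double[][] c = new double[n][m];
        for (int i = 0; i < n; i++) {
            double[] ai = a[i], ci = c[i];
            for (int p = 0; p < k; p++) {
                double aip = ai[p];
                double[] bp = b[p];
                for (int j = 0; j < m; j++)
                    ci[j] += aip * bp[j];
            }
        }
        return c;
    }

    public static void main(String[] args) {
        int n = 400;
        java.util.Random rnd = new java.util.Random(42);
        double[][] a = new double[n][n], b = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                a[i][j] = rnd.nextDouble();
                b[i][j] = rnd.nextDouble();
            }
        long t0 = System.nanoTime();
        multiplyChecked(a, b);
        long t1 = System.nanoTime();
        multiplyRaw(a, b);
        long t2 = System.nanoTime();
        System.out.printf("checked: %d ms, raw: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }
}
```

    The gap between the two timings gives a rough feel for the per-access overhead; actual numbers depend on the JVM and on how well the JIT inlines the accessor.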


    Just to add my 2 cents: I've compared some of these libraries. I multiplied a 3000-by-3000 matrix of doubles with itself. The results are as follows.

    Using multithreaded ATLAS with C/C++, Octave, Python and R, the time taken was around 4 seconds.

    Using Jama with Java, the time taken was 50 seconds.

    Using Colt and Parallel Colt with Java, the time taken was 150 seconds!

    Using JBLAS with Java, the time taken was again around 4 seconds as JBLAS uses multithreaded ATLAS.

    So for me it was clear that the Java libraries didn't perform too well. However if someone has to code in Java, then the best option is JBLAS. Jama, Colt and Parallel Colt are not fast.
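    A large part of the gap reported above comes from multithreading: the ATLAS-backed libraries use all cores, while Jama and similar pure-Java libraries run single-threaded. A minimal sketch (an assumed harness, not the original benchmark code) of partitioning a multiply by rows across threads, using a smaller n than the 3000x3000 case so it finishes quickly:

```java
import java.util.Random;
import java.util.stream.IntStream;

// Sketch: serial vs parallel row-partitioned matrix multiply,
// illustrating why multithreaded backends pull ahead.
public class ParallelMultiplyDemo {

    public static double[][] random(int n, long seed) {
        Random rnd = new Random(seed);
        double[][] m = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                m[i][j] = rnd.nextDouble();
        return m;
    }

    // Computes one output row; each row is written by exactly one
    // thread, so no synchronization is needed.
    static void multiplyRow(double[][] a, double[][] b, double[][] c, int i) {
        int n = a.length;
        double[] ci = c[i];
        for (int p = 0; p < n; p++) {
            double aip = a[i][p];
            double[] bp = b[p];
            for (int j = 0; j < n; j++)
                ci[j] += aip * bp[j];
        }
    }

    public static double[][] multiply(double[][] a, double[][] b,
                                      boolean parallel) {
        int n = a.length;
        double[][] c = new double[n][n];
        IntStream rows = IntStream.range(0, n);
        (parallel ? rows.parallel() : rows)
                .forEach(i -> multiplyRow(a, b, c, i));
        return c;
    }

    public static void main(String[] args) {
        int n = 600; // the comparison above used n = 3000
        double[][] a = random(n, 1);
        long t0 = System.nanoTime();
        multiply(a, a, false); // square the matrix, as above
        long t1 = System.nanoTime();
        multiply(a, a, true);
        long t2 = System.nanoTime();
        System.out.printf("serial: %d ms, parallel: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }
}
```

    Even with all cores busy, a naive Java loop will not match ATLAS, which also blocks for cache and uses SIMD; the sketch only shows the threading dimension of the difference.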


    I'm the author of Java Matrix Benchmark (JMatBench) and I'll give my thoughts on this discussion.

    There are significant differences between the Java libraries, and while there is no clear winner across the whole range of operations, there are a few clear leaders, as can be seen in the latest performance results (October 2013).

    If you are working with "large" matrices and can use native libraries, then the clear winner (about 3.5x faster) is MTJ with system optimised netlib. If you need a pure Java solution then MTJ, OjAlgo, EJML and Parallel Colt are good choices. For small matrices EJML is the clear winner.

    The libraries I did not mention showed significant performance issues or were missing key features.


    I'm the main author of jblas and want to point out that I released version 1.0 in late December 2009. I worked a lot on the packaging: you can now just download a "fat jar" that bundles the ATLAS and JNI libraries for Windows, Linux, and Mac OS X, in 32- and 64-bit builds (64-bit is not yet available for Windows). This way you get native performance just by adding the jar file to your classpath. Check it out at http://jblas.org!
