Does Java JIT cheat when running JDK code?

I was benchmarking some code, and I could not get it to run as fast as with java.math.BigInteger, even when using the exact same algorithm. So I copied the java.math.BigInteger source into my own package and tried this:

//import java.math.BigInteger;

import java.util.Random;

public class MultiplyTest {
    public static void main(String[] args) {
        Random r = new Random(1);
        long tm = 0, count = 0, result = 0;
        for (int i = 0; i < 400000; i++) {
            int s1 = 400, s2 = 400;
            BigInteger a = new BigInteger(s1 * 8, r), b = new BigInteger(s2 * 8, r);
            long tm1 = System.nanoTime();
            BigInteger c = a.multiply(b);
            if (i > 100000) {
                tm += System.nanoTime() - tm1;
                count++;
            }
            result += c.bitLength();
        }
        System.out.println((tm / count) + "nsec/mul");
        System.out.println(result);
    }
}

When I run this (JDK 1.8.0_144-b01 on macOS) it outputs:

12089nsec/mul
2559044166

When I run it with the import line uncommented:

4098nsec/mul
2559044166

The JDK version of BigInteger is almost three times as fast as my copied version, even though it's the exact same code.

I've examined the bytecode with javap, and compared compiler output when running with options:

-Xbatch -XX:-TieredCompilation -XX:+PrintCompilation -XX:+UnlockDiagnosticVMOptions 
-XX:+PrintInlining -XX:CICompilerCount=1

and both versions seem to generate the same code. So is HotSpot using some precomputed optimisations that I can't benefit from in my own code? I always understood that it doesn't. What explains this difference?
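
For reference, a typical javap invocation for such a bytecode comparison (class name assumed to be MultiplyTest) is:

javap -c -p MultiplyTest

where -c disassembles the methods and -p includes private members.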


Yes, the HotSpot JVM is kind of "cheating": it has special versions of some BigInteger methods that you won't find in the Java source. These methods are called JVM intrinsics.

In particular, BigInteger.multiplyToLen is an intrinsic method in HotSpot. There is a special hand-coded assembly implementation in the JVM source base, but only for the x86-64 architecture.

You may disable this intrinsic with the -XX:-UseMultiplyToLenIntrinsic option to force the JVM to use the pure Java implementation. In this case the performance will be similar to that of your copied code.
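
For example, to run the benchmark above with the intrinsic turned off (assuming the class compiles as MultiplyTest with the import line uncommented):

java -XX:-UseMultiplyToLenIntrinsic MultiplyTest

With the intrinsic disabled, the time per multiplication should move close to the figure measured for the copied class.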

P.S. Here is a list of other HotSpot intrinsic methods.
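
A quick way to see which intrinsic-related switches your own JVM exposes (this lists the controlling flags, not the methods themselves) is:

java -XX:+PrintFlagsFinal -version | grep Intrinsic

On JDK 8 the output should include UseMultiplyToLenIntrinsic among many others.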


In Java 8 this is indeed an intrinsic; here is a slightly modified version of your test method:

private static BigInteger test() {
    Random r = new Random(1);
    BigInteger c = null;
    for (int i = 0; i < 400000; i++) {
        int s1 = 400, s2 = 400;
        BigInteger a = new BigInteger(s1 * 8, r), b = new BigInteger(s2 * 8, r);
        c = a.multiply(b);
    }
    return c;
}

Running this with:

 java -XX:+UnlockDiagnosticVMOptions  
      -XX:+PrintInlining 
      -XX:+PrintIntrinsics 
      -XX:CICompilerCount=2 
      -XX:+PrintCompilation   
       <YourClassName>

This will print lots of lines and one of them will be:

 java.math.BigInteger::multiplyToLen (216 bytes)   (intrinsic)

In Java 9, on the other hand, that method no longer seems to be an intrinsic itself; instead it calls a method that is an intrinsic:

 @HotSpotIntrinsicCandidate
 private static int[] implMultiplyToLen

So running the same code under Java 9 (with the same parameters) will reveal:

java.math.BigInteger::implMultiplyToLen (216 bytes)   (intrinsic)

Underneath, it's the same code for the method, just with slightly different naming.
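
To make the delegation pattern concrete, here is a minimal, hypothetical sketch of the same shape, with made-up names (sum/implSum stand in for multiplyToLen/implMultiplyToLen; the @HotSpotIntrinsicCandidate annotation itself is JDK-internal, so it only appears in the comments):

public class IntrinsicPatternSketch {

    // Entry point in plain Java: do the argument checking here...
    static int sum(int[] a) {
        if (a == null) {
            throw new NullPointerException("a must not be null");
        }
        return implSum(a);
    }

    // ...and delegate to an "impl" method. In the JDK, this is the method
    // annotated with @HotSpotIntrinsicCandidate, which HotSpot may replace
    // with hand-written assembly. When no intrinsic is available, the plain
    // Java body below runs as-is.
    private static int implSum(int[] a) {
        int s = 0;
        for (int v : a) {
            s += v;
        }
        return s;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[] {1, 2, 3})); // prints 6
    }
}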
