How can I measure time with microsecond precision in Java?

I saw on the Internet that I was supposed to use System.nanoTime(), but that doesn't work for me - it gives me the time with millisecond precision. I just need the microseconds before and after my function executes so that I know how long it takes. I'm using Windows XP.

Basically, I have code that performs from 1 million up to 10 million insertions into a Java LinkedList. The problem is that I can't get the timing right; sometimes the bigger list appears to take less time to fill than the smaller one.

Here's an example:

import java.util.LinkedList;

class test
{
    public static void main(String[] args)
    {
        for(int k=1000000; k<=10000000; k+=1000000)
        {
            System.out.println(k);
            LinkedList<Integer> aux = new LinkedList<Integer>();
            //need something here to see the start time
            for(int i=0; i<k; i++)
                aux.addFirst(10000);
            //need something here to see the end time
            //print here the difference between both times
        }
    }
}
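
In other words, at the two marked spots I want something like this (just a sketch of what I'm after, assuming System.nanoTime() really does have microsecond resolution on my machine):

long start = System.nanoTime();   // start time, in nanoseconds
for(int i=0; i<k; i++)
    aux.addFirst(10000);
long end = System.nanoTime();     // end time
// nanoTime() reports nanoseconds, so divide by 1000 to get microseconds
System.out.println((end - start) / 1000 + " microseconds");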

I did this many times - there was an outer loop doing it 20 times for each k - but the results aren't good. Sometimes it takes less time to make 10 million insertions than 1 million, because I'm not getting the correct measured time with what I'm using now (System.nanoTime()).

Edit 2: Yes, I'm using the Sun JVM.

Edit 3: I may have done something wrong in the code, I'll see if changing it does what I want.

Edit 4: My mistake, it seems System.nanoTime() works. Phew.


My guess is that since System.nanoTime() uses the "most precise available system timer", which apparently has only millisecond precision on your system, you can't get anything better.
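
One quick way to probe what resolution you're actually getting (just a diagnostic sketch, not authoritative) is to look at the smallest non-zero step nanoTime() ever reports:

class NanoResolution
{
    public static void main(String[] args)
    {
        long smallest = Long.MAX_VALUE;
        long last = System.nanoTime();
        for (int i = 0; i < 1000000; i++)
        {
            long now = System.nanoTime();
            // record the smallest positive difference between successive calls
            if (now > last && now - last < smallest)
                smallest = now - last;
            last = now;
        }
        System.out.println("Smallest observed step: " + smallest + " ns");
    }
}

If that prints something close to 1,000,000 ns, the timer really is only giving you millisecond steps.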


It's not clear to me exactly what you're benchmarking, but in general any test which runs so briefly that accuracy below 50 ms matters is going to be very prone to other disturbances.

I generally try to make benchmarks run for at least 10 seconds. The framework I'm writing at the moment will guess how many iterations to run so that it will take 30 seconds. That means you won't get radically different results just because some other process stole the CPU for a few milliseconds.

Running for longer is almost always a better approach than trying to measure with finer-grained accuracy.
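
As a sketch of that calibration idea (the names and thresholds here are illustrative, not from a real framework):

// Double the iteration count until one batch takes a measurable amount
// of time, then extrapolate to the target duration (e.g. 30 seconds).
static int guessIterations(Runnable task, long targetNanos)
{
    int iterations = 1;
    long elapsed = timeBatch(task, iterations);
    while (elapsed < 100000000L) // under 0.1 s is too noisy to extrapolate from
    {
        iterations *= 2;
        elapsed = timeBatch(task, iterations);
    }
    return (int) (iterations * ((double) targetNanos / elapsed));
}

static long timeBatch(Runnable task, int iterations)
{
    long start = System.nanoTime();
    for (int i = 0; i < iterations; i++)
        task.run();
    return System.nanoTime() - start;
}

You then time guessIterations(task, 30000000000L) iterations in a single measurement, and a few milliseconds stolen by another process barely moves the result.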


System.nanoTime() uses a counter in the CPU and is usually accurate to about 1 microsecond on Windows XP and Linux.

Note: Windows XP is often less accurate on multi-CPU machines, as it doesn't compensate for different CPUs having different counters; Linux does. Note 2: nanoTime() will drift relative to System.currentTimeMillis(), as it is based on your CPU's clock (which doesn't need to be very accurate over a period of time) rather than the clock used for wall time (which drifts less per day, but has coarser granularity).
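
You can watch the two clocks diverge with a trivial experiment like this (the longer it runs, the more visible the drift; over a single minute the difference may well round to zero):

class ClockDrift
{
    public static void main(String[] args) throws InterruptedException
    {
        long wallStart = System.currentTimeMillis();
        long nanoStart = System.nanoTime();
        Thread.sleep(60000); // wait one minute; longer shows more drift
        long wallMs = System.currentTimeMillis() - wallStart;
        long nanoMs = (System.nanoTime() - nanoStart) / 1000000;
        // If nanoTime()'s source drifts against the wall clock,
        // these two elapsed times won't quite agree.
        System.out.println("wall clock: " + wallMs + " ms, nanoTime: " + nanoMs + " ms");
    }
}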

In your benchmark you are basically testing the speed at which you can create new objects. Not surprisingly, your results will vary dramatically based on your GC settings and how recently a GC has been performed.

Try running your tests with the following options and you should see very different results.

-verbose:gc -XX:NewSize=128m -Xmx256m
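
For example, assuming the question's class is compiled as test:

java -verbose:gc -XX:NewSize=128m -Xmx256m test

With a larger young generation, far fewer minor collections will land in the middle of your timed loop.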
