Alternatives to gprof

What other programs do the same things that gprof does?


Valgrind has an instruction-count profiler (Callgrind) with a very nice visualizer called KCacheGrind. As Mike Dunlavey recommends, Valgrind counts the fraction of instructions for which a procedure is live on the stack, although I'm sorry to say it appears to become confused in the presence of mutual recursion. But the visualizer is very nice and light years ahead of gprof.
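
For example, a minimal Callgrind run might look like this (./my_program stands in for your own binary; Callgrind writes its data to a callgrind.out.<pid> file):

    valgrind --tool=callgrind ./my_program    # run under the instruction-count profiler
    kcachegrind callgrind.out.<pid>           # browse the collected data in KCacheGrind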


gprof (read the paper) exists for historical reasons. If you think it will help you find performance problems, be aware that it was never advertised as such. Here's what the paper says:

The profile can be used to compare and assess the costs of various implementations.

It does not say it can be used to identify the various implementations to be assessed, although it does imply that it could, under special circumstances:

especially if small portions of the program are found to dominate its execution time.

What about problems that are not so localized? Do those not matter? Don't place expectations on gprof that were never claimed for it. It is only a measurement tool, and only of CPU-bound operations.

Try this instead.
Here's an example of a 44x speedup.
Here's a 730x speedup.
Here's an 8-minute video demonstration.
Here's an explanation of the statistics.
Here's an answer to critiques.

There's a simple observation about programs. In a given execution, every instruction is responsible for some fraction of the overall time (call instructions especially), in the sense that if it were not there, that time would not be spent. During that time, the instruction is on the stack ** (a minimal way to take such stack samples by hand is sketched after the list below). When that is understood, you can see that -

gprof embodies certain myths about performance, such as:

  • that program counter sampling is useful.
    It is only useful if you have an unnecessary hotspot bottleneck such as a bubble sort of a big array of scalar values. As soon as you, for example, change it into a sort using string-compare, it is still a bottleneck, but program counter sampling will not see it because now the hotspot is in string-compare. On the other hand if it were to sample the extended program counter (the call stack), the point at which the string-compare is called, the sort loop, is clearly displayed. In fact, gprof was an attempt to remedy the limitations of pc-only sampling.

  • that timing functions is more important than capturing time-consuming lines of code.
    The reason for that myth is that gprof was not able to capture stack samples, so instead it times functions, counts their invocations, and tries to capture the call graph. However, once a costly function is identified, you still need to look inside it for the lines that are responsible for the time. If there were stack samples, you would not need to look; those lines would be on the samples. (A typical function might have 100-1000 instructions. A function call is 1 instruction, so something that locates costly calls is 2-3 orders of magnitude more precise.)

  • that the call graph is important.
    What you need to know about a program is not where it spends its time, but why. When it is spending time in a function, every line of code on the stack gives one link in the chain of reasoning of why it is there. If you can only see part of the stack, you can only see part of the reason why, so you can't tell for sure if that time is actually necessary. What does the call graph tell you? Each arc tells you that some function A was in the process of calling some function B for some fraction of the time. Even if A has only one such line of code calling B, that line only gives a small part of the reason why. If you are lucky, maybe that line has a poor reason. Usually, you need to see multiple simultaneous lines to find a poor reason if it is there. If A calls B in more than one place, then it tells you even less.

  • that recursion is a tricky confusing issue.
    That is only because gprof and other profilers perceive a need to generate a call-graph and then attribute times to the nodes. If one has samples of the stack, the time-cost of each line of code that appears on samples is a very simple number - the fraction of samples it is on. If there is recursion, then a given line can appear more than once on a sample. No matter. Suppose samples are taken every N ms, and the line appears on F% of them (singly or not). If that line can be made to take no time (such as by deleting it or branching around it), then those samples would disappear, and the time would be reduced by F%.

  • that accuracy of time measurement (and therefore a large number of samples) is important.
    Think about it for a second. If a line of code is on 3 samples out of 5, then if you could shoot it out like a light bulb, that is roughly 60% less time that would be used. Now, you know that if you had taken a different 5 samples, you might have seen it only 2 times, or as many as 4. So that 60% measurement is more like a general range from 40% to 80%. If it were only 40%, would you say the problem is not worth fixing? So what's the point of time accuracy, when what you really want is to find the problems? 500 or 5000 samples would have measured the problem with greater precision, but would not have found it any more accurately.

  • that counting of statement or function invocations is useful.
    Suppose you know a function has been called 1000 times. Can you tell from that what fraction of time it costs? You also need to know how long it takes to run, on average, multiply it by the count, and divide by the total time. The average invocation time could vary from nanoseconds to seconds, so the count alone doesn't tell much. If there are stack samples, the cost of a routine or of any statement is just the fraction of samples it is on. That fraction of time is what could in principle be saved overall if the routine or statement could be made to take no time, so that is what has the most direct relationship to performance.

  • that samples need not be taken when blocked
    The reasons for this myth are twofold: 1) that PC sampling is meaningless when the program is waiting, and 2) the preoccupation with accuracy of timing. However, for (1) the program may very well be waiting for something that it asked for, such as file I/O, which you need to know, and which stack samples reveal. (Obviously you want to exclude samples while waiting for user input.) For (2) if the program is waiting simply because of competition with other processes, that presumably happens in a fairly random way while it's running. So while the program may be taking longer, that will not have a large effect on the statistic that matters, the percentage of time that statements are on the stack.

  • that "self time" matters
    Self time only makes sense if you are measuring at the function level, not line level, and you think you need help in discerning if the function time goes into purely local computation versus in called routines. If summarizing at the line level, a line represents self time if it is at the end of the stack, otherwise it represents inclusive time. Either way, what it costs is the percentage of stack samples it is on, so that locates it for you in either case.

  • that samples have to be taken at high frequency
    This comes from the idea that a performance problem may be fast-acting, and that samples have to be frequent in order to hit it. But, if the problem is costing, say, 20% of a total running time of 10 sec (or whatever), then each sample in that total time will have a 20% chance of hitting it, no matter if the problem occurs in a single piece like this
    .....XXXXXXXX...........................
    .^.^.^.^.^.^.^.^.^.^.^.^.^.^.^.^.^.^.^.^ (20 samples, 4 hits)
    or in many small pieces like this
    X...X...XX.X.........X.....X....X.....
    .^.^.^.^.^.^.^.^.^.^.^.^.^.^.^.^.^.^.^.^ (20 samples, 3 hits)
    Either way, the number of hits will average about 1 in 5, no matter how many samples are taken, or how few. (Average = 20 * 0.2 = 4. Standard deviation = +/- sqrt(20 * 0.2 * 0.8) = 1.8.)

  • that you are trying to find the bottleneck
    as if there were only one. Consider the following execution timeline: vxvWvzvWvxvWvYvWvxvWv.vWvxvWvYvW
    It consists of real useful work, represented by the character . . There are performance problems v, W, x, Y, z taking 1/2, 1/4, 1/8, 1/16, 1/32 of the time, respectively. Sampling finds v easily. It is removed, leaving
    xWzWxWYWxW.WxWYW
    Now the program takes half as long to run, and now W takes half the time, and is found easily. It is removed, leaving
    xzxYx.xY
    This process continues, each time removing the biggest, by percentage, performance problem, until nothing to remove can be found. Now the only thing executed is . , which executes in 1/32 of the time used by the original program. This is the magnification effect, by which removing any problem makes the remainder larger, by percent, because the denominator is reduced.
    Another crucial point is that every single problem must be found - missing none of the 5. Any problem not found and fixed severely reduces the final speedup ratio. Just finding some, but not all, is not "good enough".

  • ADDED: I would just like to point out one reason why gprof is popular - it is being taught, presumably because it's free, easy to teach, and it's been around a long time. A quick Google search locates some academic institutions that teach it (or appear to):

    berkeley bu clemson colorado duke earlham fsu indiana mit msu ncsa.illinois ncsu nyu ou princeton psu stanford ucsd umd umich utah utexas utk wustl

    ** With the exception of other ways of requesting work to be done that don't leave a trace telling why, such as message posting.
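
As referenced above, here is a minimal sketch of taking such stack samples by hand. It assumes the program was built with -g and is running as a.out; any debugger that can pause a process and print its call stack works just as well:

    # Pause the running process, print the call stack of every thread, and resume.
    # Repeating this a handful of times at random moments gives the samples:
    # any line that appears on a large fraction of them costs that fraction of time.
    gdb -p `pidof a.out` -batch -ex "thread apply all bt"

The point is not the tool but the method: a few random-time samples of the whole stack, read by a human, locate the kinds of problems described above.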


    Since I didn't see anything here about perf, which is a relatively new tool for profiling the kernel and user applications on Linux, I decided to add this information.

    First of all, this is a tutorial about Linux profiling with perf.

    You can use perf if your Linux kernel is newer than 2.6.32, or oprofile if it is older. Neither program requires you to instrument your program (as gprof does). However, in order to get the call graph correctly in perf, you need to build your program with -fno-omit-frame-pointer. For example: g++ -fno-omit-frame-pointer -O2 main.cpp.

    You can see "live" analysis of your application with perf top (here -K hides kernel symbols):

    sudo perf top -p `pidof a.out` -K
    

    Or you can record performance data of a running application and analyze it after that:

    1) To record performance data:

    perf record -p `pidof a.out`
    

    or to record for 10 secs:

    perf record -p `pidof a.out` sleep 10
    

    or to record with call graph information:

    perf record -g -p `pidof a.out` 
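
    If rebuilding with -fno-omit-frame-pointer is not practical, newer versions of perf (an assumption: perf from Linux 3.7 or later) can instead unwind the stack using DWARF debug info:

    perf record --call-graph dwarf -p `pidof a.out`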
    

    2) To analyze the recorded data:

    perf report --stdio
    perf report --stdio --sort=dso -g none
    perf report --stdio -g none
    perf report --stdio -g
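
    Once a hot function shows up in the report, perf annotate (a further step, not used in the rest of this walkthrough) breaks its cost down to individual instructions, reading perf.data by default:

    perf annotate --stdio    # per-instruction breakdown of the sampled symbols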
    

    Or you can record performance data of an application and analyze it after that, just by launching the application in this way and waiting for it to exit:

    perf record ./a.out
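
    The options shown earlier compose with this style too; for example, to record a launch-to-exit run together with call graphs:

    perf record -g ./a.out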
    

    This is an example of profiling a test program.

    The test program is in file main.cpp (I will put main.cpp at the bottom of the message):

    I compile it in this way:

    g++ -m64 -fno-omit-frame-pointer -g main.cpp -L.  -ltcmalloc_minimal -o my_test
    

    I use libtcmalloc_minimal.so since it is compiled with -fno-omit-frame-pointer, while libc malloc seems to be compiled without this option. Then I run my test program:

    ./my_test 100000000 
    

    Then I record performance data of a running process:

    perf record -g  -p `pidof my_test` -o ./my_test.perf.data sleep 30
    

    Then I analyze load per module:

    perf report --stdio -g none --sort comm,dso -i ./my_test.perf.data

    # Overhead  Command                 Shared Object
    # ........  .......  ............................
    #
        70.06%  my_test  my_test
        28.33%  my_test  libtcmalloc_minimal.so.0.1.0
         1.61%  my_test  [kernel.kallsyms]
    

    Then load per function is analyzed:

    perf report --stdio -g none -i ./my_test.perf.data | c++filt

    # Overhead  Command                 Shared Object                       Symbol
    # ........  .......  ............................  ...........................
    #
        29.30%  my_test  my_test                       [.] f2(long)
        29.14%  my_test  my_test                       [.] f1(long)
        15.17%  my_test  libtcmalloc_minimal.so.0.1.0  [.] operator new(unsigned long)
        13.16%  my_test  libtcmalloc_minimal.so.0.1.0  [.] operator delete(void*)
         9.44%  my_test  my_test                       [.] process_request(long)
         1.01%  my_test  my_test                       [.] operator delete(void*)@plt
         0.97%  my_test  my_test                       [.] operator new(unsigned long)@plt
         0.20%  my_test  my_test                       [.] main
         0.19%  my_test  [kernel.kallsyms]             [k] apic_timer_interrupt
         0.16%  my_test  [kernel.kallsyms]             [k] _spin_lock
         0.13%  my_test  [kernel.kallsyms]             [k] native_write_msr_safe
    
         and so on ...
    

    Then call chains are analyzed:

    perf report --stdio -g graph -i ./my_test.perf.data | c++filt

    # Overhead  Command                 Shared Object                       Symbol
    # ........  .......  ............................  ...........................
    #
        29.30%  my_test  my_test                       [.] f2(long)
                |
                --- f2(long)
                   |
                    --29.01%-- process_request(long)
                              main
                              __libc_start_main
    
        29.14%  my_test  my_test                       [.] f1(long)
                |
                --- f1(long)
                   |
                   |--15.05%-- process_request(long)
                   |          main
                   |          __libc_start_main
                   |
                    --13.79%-- f2(long)
                              process_request(long)
                              main
                              __libc_start_main
    
        15.17%  my_test  libtcmalloc_minimal.so.0.1.0  [.] operator new(unsigned long)
                |
                --- operator new(unsigned long)
                   |
                   |--11.44%-- f1(long)
                   |          |
                   |          |--5.75%-- process_request(long)
                   |          |          main
                   |          |          __libc_start_main
                   |          |
                   |           --5.69%-- f2(long)
                   |                     process_request(long)
                   |                     main
                   |                     __libc_start_main
                   |
                    --3.01%-- process_request(long)
                              main
                              __libc_start_main
    
        13.16%  my_test  libtcmalloc_minimal.so.0.1.0  [.] operator delete(void*)
                |
                --- operator delete(void*)
                   |
                   |--9.13%-- f1(long)
                   |          |
                   |          |--4.63%-- f2(long)
                   |          |          process_request(long)
                   |          |          main
                   |          |          __libc_start_main
                   |          |
                   |           --4.51%-- process_request(long)
                   |                     main
                   |                     __libc_start_main
                   |
                   |--3.05%-- process_request(long)
                   |          main
                   |          __libc_start_main
                   |
                    --0.80%-- f2(long)
                              process_request(long)
                              main
                              __libc_start_main
    
         9.44%  my_test  my_test                       [.] process_request(long)
                |
                --- process_request(long)
                   |
                    --9.39%-- main
                              __libc_start_main
    
         1.01%  my_test  my_test                       [.] operator delete(void*)@plt
                |
                --- operator delete(void*)@plt
    
         0.97%  my_test  my_test                       [.] operator new(unsigned long)@plt
                |
                --- operator new(unsigned long)@plt
    
         0.20%  my_test  my_test                       [.] main
         0.19%  my_test  [kernel.kallsyms]             [k] apic_timer_interrupt
         0.16%  my_test  [kernel.kallsyms]             [k] _spin_lock
         and so on ...
    

    So at this point you know where your program spends time.

    And this is main.cpp for the test:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    
    // Small busy-work function; it allocates and frees on some iterations
    // so that the allocator shows up in the profile.
    time_t f1(time_t time_value)
    {
      for (int j = 0; j < 10; ++j) {
        ++time_value;
        if (j % 5 == 0) {
          double *p = new double;
          delete p;
        }
      }
      return time_value;
    }
    
    time_t f2(time_t time_value)
    {
      for (int j = 0; j < 40; ++j) {
        ++time_value;
      }
      time_value = f1(time_value);
      return time_value;
    }
    
    time_t process_request(time_t time_value)
    {
      for (int j = 0; j < 10; ++j) {
        int *p = new int;
        delete p;
        for (int m = 0; m < 10; ++m) {
          ++time_value;
        }
      }
      for (int i = 0; i < 10; ++i) {
        time_value = f1(time_value);
        time_value = f2(time_value);
      }
      return time_value;
    }
    
    int main(int argc, char* argv2[])
    {
      // Number of iterations is taken from the command line (default 1).
      int number_loops = argc > 1 ? atoi(argv2[1]) : 1;
      time_t time_value = time(0);
      printf("number loops %d\n", number_loops);
      printf("time_value: %ld\n", (long)time_value);
    
      for (int i = 0; i < number_loops; ++i) {
        time_value = process_request(time_value);
      }
      printf("time_value: %ld\n", (long)time_value);
      return 0;
    }
    