Speeding Up Python

This is really two questions, but they are so similar that, to keep it simple, I figured I'd just roll them together:

  • Firstly: Given an established python project, what are some decent ways to speed it up beyond just plain in-code optimization?

  • Secondly: When writing a program from scratch in python, what are some good ways to greatly improve performance?

  • For the first question, imagine you are handed a decently written project and you need to improve performance, but you can't seem to get much of a gain through refactoring/optimization. What would you do to speed it up in this case short of rewriting it in something like C?


    Regarding "Secondly: When writing a program from scratch in python, what are some good ways to greatly improve performance?"

    Remember the Jackson rules of optimization:

  • Rule 1: Don't do it.
  • Rule 2 (for experts only): Don't do it yet.
    And the Knuth rule:

  • "Premature optimization is the root of all evil."

    The more useful rules are in the General Rules for Optimization:

  • Don't optimize as you go. First get it right. Then get it fast. Optimizing a wrong program is still wrong.

  • Remember the 80/20 rule.

  • Always run "before" and "after" benchmarks. Otherwise, you won't know if you've found the 80%. (A minimal timeit sketch appears below.)

  • Use the right algorithms and data structures. This rule should really come first: nothing matters as much as the choice of algorithm and data structure.

    Bottom Line

    You can't prevent or avoid the "optimize this program" effort. It's part of the job. You have to plan for it and do it carefully, just like the design, code and test activities.
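
    As a concrete way to run those "before" and "after" benchmarks, the standard-library timeit module is usually enough. A minimal sketch, where the two functions are just placeholders for an original and a reworked implementation:

    import timeit

    def before():
        # placeholder: original implementation (builds a throwaway list)
        return sum([i * i for i in range(10000)])

    def after():
        # placeholder: reworked implementation (generator expression, no list)
        return sum(i * i for i in range(10000))

    if __name__ == "__main__":
        for fn in (before, after):
            seconds = timeit.timeit(fn, number=1000)
            print("%s: %.4f s for 1000 runs" % (fn.__name__, seconds))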


    Rather than just punting to C, I'd suggest:

    Make your code count. Do more with fewer executions of lines:

  • Change the algorithm to a faster one. It doesn't need to be fancy to be faster in many cases.
  • Use Python primitives that happen to be written in C. Some constructs force an interpreter dispatch per element where others won't; the latter are preferable.
  • Beware of code that first constructs a big data structure and then consumes it. Think of the difference between range and xrange. In general, it is often worth thinking about the program's memory usage; using generators can sometimes bring O(n) memory use down to O(1). (The first sketch after this list illustrates both of these points.)
  • Python is generally non-optimizing. Hoist invariant code out of loops and eliminate common subexpressions where possible in tight loops. (See the second sketch after this list.)
  • If something is expensive, then precompute or memoize it. Regular expressions can be compiled, for instance.
  • Need to crunch numbers? You might want to check out numpy. (See the third sketch after this list.)
  • Many Python programs are slow because they are bound by disk I/O or database access. Make sure you have something worthwhile to do while you wait for the data to arrive rather than just blocking; something like the Twisted framework can be a weapon here.
  • Note that many crucial data-processing libraries have C versions, be it XML, JSON or whatnot. They are often considerably faster than their pure-Python counterparts.
  • If all of the above fails for profiled and measured code, then begin thinking about the C-rewrite path.
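
    To make the first two points concrete, here is a minimal sketch contrasting an explicit Python-level loop that materializes a list with a C-implemented builtin fed by a generator expression (the function names are just illustrative):

    # explicit loop: one interpreter dispatch per element, plus an O(n) list
    def total_squares_loop(n):
        squares = []
        for i in range(n):          # in Python 2, xrange avoids building a list here
            squares.append(i * i)
        total = 0
        for s in squares:
            total += s
        return total

    # sum() runs in C, and the generator expression keeps extra memory at O(1)
    def total_squares_gen(n):
        return sum(i * i for i in range(n))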
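
    For the hoisting and precomputation points, a small sketch that scans a hypothetical list of log lines (the pattern and names are assumptions, not from the original answer):

    import re

    ERROR_RE = re.compile(r"error\s+\d+")   # compiled once, not inside the loop

    def count_errors(lines, keyword="error"):
        needle = keyword.lower()            # loop-invariant work hoisted out
        count = 0
        for line in lines:
            low = line.lower()              # common subexpression computed once per line
            if needle in low and ERROR_RE.search(low):
                count += 1
        return count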
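
    And for number crunching, numpy moves the inner loop into C over a contiguous array; a rough sketch with illustrative function names:

    import numpy as np

    def scale_and_sum_py(values, factor):
        # pure Python: one interpreter dispatch per element
        return sum(v * factor for v in values)

    def scale_and_sum_np(values, factor):
        # numpy: the multiply and the sum both run as C loops
        arr = np.asarray(values, dtype=float)
        return float((arr * factor).sum())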


    The usual suspects -- profile it, find the most expensive line, figure out what it's doing, fix it. If you haven't done much profiling before, there could be some big fat quadratic loops or string duplication hiding behind otherwise innocuous-looking expressions.
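
    If you have not done this before, the standard-library cProfile and pstats modules are the usual starting point. A minimal sketch, where main() stands in for your program's real entry point:

    import cProfile
    import pstats

    def main():
        # placeholder for the code you actually want to profile
        total = 0
        for i in range(100000):
            total += len(str(i))
        return total

    if __name__ == "__main__":
        cProfile.run("main()", "profile.out")
        pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)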

    In Python, two of the most common causes I've found for non-obvious slowdown are string concatenation and generators. Since Python's strings are immutable, doing something like this:

    result = u""
    for item in my_list:
        result += unicode (item)
    

    will copy the entire string twice per iteration. This has been well-covered, and the solution is to use "".join:

    result = "".join (unicode (item) for item in my_list)
    

    Generators are another culprit. They're very easy to use and can simplify some tasks enormously, but a poorly-applied generator will be much slower than simply appending items to a list and returning the list.
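
    As a rough illustration of that trade-off (the tokenizing functions here are hypothetical): in a small, hot inner loop the cost of suspending and resuming the generator frame on every item can outweigh the benefit of laziness.

    def tokens_list(text):
        # eager: builds the whole list up front, cheap per-item cost
        out = []
        for word in text.split():
            out.append(word.strip(",."))
        return out

    def tokens_gen(text):
        # lazy: each item pays the generator suspend/resume overhead
        for word in text.split():
            yield word.strip(",.")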

    Finally, don't be afraid to rewrite bits in C! Python, as a dynamic high-level language, is simply not capable of matching C's speed. If there's one function that you can't optimize any more in Python, consider extracting it to an extension module.

    My favorite technique for this is to maintain both Python and C versions of a module. The Python version is written to be as clear and obvious as possible -- any bugs should be easy to diagnose and fix. Write your tests against this module. Then write the C version, and test it. Its behavior should in all cases equal that of the Python implementation -- if they differ, it should be very easy to figure out which is wrong and correct the problem.
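
    A sketch of that dual-implementation setup, assuming hypothetical modules textstats_py (the clear pure-Python version) and _textstats (the C extension) that expose the same word_count function:

    import textstats_py                 # obvious, easy-to-debug reference version
    try:
        import _textstats as fast       # optional C extension with the same API
    except ImportError:
        fast = None                     # C version not built; test only the reference

    def test_c_matches_python():
        samples = ["", "a", "hello world", "x " * 1000]
        for text in samples:
            expected = textstats_py.word_count(text)
            if fast is not None:
                assert fast.word_count(text) == expected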
