When is optimization premature?

I see this term used a lot but I feel like most people use it out of laziness or ignorance. For instance, I was reading this article:

http://blogs.msdn.com/b/ricom/archive/2006/09/07/745085.aspx

where he talks about the decisions he makes when implementing the types needed for his app.

If it were me talking about these considerations for code we need to write, other programmers would think either:

  • I am thinking too far ahead when nothing has been written yet, and thus prematurely optimizing.
  • I am over-thinking insignificant details when no slowdowns or performance problems have been experienced.
  • Or both.

    and would suggest just implementing it and not worrying about these things until they become a problem.

    Which approach is preferable?

    How do you differentiate between premature optimization and informed decision-making for a performance-critical application, before any implementation is done?


    Optimization is premature if:

  • Your application isn't doing anything time-critical. (Which means, if you're writing a program that adds up 500 numbers in a file, the word "optimization" shouldn't even pop into your brain, since all it'll do is waste your time.)

  • You're doing something time-critical in something other than assembly, and still worrying whether i++; i++; is faster than i += 2 ... if it's really that critical, you'd be working in assembly and not wasting time worrying about this. (Even then, this particular example most likely won't matter.)

  • You have a hunch that one thing might be a bit faster than the other, but you need to look it up. For example, if something is bugging you about whether Stopwatch is faster than Environment.TickCount, it's premature optimization, since if the difference were bigger, you'd probably be more sure and wouldn't need to look it up. (A sketch of what such a check would even look like follows this list.)

  • If you have a guess that something might be slow but you're not too sure, just put a //NOTE: Performance? comment, and if you later run into bottlenecks, check such places in your code. I personally don't worry about optimizations that aren't too obvious; I just use a profiler later, if I need to.
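    For what it's worth, "looking it up" for a hunch like the Stopwatch-vs-TickCount one is only a few lines of code. A minimal sketch along those lines, assuming you just want to compare the cost of reading each timer (the iteration count and console output are purely illustrative):

        using System;
        using System.Diagnostics;

        class TimerComparison
        {
            static void Main()
            {
                const int iterations = 10000000;
                long sink = 0;

                // Time many reads of Environment.TickCount (millisecond tick counter).
                var sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++)
                    sink += Environment.TickCount;
                sw.Stop();
                Console.WriteLine("Environment.TickCount: " + sw.ElapsedMilliseconds + " ms");

                // Time many reads of Stopwatch.GetTimestamp (high-resolution counter).
                sw.Restart();
                for (int i = 0; i < iterations; i++)
                    sink += Stopwatch.GetTimestamp();
                sw.Stop();
                Console.WriteLine("Stopwatch.GetTimestamp: " + sw.ElapsedMilliseconds + " ms");

                Console.WriteLine(sink); // keeps the compiler from discarding the loops
            }
        }

    Either way, the answer is almost certainly "it doesn't matter for your program", which is exactly the point of the bullet above.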

    Another technique:

    I just run my program, randomly break into it with the debugger, and see where it stopped -- wherever it stops is likely a bottleneck, and the more often it stops there, the worse the bottleneck. It works almost like magic. :)


    This proverb does not (I believe) refer to optimizations that are built into a good design as it is created. It refers to tasks specifically targeted at performance, which otherwise would not be undertaken.

    This kind of optimization does not "become" premature, according to the common wisdom — it is guilty until proven innocent.


    Optimisation is the process of making existing code run more efficiently (faster speed and/or less resource usage).

    All optimisation is premature if the programmer has not proven that it is necessary, for example by running the code to determine whether it achieves the correct results in an acceptable timeframe. This could be as simple as running it to "see" if it runs fast enough, or running it under a profiler to analyze it more carefully.

    There are several stages to programming something well:

    1) Design the solution and pick a good, efficient algorithm.

    2) Implement the solution in a maintainable, well coded manner.

    3) Test the solution and see if it meets your requirements on speed, RAM usage, etc. (e.g. "When the user clicks 'Save', does it take less than 1 second?" If it takes 0.3s, you really don't need to spend a week optimising it to get that time down to 0.2s.) A minimal sketch of such a check follows step 5.

    4) IF it does not meet the requirements, consider why. In most cases this means go to step (1) to find a better algorithm now that you understand the problem better. (Writing a quick prototype is often a good way of exploring this cheaply)

    5) IF it still does not meet the requirements, start considering optimisations that may help speed up the runtime (for example, look-up tables, caching, etc). To drive this process, profiling is usually an important tool to help you locate the bottlenecks and inefficiencies in the code, so you can make the greatest gain for the time you spend on the code.
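    As a concrete illustration of the check in step 3, the measurement can be as small as a stopwatch around the operation and a comparison against the budget. A minimal sketch, assuming a hypothetical SaveDocument operation and a one-second requirement (the names and numbers are made up for illustration):

        using System;
        using System.Diagnostics;
        using System.Threading;

        class SaveTimingCheck
        {
            // Hypothetical stand-in for the real "Save" operation being measured.
            static void SaveDocument()
            {
                Thread.Sleep(300); // pretend the save takes roughly 0.3 seconds
            }

            static void Main()
            {
                var sw = Stopwatch.StartNew();
                SaveDocument();
                sw.Stop();

                // Requirement from step 3: "Save" should complete in under 1 second.
                double seconds = sw.Elapsed.TotalSeconds;
                if (seconds < 1.0)
                    Console.WriteLine("OK: save took " + seconds.ToString("F2") + " s - no optimisation needed");
                else
                    Console.WriteLine("Too slow: save took " + seconds.ToString("F2") + " s - revisit the design (step 4)");
            }
        }

    If the measured time is comfortably inside the requirement, steps 4 and 5 never happen, which is the point being made here.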

    I should point out that an experienced programmer working on a reasonably familiar problem may be able to jump through the first steps mentally and then just apply a pattern, rather than physically going through this process every time; but this is simply a shortcut gained through experience.

    Thus, there are many "optimisations" that experienced programmers will build into their code automatically. These are not "premature optimisations" so much as "common-sense efficiency patterns". These patterns are quick and easy to implement, but vastly improve the efficiency of the code, and you don't need to do any special timing tests to work out whether or not they will be of benefit:

  • Not putting unnecessary code into loops. (Similar to the optimisation of removing unnecessary code from existing loops, but it doesn't involve writing the code twice!)
  • Storing intermediate results in variables rather than re-calculating things over and over.
  • Using look-up tables to provide precomputed values rather than calculating them on the fly. (The sketch after this list combines this pattern with the two above.)
  • Using appropriately sized data structures (e.g. storing a percentage in a byte (8 bits) rather than a long (64 bits) uses one eighth of the RAM).
  • Drawing a complex window background using a pre-drawn image rather than drawing lots of individual components
  • Applying compression to packets of data you intend to send over a low-speed connection to minimise the bandwidth usage.
  • Drawing images for your web page in a style that allows you to use a format that will get high quality and good compression.
  • And of course, although it's not technically an "optimisation", choosing the right algorithm in the first place!
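    A minimal sketch of the first three patterns working together (the sine table and the function names are invented purely for illustration): precompute a look-up table once, store intermediate results, and keep unnecessary work out of the loop.

        using System;

        class EfficiencyPatterns
        {
            // Look-up table: sine of each whole degree, computed once up front
            // rather than calling Math.Sin on every use.
            static readonly double[] SineTable = BuildSineTable();

            static double[] BuildSineTable()
            {
                var table = new double[360];
                for (int deg = 0; deg < 360; deg++)
                    table[deg] = Math.Sin(deg * Math.PI / 180.0);
                return table;
            }

            static double SumOfScaledSines(int[] anglesInDegrees, double scale)
            {
                double sum = 0.0;
                int count = anglesInDegrees.Length; // intermediate result stored once, not re-read each iteration
                for (int i = 0; i < count; i++)
                {
                    // The loop body only does the work that actually varies per iteration.
                    sum += scale * SineTable[anglesInDegrees[i] % 360];
                }
                return sum;
            }

            static void Main()
            {
                var angles = new[] { 0, 30, 45, 90, 180, 270 };
                Console.WriteLine(SumOfScaledSines(angles, 2.0));
            }
        }

    None of this needed a timing test to justify; it is simply not doing the same work twice, which is the "common-sense efficiency" distinction drawn above.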
    For example, I just replaced an old piece of code in our project. My new code is not "optimised" in any way, but (unlike the original implementation) it was written with efficiency in mind. The result: mine runs 25 times faster - simply by not being wasteful. Could I optimise it to make it faster? Yes, I could easily get another 2x speedup. Will I optimise my code to make it faster? No - a 5x speed improvement would have been sufficient, and I have already achieved 25x. Further work at this point would just be a waste of precious programming time. (But I can revisit the code in future if the requirements change.)

    Finally, one last point: The area you are working in dictates the bar you must meet. If you are writing a graphics engine for a game or code for a real-time embedded controller, you may well find yourself doing a lot of optimisation. If you are writing a desktop application like a notepad, you may never need to optimise anything as long as you aren't overly wasteful.
