When is optimisation premature?
As Knuth said,
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
This comes up often in Stack Overflow answers to questions like "which is the most efficient loop mechanism", "SQL optimisation techniques?", and so on. The standard answer to these optimisation-tips questions is to profile your code first and see whether the code in question is actually a problem; if it isn't, then the new technique is unneeded.
My question is, if a particular technique is different but not particularly obscure or obfuscated, can that really be considered a premature optimisation?
Here's a related article by Randall Hyde called The Fallacy of Premature Optimization.
Don Knuth started the literate programming movement because he believed that the most important function of computer code is to communicate the programmer's intent to a human reader. Any coding practice that makes your code harder to understand in the name of performance is a premature optimization.
Certain idioms that were introduced in the name of optimization have become so popular that everyone understands them and they have become expected, not premature. Examples include
Using pointer arithmetic instead of array notation in C (a complete sketch follows this list), including the use of such idioms as
for (p = q; p < lim; p++)
Rebinding global variables to local variables in Lua, as in
local table, io, string, math = table, io, string, math
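To make the pointer-arithmetic idiom concrete, here is a minimal, self-contained sketch (the function and variable names are illustrative, not from the original answer) comparing array indexing with the pointer form:

#include <stdio.h>

/* Sum an array two ways: index notation versus the pointer idiom above. */
static long sum_indexed(const int *a, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += a[i];
    return total;
}

static long sum_pointer(const int *a, size_t n)
{
    long total = 0;
    const int *lim = a + n;               /* one past the last element */
    for (const int *p = a; p < lim; p++)  /* the idiom from the list above */
        total += *p;
    return total;
}

int main(void)
{
    int data[] = {3, 1, 4, 1, 5, 9, 2, 6};
    size_t n = sizeof data / sizeof data[0];
    printf("%ld %ld\n", sum_indexed(data, n), sum_pointer(data, n));
    return 0;
}

Both versions are equally idiomatic to an experienced C reader, which is exactly why the pointer form no longer counts as premature.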
Beyond such idioms, take shortcuts at your peril.
All optimization is premature unless
A program is too slow (many people forget this part).
You have a measurement (profile or similar; a rough sketch of such a measurement follows below) showing that the optimization could improve things.
(It's also permissible to optimize for memory.)
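As an illustration of what such a measurement might look like (a minimal sketch using the standard clock() timer; work() is a hypothetical stand-in for the code under suspicion, and a real profiler gives far better data):

#include <stdio.h>
#include <time.h>

/* Hypothetical hot routine standing in for the code under suspicion. */
static long work(void)
{
    long total = 0;
    for (long i = 0; i < 50000000L; i++)
        total += i % 7;
    return total;
}

int main(void)
{
    clock_t start = clock();
    long result = work();
    double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;
    /* Only if numbers like this show the routine dominating the program's
       runtime is optimizing it not premature. */
    printf("result=%ld, cpu time=%.3f s\n", result, seconds);
    return 0;
}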
Direct answer to the question: if your "different" technique makes the program harder to understand, then it's a premature optimization.
EDIT: In response to comments, using quicksort instead of a simpler algorithm like insertion sort is another example of an idiom that everyone understands and expects. (Although if you write your own sort routine instead of using the library sort routine, one hopes you have a very good reason.)
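For instance, a minimal sketch of leaning on the library routine rather than writing your own (the comparator is the usual qsort boilerplate; the data values are arbitrary):

#include <stdio.h>
#include <stdlib.h>

/* Standard comparator for qsort: ascending ints, written to avoid overflow. */
static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    int v[] = {42, 7, 19, 3, 25};
    size_t n = sizeof v / sizeof v[0];
    /* The library sort: every reader understands it, and it needs no
       justification the way a hand-rolled sort would. */
    qsort(v, n, sizeof v[0], cmp_int);
    for (size_t i = 0; i < n; i++)
        printf("%d ", v[i]);
    putchar('\n');
    return 0;
}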
IMHO, 90% of your optimization should occur at the design stage, based on perceived current and, more importantly, future requirements. If you have to take out a profiler because your application doesn't scale to the required load, you have left it too late, and IMO you will waste a lot of time and effort while still failing to correct the problem.
Typically the only optimizations that are worthwhile are those that gain you an order of magnitude performance improvement in terms of speed, or a multiplier in terms of storage or bandwidth. These types of optimizations typically relate to algorithm selection and storage strategy, and are extremely difficult to retrofit into existing code. They may go as deep as influencing the decision on the language in which you implement your system.
So my advice: optimize early, based on your requirements rather than your code, and look to the possible extended life of your app.
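As a rough sketch of the kind of design-level, order-of-magnitude choice meant above (the names and data are illustrative): deciding at design time to keep a lookup table sorted lets every query use binary search (O(log n)) instead of a linear scan (O(n)), a decision that is cheap up front and painful to retrofit later.

#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* O(n) lookup: fine for small n, a scaling liability by design. */
static int contains_linear(const int *a, size_t n, int key)
{
    for (size_t i = 0; i < n; i++)
        if (a[i] == key)
            return 1;
    return 0;
}

/* O(log n) lookup: only possible because the design keeps the data sorted. */
static int contains_binary(const int *a, size_t n, int key)
{
    return bsearch(&key, a, n, sizeof *a, cmp_int) != NULL;
}

int main(void)
{
    int sorted[] = {1, 3, 7, 9, 12, 18, 25, 31};
    size_t n = sizeof sorted / sizeof sorted[0];
    printf("%d %d\n", contains_linear(sorted, n, 18),
                      contains_binary(sorted, n, 18));
    return 0;
}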
If you haven't profiled yet, it's premature.