O notation? How do you come up with figures like O(n)?

Possible Duplicate:
Plain english explanation of Big O

I'd imagine this is probably something taught in classes, but as a self-taught programmer I've only seen it rarely.

I've gathered it has something to do with run time, and that O(1) is the best while stuff like O(n^n) is very bad, but could someone point me to a basic explanation of what it actually represents and where these numbers come from?


Big O refers to the order of an algorithm's worst-case running time. It is used to show how well an algorithm scales as the size of the data set grows (n = number of items).

Since we are only concerned with the order, constant multipliers are ignored, and any terms which increase less quickly than the dominant term are also removed. Some examples:

A single operation or set of operations is O(1), since it takes some constant time (does not vary based on data set size).

A loop is O(n). Each element in the data set is looped over.

A loop nested inside another loop is O(n^2); adding another level of nesting gives O(n^3), and so on.
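A minimal sketch of those loop shapes in Python (the function names are just illustrative):

    def constant_time(items):
        # O(1): a fixed number of operations, no matter how long the list is
        return items[0]

    def linear_time(items):
        # O(n): one pass over every element
        total = 0
        for x in items:
            total += x
        return total

    def quadratic_time(items):
        # O(n^2): a loop nested inside a loop, so roughly n * n iterations
        pairs = 0
        for x in items:
            for y in items:
                pairs += 1
        return pairs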

Things like searching a balanced binary tree are O(log n), which is harder to show, but at every level of the tree the number of remaining candidates is halved, so the number of levels is about log2(n) (provided the tree is balanced).
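Binary search over a sorted list works the same way: each comparison discards half of the remaining range, so it finishes after about log2(n) steps. A rough sketch (assuming the input list is already sorted):

    def binary_search(sorted_items, target):
        # Each iteration halves the remaining range, so a list of n items
        # is exhausted after about log2(n) iterations: O(log n).
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid
            elif sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1  # not found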

Something like finding the subset of a set of numbers whose sum is closest to a given value is O(2^n) by brute force, since the sum of every subset (and a set of n numbers has 2^n subsets) needs to be calculated. This is very bad.
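A brute-force sketch of that idea, which simply enumerates every subset (all 2^n of them), so it becomes impractical even for modest n:

    from itertools import combinations

    def closest_subset_sum(numbers, target):
        # Brute force: try every subset. A set of n numbers has 2^n subsets,
        # so the running time grows exponentially with n.
        best = None
        for size in range(len(numbers) + 1):
            for subset in combinations(numbers, size):
                s = sum(subset)
                if best is None or abs(s - target) < abs(best - target):
                    best = s
        return best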


It's a way of expressing time complexity.

O(n) means that for n elements in a list, it takes on the order of n computations to process the list, which isn't bad at all. Each increase in n increases the running time linearly.

O(n^n) is bad, because the amount of computation required to perform the task (a sort, or whatever you are doing) grows explosively, even faster than exponentially, as you increase n.

O(1) is the best, as it means a constant number of computations regardless of input size. Think of hash tables: looking up a value in a hash table has O(1) (average-case) time complexity.
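A small illustration of the difference (average-case dict lookup versus scanning a list; the data here is made up):

    ages = {"alice": 30, "bob": 25, "carol": 41}   # hash table (dict)
    print(ages["bob"])          # O(1) on average: cost does not grow with dict size

    names = list(ages)          # plain list
    print("carol" in names)     # O(n): may have to scan every element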


Big O notation as applied to an algorithm refers to how the run time of the algorithm depends on the amount of input data. For example, a sorting algorithm will take longer to sort a large data set than a small one. If, for the sorting example, you graph the run time (vertical axis) against the number of values to sort (horizontal axis), for input sizes from zero up to a large number, the shape of the resulting line or curve will depend on the sorting algorithm used. Big O notation is a shorthand for describing that line or curve.

In big O notation, the expression in the brackets is the function that is graphed. If a variable (say n) appears in the expression, it refers to the size of the input data set. You say O(1) is the best, and this is true: the graph of f(n) = 1 does not vary with n, so an O(1) algorithm takes the same amount of time to complete regardless of the size of the input data set. By contrast, the run time of an O(n^2) algorithm grows with the square of the input size, and an O(n^n) algorithm grows far faster still.
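If you want to see those curves yourself, a rough timing sketch (using two toy functions; exact timings will vary by machine) looks like this:

    import time

    def linear(n):
        return sum(range(n))                                   # O(n)

    def quadratic(n):
        return sum(i * j for i in range(n) for j in range(n))  # O(n^2)

    for n in (500, 1000, 2000):
        for fn in (linear, quadratic):
            start = time.perf_counter()
            fn(n)
            elapsed = time.perf_counter() - start
            print(f"{fn.__name__}(n={n}): {elapsed:.4f}s")
    # Doubling n roughly doubles the linear times and roughly
    # quadruples the quadratic times.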

That is the basic idea; for a detailed explanation, consult the Wikipedia page titled 'Big O notation'.
