What is order of complexity in Big O notation?

Question

Hi, I am trying to understand what 'order of complexity' means in terms of Big O notation. I have read many articles but have yet to find anything that explains exactly what 'order of complexity' is, even in the useful descriptions of Big O on here.

What I already understand about big O

The part I already understand about Big O notation is that we measure the time and space complexity of an algorithm in terms of the growth of the input size n. I also understand that certain sorting methods have best, worst and average cases for Big O, such as O(n), O(n^2) etc., where n is the input size (the number of elements to be sorted).

Any simple definitions or examples would be greatly appreciated, thanks.


Big O is about finding an upper bound for the growth of some function. See the formal definition on Wikipedia: http://en.wikipedia.org/wiki/Big_O_notation

So if you've got an algorithm that sorts an array of size n, requires only a constant amount of extra space, and takes (for example) 2n² + n steps to complete, then you would say its space complexity is O(n) or O(1) (depending on whether you count the size of the input array or not) and its time complexity is O(n²).

Knowing only those O numbers, you can roughly determine how much more space and time is needed to go from n to n + 100 or 2n or whatever you are interested in. That is how well an algorithm "scales".
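The scaling claim above can be checked numerically. A minimal sketch, reusing the example cost function 2n² + n from the sorting example (the function name `steps` is just illustrative):

```python
# Sketch: for a quadratic-time algorithm, O(n^2), doubling n should roughly
# quadruple the total work once n is large, because the n^2 term dominates.
def steps(n):
    return 2 * n**2 + n

ratio = steps(2000) / steps(1000)
print(ratio)  # close to 4: doubling the input quadruples the work
```

The Big O class alone is enough to make this prediction; the constant factor 2 and the lower-order term n drop out of the ratio.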

Update

Big O and order of complexity are really just two terms for the same thing. You can say "linear complexity" instead of O(n), "quadratic complexity" instead of O(n²), and so on.


Big-O analysis is a form of runtime analysis that measures the efficiency of an algorithm in terms of the time it takes for the algorithm to run as a function of the input size. It's not a formal benchmark, just a simple way to classify algorithms by relative efficiency when dealing with very large input sizes.

Update: The fastest possible running time for any algorithm is O(1), commonly referred to as constant running time. An algorithm with constant running time always takes the same amount of time to execute, regardless of the input size. This is the ideal running time for an algorithm, but it's rarely achievable. The performance of most algorithms depends on n, the size of the input. Common algorithm classes can be ranked from best to worst performance as follows:

O(log n) — An algorithm is said to be logarithmic if its running time increases logarithmically in proportion to the input size.

O(n) — A linear algorithm's running time increases in direct proportion to the input size.

O(n log n) — A superlinear algorithm is midway between a linear algorithm and a polynomial algorithm.

O(n^c) — A polynomial algorithm grows quickly based on the size of the input.

O(c^n) — An exponential algorithm grows even faster than a polynomial algorithm.

O(n!) — A factorial algorithm grows the fastest and becomes quickly unusable for even small values of n.
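As a concrete instance of the logarithmic class above, binary search is a standard O(log n) algorithm: each comparison halves the remaining search range, so a sorted list of n elements needs at most about log₂(n) steps. A minimal sketch:

```python
# O(log n): binary search halves the remaining range on each iteration.
def binary_search(sorted_items, target):
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # probe the middle of the current range
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return -1                         # target not present
```

Doubling the list length adds only one extra halving step, which is exactly what logarithmic growth means in practice.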

The run times of different orders of algorithms separate rapidly as n gets larger. Consider the run time for each of these algorithm classes with

   n = 10:
   log 10 = 1
   10 = 10
   10 log 10 = 10
   10^2 = 100
   2^10 = 1,024
   10! = 3,628,800

   Now double it to n = 20:
   log 20 = 1.30
   20 = 20
   20 log 20 = 26.02
   20^2 = 400
   2^20 = 1,048,576
   20! = 2.43×10^18
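The figures above can be reproduced directly. A small sketch, evaluating each growth class at n = 10 and n = 20 (logarithms are base 10 here, matching the worked numbers; the function name `growth_table` is just illustrative):

```python
import math

# Evaluate each growth class from the list above at a given input size n.
def growth_table(n):
    return {
        "log n":   math.log10(n),
        "n":       n,
        "n log n": n * math.log10(n),
        "n^2":     n ** 2,
        "2^n":     2 ** n,
        "n!":      math.factorial(n),
    }

for n in (10, 20):
    print(f"n = {n}:")
    for name, value in growth_table(n).items():
        print(f"  {name:8} = {value:,.2f}")
```

Note how doubling n barely moves the logarithmic and linear rows, while the exponential and factorial rows explode.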

Finding an algorithm that works in superlinear time or better can make a huge difference in how well an application performs.


We say f(n) is in O(g(n)) if and only if there exist a constant C and an n0 such that f(n) ≤ C*g(n) for all n greater than n0.
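To make the definition concrete, here is a numeric spot-check for f(n) = 2n² + n and g(n) = n² (the witnesses C = 3 and n0 = 1 are one valid choice, not the only one):

```python
# Check the Big O definition for f(n) = 2n^2 + n and g(n) = n^2:
# with C = 3 and n0 = 1, f(n) <= C*g(n) holds for every tested n > n0,
# since 2n^2 + n <= 3n^2 is equivalent to n <= n^2, true for n >= 1.
def f(n):
    return 2 * n**2 + n

def g(n):
    return n**2

C, n0 = 3, 1
assert all(f(n) <= C * g(n) for n in range(n0 + 1, 10_000))
print("f is in O(n^2) with witnesses C=3, n0=1")
```

A spot-check over a finite range is not a proof, but the algebraic comment shows the inequality holds for all n ≥ 1.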

Now, that's a rather mathematical approach, so I'll give some examples. The simplest case is O(1). This means "constant": no matter how large the input (n) of a program, it will take the same amount of time to finish. An example of a constant-time program is one that takes a list of integers and returns the first one. No matter how long the list is, you can just take the first element and return it right away.
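The constant-time example above, as a minimal sketch (the function name `first` is just illustrative):

```python
# O(1): return the first element of a list.
# The list's length never affects how long this takes --
# it is a single indexing operation regardless of n.
def first(items):
    return items[0]
```

Whether the list holds ten elements or ten million, `first` performs exactly one step.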

The next is linear, O(n). This means that if the input size of your program doubles, so will your execution time. An example of a linear program is computing the sum of a list of integers. You'll have to look at each integer once. So if the input is a list of size n, you'll have to look at n integers.
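The linear-time sum described above, as a minimal sketch (the function name `total` is just illustrative):

```python
# O(n): sum a list by visiting each element exactly once,
# so the work grows in direct proportion to the list's length.
def total(numbers):
    s = 0
    for x in numbers:  # one step per element -> n steps for n elements
        s += x
    return s
```

The single loop body runs once per element, which is the hallmark of linear time: doubling the list doubles the number of loop iterations.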

An intuitive definition could define the order of your program as the relation between the input size and the execution time.
