Order of magnitude using Big O

This question already has an answer here:

  • Big O, how do you calculate/approximate it?

  • These notations (big O, big omega, big theta) simply describe how "difficult" (complex) an algorithm will be asymptotically, as the input gets bigger and bigger.

    For big O, given two functions f(x) and g(x) with f(x) = O(g(x)), you can say that you are able to find a point x0 (and a constant c) from which c*g(x) will always be at least as big as f(x). That is why the definition contains "asymptotically": the two functions may behave in any way at the beginning (for example f(x) > g(x) for the first few x), but from that single point on, c*g(x) stays superior (c*g(x) >= f(x)). So you are interested in the behavior in the long run, not for small numbers only. Big-O notation is sometimes called an upper bound because it describes the worst possible scenario: the algorithm will never be asymptotically more difficult than this function.
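
    A quick worked instance of that definition (the numbers are my own, for illustration):

    f(x) = 2x + 10 and g(x) = x
    Take c = 3: 2x + 10 <= 3x whenever x >= 10,
    so f(x) = O(x), even though f(x) > g(x) for every single x.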

    That is the "mathematical" part. When it comes to practice you usually ask: How many times the algorithm will have to process something? How many operations will be done?

    For your simple loop it is easy, because as N grows, the number of operations grows linearly (like a simple linear function), so the complexity is O(N). For N=10 you will have to do 10 operations, for N=100 => 100 operations, for N=1000 => 1000 operations... So the growth is truly linear.
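
    (The loop from the question is not reproduced in this answer; a representative O(N) loop would look like the following sketch.)

    for (int i = 0; i < N; i++) {
       // one unit of work, performed once per element => N operations
    }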

    I'll provide a few more examples:

    for (int i = 0; i < N; i++) {
       if (i == randomNumber()) { // randomNumber() stands for some unpredictable value
          // do something...
       }
    }
    

    Here it seems that the complexity should be lower, because I added a condition to the loop, so there is a chance that the "do something" operations will run fewer times. But we don't know how often the condition will hold; it may hold every time, so using big O (the worst case) we again have to say that the complexity is O(N).
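
    The same worst-case reasoning applies to linear search (a standard example of my own, not from the question): the loop may exit early, but big O has to cover the run where it does not.

    static int indexOf(int[] a, int target) {
       for (int i = 0; i < a.length; i++) {
          if (a[i] == target) {
             return i; // best case: found at the first element
          }
       }
       return -1; // worst case: all N elements inspected => O(N)
    }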

    Another example:

    for (int i = 0; i < N; i++) {
       for (int j = 0; j < N; j++) { // the inner loop needs its own variable
          // do something
       }
    }
    

    Here, as N gets bigger and bigger, the number of operations grows more rapidly. For N=10 you will have to do 10x10 operations, for N=100 => 100x100 operations, for N=1000 => 1000x1000 operations. You can see that the growth is no longer linear, it is N x N, so we have O(N x N), i.e. O(N²).

    For the last example I will use the idea of a full binary tree. I hope you know what a binary tree is. If you have a reference to the root and you want to traverse the tree down to its left-most leaf (from top to bottom), how many operations will you have to do if the tree has N nodes? The algorithm would be something like:

    Node actual = root;
    while (actual.left != null) {
       actual = actual.left; // descend one level per iteration
    }
    // actual now holds the left-most leaf
    

    How many operations (how many loop iterations) will you have to do? Well, that depends on the depth of the tree, right? And how is the depth of a full binary tree defined? It is roughly log(N), with logarithm base 2: a full binary tree of depth d has 2^d - 1 nodes, so d ≈ log2(N). So here the complexity is O(log(N)). Generally we don't care about the base of the logarithm; what we care about is the shape of the function (linear, quadratic, logarithmic...).
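
    A quick sanity check with worked numbers (my own, for illustration):

    N = 15   => depth = log2(15 + 1) = 4
    N = 1023 => depth = log2(1024)  = 10

    Doubling the number of nodes adds only one more step down the tree, which is exactly the O(log(N)) behavior.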


    Your example is of order

    O(N)

    where N = the number of elements, and a comparable computation is performed on each, thus:

    for (int i=0; i < N; i++) {
        // some process performed N times
    }
    

    Big-O notation is probably easier than you think; in everyday code you will find examples of O(N) in loops, list iterations, searches, and any other process that does work once per element of a set. It is the abstraction that is unfamiliar at first: O(N) means "some unit of work, repeated N times". This "something" can be an incrementing counter, as in your example, or it can be a lengthy and resource-intensive computation. Most of the time in algorithm design the big O, or complexity, is more important than the unit of work; this is especially relevant as N becomes large. The word "limiting" or "asymptotic" is mathematically significant: it means that an algorithm of lesser complexity will always beat one of greater complexity, no matter how heavy the unit of work, given that N is large enough ("as N grows").

    Another example, to understand the general idea:

    for (int i = 0; i < N; i++) {
       for (int j = 0; j < N; j++) {
          // process here N x N times
       }
    }
    

    Here the complexity is

    O(N²)

    For example, if N=10, the second "algorithm" will take 10 times longer than the first, because 10x10 = 100 (ten times larger than 10). If you consider what happens when N equals, say, a million or a billion, you can work out that the quadratic version takes a million or a billion times longer. So if you can find a way to do in O(N) what a super-computer does in O(N²), you should be able to beat it with your old 386, pocket watch, or other old tool, once N is large enough.
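
    To make that concrete, here is a small sketch with a hypothetical cost model (the 1000-unit figure is invented for illustration): even if the O(N) algorithm's unit of work is 1000 times more expensive, it overtakes the O(N²) one as soon as N passes 1000.

    public class Crossover {
       public static void main(String[] args) {
          for (long n : new long[] {10, 100, 1_000, 10_000, 100_000}) {
             long linearCost = 1_000 * n; // O(N), heavy unit of work
             long quadraticCost = n * n;  // O(N*N), light unit of work
             System.out.println("N=" + n + "  O(N)=" + linearCost
                   + "  O(N*N)=" + quadraticCost);
          }
       }
    }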
