Asymptotic behavior of algorithms and Big O comparison

This question already has an answer here:

  • What is the difference between Θ(n) and O(n)?

  • Consider the optimized bubble sort, which has an asymptotically tight upper bound of O(n²), and an asymptotically tight lower bound of Ω(n) (when the array is already sorted). The arrangement of items determines how many operations the algorithm will have to perform.

    Contrast that with summing a list of integers: you always have to look at each item in the list exactly once. The upper bound is O(n) and the lower bound is Ω(n); no arrangement of the items changes the complexity of the algorithm. When the tight upper and lower bounds coincide like this, we say that the algorithmic complexity is Θ(n).

    If an algorithm is Θ(n), then its complexity will never exceed O(n), and it will never fall below Ω(n). So it can't possibly be O(1) or Θ(1).

    But if an algorithm is only described as O(n), it could still be Ω(1). For example, consider finding the first non-zero value in a list of integers. If the first item in the list is non-zero, you don't have to look at any other numbers; but if the list is all zeroes, you end up looking at every one. So the upper bound is O(n) and the lower bound is Ω(1). (All three algorithms are sketched in code below.)
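
    To make this concrete, here is a minimal Python sketch of the three algorithms discussed above (the function names are my own, chosen for illustration):

        def optimized_bubble_sort(items):
            # Worst case O(n^2); best case Omega(n): if `items` is already
            # sorted, the early-exit flag stops after a single pass.
            n = len(items)
            for i in range(n - 1):
                swapped = False
                for j in range(n - 1 - i):
                    if items[j] > items[j + 1]:
                        items[j], items[j + 1] = items[j + 1], items[j]
                        swapped = True
                if not swapped:
                    break
            return items

        def sum_list(items):
            # Theta(n): every item is visited exactly once,
            # no matter how the items are arranged.
            total = 0
            for x in items:
                total += x
            return total

        def first_nonzero(items):
            # Upper bound O(n) (the list may be all zeroes); lower bound
            # Omega(1) (the very first item may be non-zero).
            for x in items:
                if x != 0:
                    return x
            return None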


  • If you know that a task requires exactly one week ( = Θ(1) ), you can surely say that you can do it in at most a year ( = O(n) ).

  • If you know that a task requires at most a year ( = O(n) ), you cannot be sure that you'll do it in a week ( = Θ(1) ).

  • If you know that a task requires exactly one year ( = Θ(n) ), you could also say that it requires at most a year ( = O(n) ), and you're sure it cannot be done in a week ( ≠ Θ(1) ). These implications are written out formally below.
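
    In symbols, writing T(n) for the time the task takes, the analogy says (a minimal formalization added here, not part of the original answer):

        T(n) \in \Theta(1) \;\Rightarrow\; T(n) \in O(n)
        T(n) \in O(n) \;\not\Rightarrow\; T(n) \in \Theta(1)
        T(n) \in \Theta(n) \;\Rightarrow\; T(n) \in O(n) \quad\text{and}\quad T(n) \notin \Theta(1)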


  • The example basically tries to cover two ideas:

  • If an algorithm is Θ(f(n)), it means it is both Ω(f(n)) and O(f(n)): it is asymptotically neither better nor worse than f(n); it has the same asymptotic behavior.
  • Functions that are O(1) can be seen as a subset of the functions that are O(n). This can be generalized, but there is no need to get too formal here. It means that if an algorithm can never do worse than a constant, it can certainly never do worse than a linear function.
  • So the idea is to break Θ up into the O and Ω notations, and then identify which class is a subset of which.

    Also, it's nice to notice that ANY algorithm (with non-null complexity, at least) is Ω(1): an algorithm can never do better than constant time. The definitions below make all of these relationships precise.
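
    For reference, these are the standard textbook definitions behind the statements above (c is a positive constant and n_0 a threshold, both existentially quantified):

        f(n) \in O(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \le c \cdot g(n) \text{ for all } n \ge n_0
        f(n) \in \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \ge c \cdot g(n) \text{ for all } n \ge n_0
        f(n) \in \Theta(g(n)) \iff f(n) \in O(g(n)) \text{ and } f(n) \in \Omega(g(n))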

    Example: "Mammals are humans."

    The answer: no, not in general. All humans are mammals, though; humans are a subset of mammals.

    I tried to think of another example, but everything was either too technical or not clear enough, so I'll leave this not-so-well-drawn but rather clear graph here. I found it by googling "o omega theta" and looking at the image results; there are a few other good images too.

    [Figure: growth curves of several common complexity functions, found via an image search for "o omega theta"]

    Basically, for each function f in this graph: anything above it is Ω(f(n)), because it can never do better than f (it never drops below f as n increases); anything below it is O(f(n)), because it can never do worse than f (it never rises above f as n increases).

    The graph doesn't show well how insignificant constants become asymptotically; there are other graphs that show that better. I put this one here because it shows lots of functions at once.
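
    To see that ordering numerically rather than graphically, here is a small Python sketch that tabulates a few of the usual functions (my choice of functions is an assumption, matching the curves such graphs typically plot):

        import math

        # Growth functions commonly plotted in O/Omega/Theta charts,
        # listed from slowest- to fastest-growing. (Assumed selection.)
        functions = [
            ("1",       lambda n: 1),
            ("log n",   lambda n: math.log2(n)),
            ("n",       lambda n: n),
            ("n log n", lambda n: n * math.log2(n)),
            ("n^2",     lambda n: n ** 2),
            ("2^n",     lambda n: 2 ** n),
        ]

        # As n grows, each row dominates the rows above it: each f is
        # Omega(g) for every g listed before it, and O(g) for every g after.
        for n in (4, 16, 64):
            print(f"n = {n}")
            for name, f in functions:
                print(f"  {name:8s} -> {f(n):>26,.0f}")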
