What exactly is the difference between Big-O and Omega notation?

I know that Big-O is for an upper bound and Omega is for a lower bound, but in most places I see only Big-O notation.

For example, in linear search the worst case is O(n). However, the number being searched for can be found in the very first position, so only one element is examined. That's the best-case scenario, and we can write it as Ω(1). But I have seen Big-O used for the best case in many places too, and I don't know why.

I know the theoretical difference between both the notations. However, I am unable to understand it practically.


There are three important asymptotic notations:

  • O, Big-O = a loose upper bound
  • Ω, Omega = a loose lower bound
  • Θ, Theta = a tight upper and lower bound

    It's important to realize that O and Ω are loose bounds: they aren't necessarily optimal. There are an infinite number of valid upper bounds and an infinite number of valid lower bounds for any algorithm. (The formal definitions are sketched below.)
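    In standard notation, those three definitions can be stated with a constant c and a threshold n₀ (a compact restatement, not part of the original answer):

```latex
\begin{align*}
T(n) \in O(f(n))      &\iff \exists\, c > 0,\ n_0 : T(n) \le c \cdot f(n) \text{ for all } n \ge n_0 \\
T(n) \in \Omega(f(n)) &\iff \exists\, c > 0,\ n_0 : T(n) \ge c \cdot f(n) \text{ for all } n \ge n_0 \\
T(n) \in \Theta(f(n)) &\iff T(n) \in O(f(n)) \text{ and } T(n) \in \Omega(f(n))
\end{align*}
```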

    For example, linear search can be said to have an upper bound of O(n²) and a lower bound of Ω(1). We know that its actual complexity lies somewhere between these bounds. Usually, though, it isn't helpful to state non-optimal bounds: they are correct but not useful.

    What we usually want to know is the smallest upper bound and the largest lower bound. If you say linear search is O(n²), I can improve your answer and say that not only is it O(n²), it's also O(n). This is a smaller upper bound, and hence more useful.
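    As a concrete check, take linear search's worst-case cost T(n) = n. Both of the following bounds are valid, but only the second is tight:

```latex
\begin{gather*}
n \le 1 \cdot n^2 \text{ for all } n \ge 1 \;\Rightarrow\; T(n) \in O(n^2) \quad \text{(correct but loose)} \\
n \le 1 \cdot n \text{ for all } n \ge 1 \;\Rightarrow\; T(n) \in O(n) \quad \text{(correct and tight)}
\end{gather*}
```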

    Not only this, but you have to decide whether you're looking at the worst case, average case, or best case. Unless stated otherwise, complexity analysis typically looks at the worst case. So while you might stumble upon the item you're looking for in one step, the worst case for linear search is having to look at all n items. The best-case complexity is O(1), but the worst case is O(n).
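    A minimal sketch of that difference in Python, counting comparisons instead of measuring time (the linear_search helper here is illustrative, not from the original answer):

```python
def linear_search(items, target):
    """Return (index, comparisons) if target is found, else (-1, comparisons)."""
    comparisons = 0
    for i, item in enumerate(items):
        comparisons += 1
        if item == target:
            return i, comparisons
    return -1, comparisons

data = list(range(1, 11))        # n = 10 items: [1, 2, ..., 10]

# Best case: the target sits in the first slot -> 1 comparison, O(1).
print(linear_search(data, 1))    # (0, 1)

# Worst case: the target sits in the last slot (or is absent) -> n comparisons, O(n).
print(linear_search(data, 10))   # (9, 10)
```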

    If you can prove that the smallest upper bound and the largest lower bound are the same, then you now know Θ. Linear search, in the worst case, is bounded by O(n) from above and Ω(n) from below, and hence its worst-case complexity is exactly Θ(n).
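    Spelled out for linear search's worst case, where the cost is T(n) = n:

```latex
T(n) \le 1 \cdot n \Rightarrow T(n) \in O(n), \qquad
T(n) \ge 1 \cdot n \Rightarrow T(n) \in \Omega(n), \qquad
\text{hence } T(n) \in \Theta(n)
```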

    Most people who say an algorithm is O(whatever) should really be using Θ(whatever) instead. If you know the bound is tight from above and below, then Θ is more precise than O. But most people don't remember these distinctions, so Big-O has become a lazy stand-in for Theta.


    I know the theoretical difference between both the notations. However, I am unable to understand it practically.

    Big O is more widely used when describing algorithms because it's generally more useful to know how badly an algorithm can perform, so engineers can prepare for any situation.

    Big Omega, however, generally isn't as useful as Big O. While it's nice to know how fast an algorithm can run at best, engineers designing software need to know the worst-case scenarios, again, to prepare for any situation.


    When people say the running time is O(f(n)), what they mean is that the running time belongs to the class of functions that are O(f(n)).

    That is, that T(n) ≤ c·f(n) for some constant c > 0 and every n ≥ n₀.
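    For instance, a running time of T(n) = 3n + 5 is O(n), because a suitable pair of constants exists:

```latex
3n + 5 \le 4n \text{ for all } n \ge 5, \quad \text{so } T(n) \in O(n) \text{ with } c = 4,\ n_0 = 5
```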

    In your specific question about linear search, the worst case will indeed take time that is O(n).

    The best case takes constant time, T(n) = k. And k belongs to the class O(1), since k ≤ c·1 for any constant c ≥ k. So I could say that, in the best case, the running time is O(1).

    That being said, people use O(f(N)) loosely to mean the running time is 'roughly proportional' to f(N).
