What is the difference between the different asymptotic notations?

I am really confused about asymptotic notations. As far as I know, Big-O notation is for the worst case, Omega is for the best case, and Theta is for the average case. However, I have always seen Big O being used everywhere, even for the best case. For example, in the following link, see the table where the time complexities of different sorting algorithms are listed:

https://en.wikipedia.org/wiki/Best,_worst_and_average_case

Everywhere in the table, Big O notation is used, irrespective of whether it is the best case, worst case or average case. Then what is the use of the other two notations, and where do we use them?


As far as I know, Big-O notation is for the worst case, omega is for the best case and theta is for the average case.

They aren't. Big O (omicron) is for an (asymptotic) upper bound, Omega is for a lower bound, and Theta is for a tight bound, which is both an upper and a lower bound. If the known lower and upper bounds of an algorithm differ, then its complexity cannot be expressed with Theta notation.
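As an illustration (a sketch only, not part of the formal definitions), these bounds can be checked numerically for a concrete running-time function. The function f(n) = 2n + 10 and the witness constants below are chosen purely for this example:

```python
# Sketch: f(n) = 2n + 10 is Theta(n), shown by exhibiting witness
# constants for the upper- and lower-bound definitions.

def f(n):
    return 2 * n + 10

# Upper bound: f(n) <= c_upper * n for all n >= n0  =>  f is O(n)
c_upper, n0 = 3, 10
assert all(f(n) <= c_upper * n for n in range(n0, 10_000))

# Lower bound: f(n) >= c_lower * n for all n >= n0  =>  f is Omega(n)
c_lower = 2
assert all(f(n) >= c_lower * n for n in range(n0, 10_000))

# Both hold beyond the same n0, so f is Theta(n).
print("f(n) = 2n + 10 is O(n), Omega(n), and therefore Theta(n)")
```

The same function is both O(n) and Ω(n), so here a Θ(n) statement is possible; when the best provable upper and lower bounds differ, it is not.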

The concepts of upper, lower, and tight bounds are orthogonal to the concepts of best, average, and worst case. You can analyze the upper bound of each case, and you can analyze different bounds of the worst case (and any other combination of the above).

Asymptotic bounds are always in relation to the set of variables in the expression. For example, O(n) is in relation to n. The best, average and worst cases emerge from everything other than n. For example, if n is the number of elements, then the different cases might emerge from the order of the elements, the number of unique elements, or the distribution of values.
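To make that concrete, here is a small sketch: a linear search always receives an array of the same size n, but the number of comparisons it performs depends on where the target happens to sit, which is exactly what separates the best case from the worst case:

```python
# Sketch: same n, different cases. A linear search over n elements does
# 1 comparison in the best case (target is first) and n in the worst
# case (target is last or absent); the case is determined by the
# arrangement of the input, not by n itself.

def linear_search(items, target):
    comparisons = 0
    for index, value in enumerate(items):
        comparisons += 1
        if value == target:
            return index, comparisons
    return -1, comparisons

data = list(range(100))             # n = 100 in every call below

_, best = linear_search(data, 0)    # target is the first element
_, worst = linear_search(data, 99)  # target is the last element

print(best)   # 1   comparison:  best case
print(worst)  # 100 comparisons: worst case
```

Both runs are O(n) as upper-bound statements, yet the best case is also O(1): the bound you state and the case you analyze are independent choices.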

However, I have always seen Big O being used everywhere, even for best case.

That's because the upper bound is almost always the most important and interesting one when describing an algorithm. We rarely care about the lower bound, just as we rarely care about the best case.

The lower bound is sometimes useful in describing a problem that has been proven to have a particular complexity. For example, it is proven that the worst-case complexity of every general comparison sorting algorithm is Ω(n log n). If such a sorting algorithm is also O(n log n), then by definition it is also Θ(n log n).
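For instance (a sketch, using merge sort as the comparison sort): counting the comparisons merge sort actually performs shows they stay below n⌈log₂ n⌉, consistent with its O(n log n) upper bound. The Ω(n log n) lower bound, by contrast, is a theorem about the sorting *problem* and cannot be demonstrated by running one algorithm:

```python
import math
import random

# Sketch: count element comparisons in merge sort and check they stay
# within the n * ceil(log2(n)) upper bound.

def merge_sort(items, counter):
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid], counter)
    right = merge_sort(items[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1                  # one element-to-element comparison
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

random.seed(0)
for n in (16, 256, 1024):
    data = random.sample(range(10 * n), n)
    counter = [0]
    assert merge_sort(data, counter) == sorted(data)
    bound = n * math.ceil(math.log2(n))
    assert counter[0] <= bound
    print(n, counter[0], "<=", bound)
```

Since merge sort's comparison count is upper-bounded by n⌈log₂ n⌉ and the problem is lower-bounded by Ω(n log n), merge sort's worst case is Θ(n log n).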


O(...) basically means "not (much) slower than ...".
It can be used for all three cases ("the worst case is not slower than", "the best case is not slower than", and so on).

Omega is the opposite: you can say that something can't be much faster than .... Again, it can be used with all three cases. Compared to O(...), it's not as important, because telling someone "I'm certain my program is not faster than yours" is nothing to be proud of.

Theta is a combination: it means "(more or less) as fast as ...", not just slower/faster.


Big O is for an upper bound, not for the worst case! There is no notation specifically for the worst case or best case. The examples you are talking about all use Big O because they are all upper-bounded by the given value. I suggest you take another look at the book from which you learned the basics, because this is immensely important to understand :)

EDIT: Answering your doubt: usually we are concerned with our at-most performance, i.e. when we say our algorithm performs in O(log n) in the best-case scenario, we know that its performance will not be worse than logarithmic time in that scenario. It is the upper bound that we usually seek to reduce, and hence we usually mention Big O to compare algorithms (not to say that we never mention the other two).
