Big-Ω (Big-Omega) notation

I was doing some reading on logarithms and the rate of growth of the running time of algorithms.

I have, however, a problem understanding the Big-Ω (Big-Omega) notation.

I know that we use it for 'asymptotic lower bounds', and that we can express the idea that an algorithm takes at least a certain amount of time.

Consider this example:

var a = [1,2,3,4,5,6,7,8,9,10];

Somebody chooses a number. I write a program that tries to guess this number using a linear search (1, 2, 3, 4... until it guesses the number).
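
A minimal sketch of that guessing program might look like this (the function name guessNumber and its return value are my own, not part of the original question):

function guessNumber(a, chosen) {
    // Check each element in order (1, 2, 3, ...) until the chosen number is found.
    for (var i = 0; i < a.length; i++) {
        if (a[i] === chosen) {
            return i + 1; // number of guesses it took
        }
    }
    return -1; // the chosen number is not in the array
}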

I can say that the running time of the algorithm would be a function of the size of its input, so these are true (n is the number of elements in the array):

  • Θ(n) (Big-Theta notation: an asymptotically tight bound)
  • O(n) (Big-O notation: an asymptotic upper bound)
  • When it comes to Big-Ω, my understanding is that the algorithm's running time would be Ω(1), since one guess is the least number of guesses needed to find the chosen number (if, for example, the player chose 1, the first item in the array).

    I reckon this because of the definition I found on Khan Academy:

    Sometimes, we want to say that an algorithm takes at least a certain amount of time, without providing an upper bound. We use big-Ω notation; that's the Greek letter "omega."

    Am I right to say that this algorithm's running time is Ω(1)? Is it also true that it is Ω(n), and if yes, why?
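
    To make the best-case/worst-case distinction concrete, here is a small illustrative sketch (countGuesses is my own name, not from the question) that counts how many guesses the linear search makes for particular inputs:

    function countGuesses(a, chosen) {
        var guesses = 0;
        // Guess 1, 2, 3, ... in array order until the chosen number is hit.
        for (var i = 0; i < a.length; i++) {
            guesses++;
            if (a[i] === chosen) { break; }
        }
        return guesses;
    }

    var a = [1,2,3,4,5,6,7,8,9,10];
    console.log(countGuesses(a, 1));  // 1  -> best case, the source of the Ω(1) intuition
    console.log(countGuesses(a, 10)); // 10 -> worst case, n guesses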


    Big-O, Theta, and Omega notation all refer to how a solution scales asymptotically as the size of the problem tends to infinity; however, they should really be prefaced with what you are measuring.

    Usually, when one talks about O(n) one means that the worst-case complexity is O(n); however, it is sometimes used for typical running times, particularly for heuristics or for algorithms that have an element of randomness or are not strictly guaranteed to converge at all.

    So here, we are presumably talking about the worst-case complexity, which is Theta(n); since it is Theta(n), it is also O(n) and Omega(n).

    One way to prove a lower bound when it is unknown is to identify the easiest case for the algorithm. Here the best case takes O(1) time, so we can say that the algorithm takes at least Omega(1) and at most O(n), with Theta unknown, and that is correct usage. But the aim is to give the highest bound for Omega that is still true, and the lowest bound for O that is still true. Here Omega(n) for the worst case is obvious, so it is better to say Omega(n) than Omega(1), just as it is better to say O(n) rather than O(n^2).
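
    For completeness, the standard formal definition (an assumption on my part; the answer only uses it implicitly): f(n) is Ω(g(n)) if there exist constants c > 0 and n0 such that f(n) ≥ c·g(n) for all n ≥ n0. Applying it to the worst-case running time of the linear search, T(n) = n comparisons, gives T(n) ≥ 1·n for all n ≥ 1, so the worst case is Ω(n); it is also Ω(1), but Ω(n) is the tighter bound, which is why it is the one worth stating.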
