Practical difference between O(n) and O(1 + n)?
Isn't O(n) an improvement over O(1 + n)?
This is my conception of the difference:
O(n):
for i = 0 to n do print i
O(1 + n):
a = 1
for i = 0 to n do print i + a
... which would just reduce to O(n), right?
If the target time complexity is O(1 + n), but I have a solution in O(n), does this mean I'm doing something wrong?
Thanks.
O(1+n) and O(n) are mathematically identical, as you can prove straightforwardly either from the formal definition or by using the standard rule that O(a(n) + b(n)) equals the larger of O(a(n)) and O(b(n)).
In practice, of course, doing n+1 things will usually take longer than doing n things (depending on compiler optimizations and so on). But big-O notation is the wrong tool for talking about that kind of difference, because it deliberately throws such constant-factor differences away.
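To make that concrete, here is a minimal sketch in Python (the language and function names are my own, purely for illustration) of the two loops from the question. The extra constant-time assignment shifts the measured time by a fixed amount; it doesn't change the growth rate:

def count_up(n):
    # n iterations of constant-time work: O(n)
    for i in range(n):
        print(i)

def count_up_with_setup(n):
    # one constant-time setup step, then n iterations: O(1 + n) = O(n)
    a = 1
    for i in range(n):
        print(i + a)

Both running times have the form c1*n + c2 for some constants c1 and c2, and big-O discards exactly those constants, so the two functions fall into the same class.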
It's not an improvement, because big-O doesn't describe the exact running time of your algorithm but rather its growth rate. Big-O therefore describes a class of functions, not a single function. O(n^2) doesn't mean that your algorithm will run in 4 operations on an input of size 2; it means that if you were to plot the running time of your algorithm as a function of n, it would be asymptotically bounded above by c*n^2 starting at some n0. This is useful because it tells us how much slower the algorithm gets as the input grows, but it doesn't tell us exactly how fast it is. Why the constant c? Because, as I said, we don't care about exact numbers but about the shape of the function, and multiplying by a constant factor doesn't change that shape.
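For a concrete (purely made-up) example: suppose an algorithm's measured running time is T(n) = 3n^2 + 5n. For every n >= 5 we have 5n <= n^2, so T(n) <= 3n^2 + n^2 = 4n^2. Taking c = 4 and n0 = 5 therefore shows that T(n) is in O(n^2), even though T(n) never equals n^2 exactly.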
Isn't O(n) an improvement over O(1 + n)?
No, it is not. Asymptotically these two are identical. In fact, O(n) is identical to O(n + k) for any constant k.
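You can check this directly from the definition. In the question's case k = 1: n + 1 <= 2n for every n >= 1, so c = 2 and n0 = 1 show that anything in O(1 + n) is also in O(n); and since n <= 1 + n for all n, the reverse holds as well. The same argument works for any nonnegative constant k with c = 2 and n0 = k.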