Why is the time complexity O(log n)?
If you write n as a binary number, say 11011011, the successive values it takes before the loop exits are:

11011011
1101101
110110
11011
1101
110
11
1
Therefore, the number of iterations (the final value of k) equals the initial number of bits of n. From

2^(k-1) ≤ n < 2^k,

which is the condition for n to have k bits, you get

k-1 ≤ log2(n) < k.
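The loop from the question isn't reproduced above, so here is a minimal Python sketch of such a halving loop (assuming it counts divisions until n reaches 0), showing that the count matches the bit length:

```python
def iterations(n):
    """Halve n (integer division) until it reaches 0, counting the steps."""
    k = 0
    while n > 0:
        n //= 2
        k += 1
    return k

n = 0b11011011  # 219, an 8-bit number
print(iterations(n))   # 8
print(n.bit_length())  # 8: k equals the number of bits of n
```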
The reason is that the operations in the loop, division and addition, are each O(1), so the running time is proportional to the number of iterations. Now consider the number of iterations for n = 2^m, for some m, and compare it with log_2(n). Generalising to n that are not powers of 2 is easy: denoting by i(n) the number of iterations of the loop for input n, for 2^m < n < 2^(m+1) we have i(n) = i(2^m), thanks to integer division.
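A quick numerical check of that claim, assuming the loop halves n until it drops to 1 (the original code isn't shown):

```python
def i(n):
    """Number of loop iterations: halve n (integer division) until it reaches 1."""
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

m = 4
# Every n with 2^m <= n < 2^(m+1) takes the same number of iterations
# as the power of two 2^m, because integer division discards low bits.
assert all(i(n) == i(2**m) == m for n in range(2**m, 2**(m + 1)))
```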
You repeatedly divide n by two until you reach 1 or less. Your stop condition is therefore (roughly) expressed by the equation n/2^k = 1 ⟺ n = 2^k ⟺ log_2 n = k (where / is integer division, as in your code). The roughness is allowed because we are dealing with O( ), where constant factors disappear anyway.
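To see numerically that the constant slack is harmless, here is a hypothetical version of such a loop compared against the exact logarithm (again assuming the loop stops once n reaches 1):

```python
import math

def k_for(n):
    """Iterations of the halving loop: divide by 2 until n is 1."""
    k = 0
    while n > 1:
        n //= 2
        k += 1
    return k

for n in (100, 1000, 10**6):
    # k_for(n) equals floor(log2(n)), so it differs from log2(n) by
    # less than 1; that difference vanishes inside O(log n).
    assert abs(k_for(n) - math.log2(n)) < 1
```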