Logarithms in Computer Science for Big O Notation?

I have always had this question in my head and have never been able to connect these two concepts, so I am looking for some help understanding logarithms in computer science with respect to Big-O notation and algorithmic time complexity. I understand logarithms as a math concept that answers the question, "to what power do I need to raise this base to get X?". For example, log2(16) = 4 tells us that we need to raise 2 to the 4th power to get 16. I also have a memorization-level understanding that O(log n) algorithms are faster than O(n) algorithms and much faster than slower ones such as exponential-time algorithms, and that an example of an O(log n) algorithm is searching a balanced binary search tree.

My question is a little hard to state exactly, but I think it boils down to this: why is searching a balanced BST logarithmic, what makes it logarithmic, and how do I relate the mathematical logarithm to the CS use of the term? As a follow-up: what is the difference between O(n log n) and O(log n)?

I know that is not the clearest question in the world, but if someone could help me connect these two concepts it would clear up a lot of confusion for me and take me past the point of just memorization (which I generally hate).


When you are calculating Big O notation, you are describing how the number of steps an algorithm takes grows as the problem size grows.

For example, when performing a linear search of a list, the worst possible case is that the element is either at the last index or not in the list at all, meaning your search will perform N steps, where N is the number of elements in the list. That is O(N).
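A minimal sketch of that worst case in Python (the function name `linear_search` is just for illustration):

```python
def linear_search(items, target):
    """Scan left to right; the worst case checks all N elements."""
    for index, value in enumerate(items):
        if value == target:
            return index  # found after index + 1 steps
    return -1  # target absent: we still paid N steps

# Worst cases: target is last, or missing entirely.
print(linear_search([4, 8, 15, 16, 23, 42], 42))  # 5
print(linear_search([4, 8, 15, 16, 23, 42], 99))  # -1
```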

An algorithm that will always take the same number of steps to complete regardless of problem size is O(1).
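For contrast, a tiny sketch (the function name is mine):

```python
def first_element(items):
    """Indexing a Python list is O(1): one step whether the
    list holds ten elements or ten million."""
    return items[0]

print(first_element(list(range(10))))          # 0
print(first_element(list(range(10_000_000))))  # 0, same single step
```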

Logarithms come into play when you cut the problem size down as you move through an algorithm. Searching a balanced BST works like binary search on a sorted list: you start in the middle. If the element you are searching for is smaller, you only look at the first half of the list; if it is larger, you only look at the second half. After just one step, you have cut your problem size in half. You keep halving the list until you either find the element or cannot halve any further. (Note that binary search assumes the list is sorted.)

Suppose we are looking for 0 in the list below (a balanced BST corresponds to a sorted list): [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]

We first look in the middle: 7. Since 0 is less than 7, we look in the first half of the list: [0,1,2,3,4,5,6]

We look in the middle of this list: 3. Since 0 is less than 3, our working list is now: [0,1,2]

So we look at 1. 0 is less than 1, so our list is now [0].

Given we have a working list of just one element, we are at the worst case. We either found the element, or it does not exist in the list. We were able to determine this in just four steps, looking at 7, 3, 1, and 0.
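A minimal sketch of that halving process (the name `binary_search_steps` is made up for illustration). It records every element it inspects, so searching for 0 in the 16-element list visits 7, 3, 1, 0, exactly as above:

```python
def binary_search_steps(items, target):
    """Binary search over a sorted list.
    Returns (index or -1, list of elements inspected)."""
    low, high = 0, len(items) - 1
    inspected = []
    while low <= high:
        mid = (low + high) // 2
        inspected.append(items[mid])
        if items[mid] == target:
            return mid, inspected
        elif target < items[mid]:
            high = mid - 1  # discard the upper half
        else:
            low = mid + 1   # discard the lower half
    return -1, inspected

index, steps = binary_search_steps(list(range(16)), 0)
print(index, steps)  # 0 [7, 3, 1, 0]
```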

The problem size is 16 (the number of elements in the list), which we represent as N. In the worst case, we perform 4 comparisons (2^4 = 16, or: log base 2 of 16 is 4).

If we looked at a problem size of 32, we would perform only 5 comparisons (2^5 = 32, or: log base 2 of 32 is 5).
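Reusing the `binary_search_steps` sketch above, you can watch the step count grow by just one each time the list doubles:

```python
for n in (16, 32, 64, 1024):
    _, steps = binary_search_steps(list(range(n)), 0)  # 0 is a worst-case target here
    print(n, len(steps))
# 16 -> 4, 32 -> 5, 64 -> 6, 1024 -> 10
```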

Therefore, the Big O for searching a balanced BST is O(log N). (Note that logarithms in CS are conventionally base 2; in Big O the base does not actually matter, because changing the base only multiplies the result by a constant factor.)
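A quick numeric check of that base remark, using the standard `math` module:

```python
import math

n = 1_000_000
print(math.log2(n))               # ~19.93
print(math.log(n) / math.log(2))  # same value: bases differ only by a constant factor
```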

For O(N log N), the worst-case work is the problem size times its logarithm. Merge sort and heap sort are O(N log N); quicksort is O(N log N) on average (its worst case is O(N^2)). Insertion sort, by contrast, is O(N^2).
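To see where the N log N comes from, here is a minimal merge sort sketch: the input is halved about log2(N) times, and each level of halving does O(N) work merging:

```python
def merge_sort(items):
    """Recursively halve (about log2(N) levels), then merge (O(N) work per level)."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([9, 1, 8, 2, 7, 3]))  # [1, 2, 3, 7, 8, 9]
```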


In computer science, the big O notation indicates how fast the number of operations of an algorithm grows with a given parameter n of the problem statement. In a balanced binary search tree, n is the number of nodes in the tree. As you search through the tree, the algorithm makes a decision at each depth level. Since the number of nodes doubles at each level, a full tree of depth d has n = 2^d - 1 nodes, so d = log2(n + 1). The number of decisions the algorithm makes is thus at most d = log2(n + 1), one per level.

This shows that the complexity of the algorithm is of the order O(log(n)), meaning that the number of operations grows like log(n). As a function, log grows more slowly than n; that is, as n becomes large, log(n) is much smaller than n. So for large n, an algorithm with time complexity O(log(n)) will be faster than one with complexity O(n), which is itself faster than O(n log(n)).
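To make that concrete, here is a minimal sketch of search on an explicit tree (the `Node` class and `bst_search` are illustrative names): one comparison per level, so at most d comparisons in a balanced tree of depth d:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def bst_search(node, target):
    """Descend one level per comparison: O(depth) = O(log n) when balanced."""
    while node is not None:
        if target == node.value:
            return True
        node = node.left if target < node.value else node.right
    return False

# A balanced tree of 7 nodes (depth 3): any search takes at most 3 comparisons.
root = Node(4, Node(2, Node(1), Node(3)), Node(6, Node(5), Node(7)))
print(bst_search(root, 5))  # True
print(bst_search(root, 9))  # False
```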


A balanced BST of height h has up to 2^h leaves; put another way, h is the number of times the tree branches. When you search, you make one check at each branching level, so the running time is proportional to the height, which is logarithmic in the number of nodes. (The logarithm function is the inverse of the exponential function.)
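To see numerically that the logarithm inverts the tree's exponential growth:

```python
import math

for height in range(1, 6):
    leaves = 2 ** height  # a full tree of this height has 2**height leaves
    print(height, leaves, math.log2(leaves))  # log2 recovers the height
```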
