Understanding Amortized Time and why array inserts are O(1)
I am reading Cracking the Coding Interview, and in the Big O chapter there is an explanation of amortized time. The classic example used is an ArrayList needing to grow. When an array needs to grow, insertion takes O(N) time, since it has to copy N elements to the new array. This is fine.
What I don't understand is why, when the array is doubled in capacity, the amortized time for each insertion is O(1). From everything I understand, inserting into an array is always an O(N) operation. How is it different for amortized time? I'm sure the text is correct; I'm just not grokking the O(1) amortized time concept.
I am answering the question that you seem confused about, rather than what you officially asked.
Your real question is how appending can be an O(1) operation. If space is already allocated for the next element of the array, then appending is just updating the record of how many elements are in and copying the entry into the free slot. This is an O(1) operation.
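Concretely, the non-overflowing case can be sketched like this (Python; the names `append_no_resize`, `data`, and `size` are mine for illustration, standing in for the backing store and the element count):

```python
def append_no_resize(data, size, value):
    """Append when spare capacity exists: one write plus one counter
    bump, independent of how many elements are already stored."""
    data[size] = value       # copy the entry into the free slot
    return size + 1          # updated record of how many elements are in

buf = [None] * 8             # backing store with spare capacity
n = 0
n = append_no_resize(buf, n, "a")
n = append_no_resize(buf, n, "b")
print(buf[:n])               # ['a', 'b']
```

No loop over existing elements is needed, which is why this common case is constant time.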
Appending is only expensive if you overflow the available space. Then you have to allocate a larger region, move the whole array, and free the previous one. This is an O(n) operation. But with capacity doubling we only do it on a 1/n fraction of appends, so on average it still comes out to O(n * 1/n) = O(1).
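To make the averaging concrete, here is a minimal sketch (Python, assuming the capacity-doubling policy described above; the class name `DynamicArray` and the `copies` counter are my own) of a growable array that counts every element copy it performs:

```python
class DynamicArray:
    """Toy growable array that doubles its capacity when full."""

    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity
        self.copies = 0  # total element copies performed so far

    def append(self, value):
        if self.size == self.capacity:
            # Overflow: allocate a region twice as large and move
            # every existing element -- this single append costs O(n).
            new_data = [None] * (self.capacity * 2)
            for i in range(self.size):
                new_data[i] = self.data[i]
                self.copies += 1
            self.data = new_data
            self.capacity *= 2
        # Common case: space exists, so it is one write and a bump.
        self.data[self.size] = value
        self.size += 1
        self.copies += 1

n = 1_000_000
arr = DynamicArray()
for i in range(n):
    arr.append(i)

# n writes, plus resize copies 1 + 2 + 4 + ... < 2n (geometric series),
# so the average stays below 3 copies per append no matter how big n gets.
print(arr.copies / n)
```

The point is that each expensive O(n) resize "pays for" the n cheap appends that preceded it, which is exactly what the amortized O(1) claim means.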
Whether or not the average matters depends on your task. If you're controlling heavy machinery, spending too long on an individual operation can mean you don't get back to the rotating blade soon enough, which can be Officially Bad. If you're generating a web page then all that matters is the total time taken for a sequence of operations, and that will be the number of operations times the average time each.
For most programmers, the average is what matters.