What exactly does O(n) space complexity mean and how inefficient is it?
I have a high-level understanding of what O(n) means in space. It means something along the lines of: for an algorithm with input n, the additional storage in memory allocated by the algorithm will increase proportionally to n.
So if you had an algorithm that took as input a number n, and created an array of size 2n and filled it with all 0s, the time complexity would be O(n) and the space complexity would be O(n), because you are creating an array (additional storage) whose size is proportional to the input. Is this understanding correct?
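Something like this, for example (just an illustrative sketch of what I mean):

```python
def make_zeros(n):
    # Allocates an array of 2*n zeros: the extra storage grows in
    # direct proportion to n, which is what I understand O(n) space to mean.
    return [0] * (2 * n)
```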
Secondly, how bad is a space complexity of O(n)? Popular sorting algorithms like quicksort have a worst-case space complexity of O(n), so for sorting arbitrarily long data, is it possible that the O(n) space complexity could have dire effects? And if so, is there any intuition as to why or how?
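For concreteness, here is a rough sketch of the worst case I mean: a naive quicksort with a fixed pivot on already-sorted input recurses n levels deep, so the call stack alone uses O(n) space (this toy version also copies sublists, but the stack depth is the part I'm asking about):

```python
import sys

def quicksort(a):
    # Naive quicksort: always picks the first element as the pivot.
    if len(a) <= 1:
        return a
    pivot = a[0]
    smaller = [x for x in a[1:] if x < pivot]
    larger = [x for x in a[1:] if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

sys.setrecursionlimit(10_000)
quicksort(list(range(5_000)))  # sorted input: recursion goes ~5,000 levels deep
```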
N in big O notation usually means the size of the input, not the value passed into the algorithm.
Space complexity of O(n) means that for each input element there may be up to a fixed number of k bytes allocated, i.e. the amount of memory needed to run the algorithm grows no faster than linearly, at k*N.
For example, if a sorting algorithm allocates a temporary array of N/2 elements, the algorithm is said to have an O(n) space complexity.
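A minimal sketch of that kind of algorithm, assuming a standard top-down merge sort that buffers only the left half of each merge (not taken from any particular library):

```python
def merge_sort(a, lo=0, hi=None):
    """Sort a[lo:hi] in place; each merge copies the left half into a
    temporary buffer of at most ~N/2 elements, so auxiliary space is O(n)."""
    if hi is None:
        hi = len(a)
    if hi - lo <= 1:
        return
    mid = (lo + hi) // 2
    merge_sort(a, lo, mid)
    merge_sort(a, mid, hi)
    left = a[lo:mid]              # temporary array of up to N/2 elements
    i, j, k = 0, mid, lo
    while i < len(left) and j < hi:
        if left[i] <= a[j]:
            a[k] = left[i]
            i += 1
        else:
            a[k] = a[j]
            j += 1
        k += 1
    while i < len(left):          # copy any remaining left-half elements
        a[k] = left[i]
        i += 1
        k += 1

data = [5, 2, 9, 1, 5, 6]
merge_sort(data)
print(data)  # [1, 2, 5, 5, 6, 9]
```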
It is not possible to say whether it is good or bad without some context. In many cases a space complexity of O(N) is acceptable, but there are exceptions to the rule. Sometimes you increase memory complexity to reduce time complexity (i.e. pay with memory for a significant speedup). This is almost universally considered a good tradeoff.
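As a generic illustration of that tradeoff (not from the original answer): building a hash set costs O(n) extra memory, but it turns each membership test from an O(n) scan into an O(1) average-case lookup.

```python
def find_common(a, b):
    # Spend O(n) extra memory on a hash set of a's elements...
    seen = set(a)
    # ...so that each membership test below is O(1) on average instead of O(n).
    return [x for x in b if x in seen]

print(find_common([1, 2, 3, 4], [3, 4, 5, 6]))  # [3, 4]
```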
O(n) means that the cost increases linearly with the number of input elements, rather than, say, quadratically (e.g. O(n^2)) or logarithmically (e.g. O(log n)), where n is, for example, the number of elements in a key table. O(n) is bad if you need your algorithm to work efficiently for large values of n, and especially if there is an alternative you could use that scales sub-linearly (e.g. O(log n)).
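For a rough sense of scale (a worked illustration, not part of the original answer), at n = 1,000,000 those growth rates differ enormously:

```python
import math

n = 1_000_000
print(round(math.log2(n)))  # ~20                -- logarithmic
print(n)                    # 1,000,000          -- linear
print(n ** 2)               # 1,000,000,000,000  -- quadratic
```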
Time complexity and space complexity are different problems.
Space complexity is only a big problem if, for the values of n you will actually encounter, you end up using a problematic amount of memory or storage. O(n) for storage may be expected in many cases, since to achieve less than O(n) for some things you would need to compress your data and/or exploit duplicates in it. For one basic example, if you have a key/value mapping where the values are large but often duplicated, it can be inefficient to store a separate copy of the value for each key (which is O(n) in the number of keys); keeping one shared copy of each distinct value in a map, rather than a flat array of copies, might be a lot more efficient for space.
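A minimal sketch of that idea, assuming the values are hashable and that values which compare equal can safely be shared (the DedupStore name and its methods are just illustrative, not a real API):

```python
class DedupStore:
    """Keep one shared copy of each distinct value; keys just reference it."""
    def __init__(self):
        self._values = {}   # value -> the single stored instance of that value
        self._by_key = {}   # key -> shared value instance

    def put(self, key, value):
        # setdefault stores the value only the first time it is seen; later
        # equal values reuse the stored copy, and the new duplicate is freed.
        shared = self._values.setdefault(value, value)
        self._by_key[key] = shared

    def get(self, key):
        return self._by_key[key]

store = DedupStore()
for k in range(1000):
    store.put(k, "x" * 1_000_000)  # 1000 keys, but only one 1 MB string is kept
```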
A worst-case time complexity of O(n) is often called bad in the case of index lookup algorithms, because O(n) would mean you might have to look at every element in the index to find the one you're looking for; i.e. the algorithm is not much better than just going through the whole list until you find a match. It's inefficient compared to various long-known tree indexing structures, which take O(log n) time - that is, the time to look up something does not increase in linear proportion to the number of elements in the index, because the tree structure reduces the number of needed comparisons to a much shallower, logarithmic curve.
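To make the contrast concrete (a generic sketch, not tied to any particular index implementation): a linear scan may compare against every element, while a binary search over sorted data halves the remaining range at each step.

```python
from bisect import bisect_left

def linear_lookup(items, target):
    # O(n) worst case: may have to look at every element.
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

def binary_lookup(sorted_items, target):
    # O(log n): each comparison halves the remaining search range.
    i = bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

data = list(range(1_000_000))
# The linear scan may touch ~1,000,000 elements; the binary search needs ~20 steps.
assert linear_lookup(data, 999_999) == 999_999
assert binary_lookup(data, 999_999) == 999_999
```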
Some other types of algorithms may not have known solutions better than O(n), such as simulations of fields of AI agents which all potentially interact with each other.