For a new beginner

Possible Duplicate:
Plain English explanation of Big O

I was recently asked about my knowledge of how to use Big O notation, and I was stumped because I had never come across Big O before. I have read the Wikipedia page about Big O and looked at some of the questions posted on Stack Overflow, but I simply don't understand it.

My question: Can someone provide an explanation of Big O in the simplest form and provide an example of how to use it in the following Java method:

public int getScore(int[] dice)
{
    int[][] dups;

    dups = possibleDups(dice);

    // Set catScore
    for (int[] i : dups)
    {
        for (int k = 0; k < i.length; k++)
        {
            if (i[k] > 0)
            {
                switch (i[k]) {
                case 1:
                    catScore = Category.ONES;
                    break;
                case 2:
                    catScore = Category.TWOS;
                    break;
                case 3:
                    catScore = Category.THREES;
                    break;
                case 4:
                    catScore = Category.FOURS;
                    break;
                case 5:
                    catScore = Category.FIVES;
                    break;
                case 6:
                    catScore = Category.SIXES;
                    break;
                case 7:
                    catScore = Category.SEVENS;
                    break;
                case 8:
                    catScore = Category.EIGHTS;
                    break;
                default:
                    catScore = Category.NONE;
                    break;
                }
            }
        }
    }

    return sumAll(dice);
}

Big O describes the worst case for how long an algorithm takes to execute. You should look at how your outer loop depends on the inner loop. Sample:

public void doSomething(int n) {
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            // some constant-time work here
        }
    }
}

With n = 10, the worst case is 100 iterations. Change n to 20 and the worst case is 400 iterations.

This is O(n^2).
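To make the growth concrete, here is a small sketch (not part of the original answer) that counts how many times the inner body of such a nested loop actually runs; the class and method names are my own:

```java
public class NestedLoopCount {
    // Returns how many times the inner statement of the nested loop runs.
    static int countIterations(int n) {
        int count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                count++; // the body executes n * n times in total
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countIterations(10)); // 100
        System.out.println(countIterations(20)); // 400
    }
}
```

Doubling n quadruples the count, which is exactly what O(n^2) predicts.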


Big O notation describes how the time to solve a problem grows in proportion to the number of items in a collection. It actually says nothing about how long the solution takes; rather, it tells you how quickly the running time grows, once you know the time for a fixed point and how many other items you are likely to add.

So, if it always takes 5 minutes to make a coffee, it's not enough information to calculate a Big O solution, but if it takes 5 minutes to make a coffee, and five minutes to make ten coffees, and five minutes to make a million coffees, then it is O(1), where the 1 indicates one unit of time.

Now if you have a single cup coffee maker, where it takes roughly two minutes to make a cup of coffee, four minutes to make two cups of coffee, and twenty minutes to make ten cups of coffee, then the time to make a number of coffees is proportional to the number of cups, making the Big O notation O(x), meaning you need X (one for each coffee) units of time.

Other Big O notations are common: O(x^2), O(x log(x)), etc. They all describe how the time grows in proportion to the number of elements under consideration.

Note that an O(1) solution might be slower than an O(x) solution for some small collection of items, as we are talking about units of time, not actual times. So the unit of time in a particular O(1) solution might be an hour, while the unit of time in a particular O(x) solution might be ten minutes. In such a case, the O(x) solution might be faster until you need to process six or more items. In the long term, big O terms with lower powers (like O(1)) will always outperform those with higher powers (like O(x)), no matter how large or small the actual time units are.
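The crossover point can be sketched with the hypothetical costs from the paragraph above (60 units for the O(1) approach, 10 units per item for the O(x) one); the names and figures here are illustrative assumptions, not a real measurement:

```java
public class Crossover {
    // Hypothetical O(1) approach: a flat 60 units no matter how many items.
    static int constantCost(int items) {
        return 60;
    }

    // Hypothetical O(x) approach: 10 units per item.
    static int linearCost(int items) {
        return 10 * items;
    }

    public static void main(String[] args) {
        for (int items = 1; items <= 8; items++) {
            String faster = linearCost(items) < constantCost(items) ? "O(x)" : "O(1)";
            System.out.println(items + " item(s) -> faster: " + faster);
        }
    }
}
```

With these numbers the O(x) approach wins for one to five items, and the O(1) approach catches up at six, matching the text above.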


In computer science, a large area of interest is "how long do things take to run?". The answer a lot of the time is "it depends on the size of the input (but it might not)". Big-Oh specifically is a function that describes the upper bound on how long an algorithm will run. There are some mathematical details there (limits, asymptotes, etc.), but that's a really basic view.

In your example, you loop over a list, and then for everything in that list, you loop over another list. Therefore, the time your algorithm takes to run is directly proportional to the size of the lists. If you think of the first list as having 'n' things in it, and your second list as having 'm' things in it, your algorithm's running time is O(m*n). If m ~ n, then it is also accurate to say O(n^2).
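Tying this back to the getScore method in the question, here is a sketch (my own helper, not from the post) that mirrors its loop structure and simply counts how many cells of the 2D array get visited:

```java
public class TwoDCount {
    // Mirrors the nested loops in getScore: visit every cell of the 2D array.
    static int cellsVisited(int[][] dups) {
        int visits = 0;
        for (int[] row : dups) {
            for (int k = 0; k < row.length; k++) {
                visits++; // one unit of work per cell, so m * n in total
            }
        }
        return visits;
    }

    public static void main(String[] args) {
        int[][] dups = new int[3][4]; // m = 3 rows, n = 4 columns
        System.out.println(cellsVisited(dups)); // 12, i.e. m * n
    }
}
```

The switch inside the loop is constant-time work per cell, so it does not change the O(m*n) bound.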

For maps, lookups are constant time (assume they are). In that case, the running time of a Map lookup is therefore O(c), which is the same as O(1).
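As a quick sketch of that point (the class and method names here are mine), a HashMap lookup does not loop over the entries, so its expected cost stays flat as the map grows:

```java
import java.util.HashMap;
import java.util.Map;

public class MapLookup {
    // Builds a map of the given size: keys 0..size-1 mapped to "value" + key.
    static Map<Integer, String> buildMap(int size) {
        Map<Integer, String> m = new HashMap<>();
        for (int i = 0; i < size; i++) {
            m.put(i, "value" + i);
        }
        return m;
    }

    public static void main(String[] args) {
        Map<Integer, String> scores = buildMap(1_000_000);
        // One hash computation and bucket probe, regardless of map size: O(1).
        System.out.println(scores.get(999_999));
    }
}
```

Building the map is O(n), but each individual get afterwards takes expected constant time.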
