Is a double really unsuitable for money?

I always say that in C# a variable of type double is not suitable for money: all sorts of weird things can happen. But I can't seem to construct an example that demonstrates any of these issues. Can anyone provide such an example?

(edit: this post was originally tagged C#; some replies refer to specific details of decimal, which therefore means System.Decimal.)

(edit 2: I was specifically asking for some C# code, so I don't think this question is purely language-agnostic.)


Very, very unsuitable. Use decimal.

double x = 3.65, y = 0.05, z = 3.7;
Console.WriteLine((x + y) == z); // false

(example from Jon's page here - recommended reading ;-p)
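For comparison, a minimal sketch of the same check using decimal (the m suffix makes the literals System.Decimal), which represents these values exactly:

decimal x = 3.65m, y = 0.05m, z = 3.7m;
Console.WriteLine((x + y) == z); // true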


You will get odd errors, effectively caused by rounding. In addition, comparing against exact values is extremely tricky: you usually need to apply some sort of epsilon to check whether the actual value is "near" a particular one.

Here's a concrete example:

using System;

class Test
{
    static void Main()
    {
        double x = 0.1;
        double y = x + x + x;
        Console.WriteLine(y == 0.3); // Prints False
    }
}
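And where you do have to compare doubles, a minimal sketch of the epsilon approach mentioned above (the tolerance value here is an arbitrary choice for illustration):

using System;

class EpsilonTest
{
    static void Main()
    {
        double x = 0.1;
        double y = x + x + x;
        const double Epsilon = 1e-9; // arbitrary tolerance for this example
        Console.WriteLine(Math.Abs(y - 0.3) < Epsilon); // Prints True
    }
}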

Yes, it's unsuitable.

If I remember correctly, double has about 17 significant digits, so normally rounding errors will take place far past the decimal point. Most financial software uses four decimal places after the decimal point; that leaves 13 digits to work with, so the maximum number you can handle in a single operation is still much higher than the USA national debt. But rounding errors will add up over time. If your software runs for a long time, you'll eventually start losing cents. Certain operations make this worse; for example, adding a large amount to a small amount causes a significant loss of precision, as the sketch below shows.
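A sketch illustrating both effects, accumulated drift and a small amount being absorbed by a large one (the figures are arbitrary):

using System;

class DriftTest
{
    static void Main()
    {
        // Accumulated error: summing one cent a million times.
        double sum = 0.0;
        for (int i = 0; i < 1000000; i++)
            sum += 0.01;
        Console.WriteLine(sum == 10000.0); // False; the sum has drifted

        // Absorption: a cent added to a huge balance simply disappears.
        double big = 1e16;
        Console.WriteLine(big + 0.01 == big); // True
    }
}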

You need fixed-point data types for money operations. Most people don't mind if you lose a cent here and there, but accountants aren't like most people.
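A minimal sketch of that fixed-point idea, storing amounts as integer cents (the Money type and its members here are illustrative assumptions, not a real library):

using System;

struct Money
{
    public readonly long Cents; // fixed point: whole cents, no fractions

    public Money(long cents)
    {
        Cents = cents;
    }

    public static Money operator +(Money a, Money b)
    {
        return new Money(a.Cents + b.Cents);
    }

    public override string ToString()
    {
        return (Cents / 100m).ToString("0.00");
    }
}

class FixedPointTest
{
    static void Main()
    {
        Money price = new Money(365); // $3.65
        Money tax = new Money(5);     // $0.05
        Console.WriteLine(price + tax); // Prints 3.70, exactly
    }
}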

edit
According to this page, http://msdn.microsoft.com/en-us/library/678hzkk9.aspx, doubles actually have 15 to 16 significant digits, not 17.

@Jon Skeet: decimal is more suitable than double because of its higher precision, 28 or 29 significant digits. That means less chance of accumulated rounding errors becoming significant. Fixed-point data types (i.e. integers that represent cents, or hundredths of a cent, as I've seen used), like Boojum mentions, are actually better suited.
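A quick sketch of that difference in accumulation, summing a tenth a thousand times in each type:

using System;

class AccumulationTest
{
    static void Main()
    {
        double d = 0.0;
        decimal m = 0.0m;
        for (int i = 0; i < 1000; i++)
        {
            d += 0.1;  // binary double: 0.1 is not exactly representable
            m += 0.1m; // decimal: 0.1m is exact
        }
        Console.WriteLine(d == 100.0);  // False
        Console.WriteLine(m == 100.0m); // True
    }
}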
