Why does TimeSpan.FromSeconds(double) round to milliseconds?
TimeSpan.FromSeconds takes a double, and TimeSpan can represent values down to 100 nanoseconds, yet this method inexplicably rounds the time to whole milliseconds.
Given that I've just spent half an hour pinpointing this (documented!) behaviour, knowing why it was done this way would make it easier to put up with the wasted time.
Can anyone suggest why this seemingly counter-productive behaviour is implemented?
TimeSpan.FromSeconds(0.12345678).TotalSeconds
// 0.123
TimeSpan.FromTicks((long)(TimeSpan.TicksPerSecond * 0.12345678)).TotalSeconds
// 0.1234567
As you've found out yourself, it's a documented feature. It's described in the documentation of TimeSpan:
Parameters
value Type: System.Double
A number of seconds, accurate to the nearest millisecond.
The reason for this is probably that a double simply isn't that precise. It is always a good idea to do some rounding when comparing doubles, because the value might be a tiny bit larger or smaller than you expect. That behaviour could actually hand you some unexpected nanoseconds when you try to put in whole milliseconds. I think that is why they chose to round the value to whole milliseconds and discard the smaller digits.
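As a small illustration (assuming the .NET Framework behaviour this question describes, where FromSeconds rounds to the nearest millisecond), that rounding hides the usual double comparison surprises:

double a = 0.1 + 0.2;        // 0.30000000000000004, not exactly 0.3
Console.WriteLine(a == 0.3); // False: the classic double comparison pitfall
// After rounding to whole milliseconds, both values collapse to 300 ms,
// so the resulting TimeSpans compare equal.
Console.WriteLine(TimeSpan.FromSeconds(a) == TimeSpan.FromSeconds(0.3)); // True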
Purely as speculation:
TimeSpan.MaxValue.TotalMilliseconds is equal to 922337203685477, a number with 15 digits, and a double is precise to about 15 digits. TimeSpan.FromSeconds, TimeSpan.FromMinutes, etc. all go through a conversion to milliseconds expressed as a double (then to ticks and then to a TimeSpan, which is not interesting here). So when you create a TimeSpan that is close to TimeSpan.MaxValue (or MinValue), the conversion can only be precise to milliseconds.
So the probable answer to the question "why" is: to have the same precision at all times.
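A quick way to see the scale involved (values shown are approximate; exact formatting depends on the runtime):

// TimeSpan.MaxValue is roughly 9.2e14 milliseconds, i.e. about 15 significant digits,
// which is right at the edge of what a double can represent exactly.
Console.WriteLine(TimeSpan.MaxValue.TotalMilliseconds);  // ~922337203685477.6
Console.WriteLine(TimeSpan.MaxValue.Ticks);              // 9223372036854775807 (19 digits)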
A further thing to think about is whether the job could have been done better by first converting the value to ticks expressed as a long.
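As a minimal sketch of that alternative, here is a hypothetical helper (FromSecondsViaTicks is not part of the framework) that converts straight to ticks instead of going through milliseconds:

// Hypothetical: convert the double directly to ticks (100 ns units)
// rather than rounding to whole milliseconds first.
static TimeSpan FromSecondsViaTicks(double seconds)
{
    return TimeSpan.FromTicks((long)Math.Round(seconds * TimeSpan.TicksPerSecond));
}

// FromSecondsViaTicks(0.12345678).TotalSeconds returns 0.1234568
// (full tick resolution) instead of 0.123

Math.Round is used here rather than a plain cast, so the value rounds to the nearest tick instead of truncating as in the question's snippet.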
Imagine you're the developer responsible for designing the TimeSpan type. You've got all the basic functionality in place; it all seems to be working great. Then one day some beta tester comes along and shows you this code:
double x = 100000000000000;
double y = 0.5;
TimeSpan t1 = TimeSpan.FromMilliseconds(x + y);
TimeSpan t2 = TimeSpan.FromMilliseconds(x) + TimeSpan.FromMilliseconds(y);
Console.WriteLine(t1 == t2);
"Why does that output False?" the tester asks you. Even though you understand why this happened (the loss of precision in adding x and y together), you have to admit it does seem a bit strange from a client's perspective. Then he throws this one at you:
x = 10.0;
y = 0.5;
t1 = TimeSpan.FromMilliseconds(x + y);
t2 = TimeSpan.FromMilliseconds(x) + TimeSpan.FromMilliseconds(y);
Console.WriteLine(t1 == t2);
That one outputs True! The tester is understandably skeptical.
At this point you have a decision to make. Either you can allow an arithmetic operation between TimeSpan values that have been constructed from double values to yield a result whose precision exceeds the accuracy of the double type itself (e.g., 100000000000000.5, which has 16 significant figures), or you can, you know, not allow that.
So you decide, you know what, I'll just make it so that any method that uses a double to construct a TimeSpan rounds to the nearest millisecond. That way, it is explicitly documented that converting from a double to a TimeSpan is a lossy operation, absolving me in cases where a client sees weird behaviour like this after converting from double to TimeSpan and hoping for an accurate result.
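For what it's worth, here is the tester's first example again under the shipped rounding behaviour (output as on .NET Framework; newer runtimes may not round to milliseconds, so results can differ there):

double x = 100000000000000;
double y = 0.5;
// Both sides now round to the same whole number of milliseconds,
// so the comparison the tester complained about succeeds.
TimeSpan t1 = TimeSpan.FromMilliseconds(x + y);
TimeSpan t2 = TimeSpan.FromMilliseconds(x) + TimeSpan.FromMilliseconds(y);
Console.WriteLine(t1 == t2);  // True (the 10.0 / 0.5 example stays True as well)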
I'm not necessarily arguing that this is the "right" decision here; clearly, this approach causes some confusion on its own. I'm just saying that a decision needed to be made one way or the other, and this is what was apparently decided.