Python: determining whether a could be rounded to b in the general case
As part of some unit-testing code I'm writing, I wrote the following function. Its purpose is to determine whether 'a' could be rounded to 'b', regardless of how accurate 'a' or 'b' is.
def couldRoundTo(a, b):
    """Can you round a to some number of digits, such that it equals b?"""
    roundEnd = len(str(b))
    if a == b:
        return True
    for x in range(0, roundEnd):
        if round(a, x) == b:
            return True
    return False
Here's some output from the function:
>>> couldRoundTo(3.934567892987, 3.9)
True
>>> couldRoundTo(3.934567892987, 3.3)
False
>>> couldRoundTo(3.934567892987, 3.93)
True
>>> couldRoundTo(3.934567892987, 3.94)
False
As far as I can tell, it works. However, I'm wary of relying on it, since I don't have a perfect grasp of floating-point accuracy issues. Could someone tell me whether this is an appropriate way to implement this function? If not, how could I improve it?
Could someone tell me if this is an appropriate way to implement this function?
It depends. The given function will behave surprisingly if b isn't precisely equal to a value that would normally be obtained directly from decimal-to-binary-float conversion.
For example:
>>> print(0.1, 0.2/2, 0.3/3)
0.1 0.1 0.1
>>> couldRoundTo(0.123, 0.1)
True
>>> couldRoundTo(0.123, 0.2/2)
True
>>> couldRoundTo(0.123, 0.3/3)
False
This fails because the calculation of 0.3 / 3 results in a slightly different representation than 0.1 and 0.2 / 2 (and round(0.123, 1)).
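You can see the mismatch directly by comparing the values and their repr() output (the exact digits below assume CPython's IEEE 754 doubles):
>>> 0.3 / 3 == 0.1
False
>>> repr(0.3 / 3)
'0.09999999999999999'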
If not, how could I improve it?
Rule of thumb: if your calculation specifically involves decimal digits in any way, just use Decimal, to avoid all the lossy base-2 round-tripping.
In particular, Decimal includes a helper called quantize that makes this problem trivially easy:
from decimal import Decimal

def roundable(a, b):
    a = Decimal(str(a))
    b = Decimal(str(b))
    return a.quantize(b) == b
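As a quick check, replaying the question's test cases (the outputs assume quantize's default ROUND_HALF_EVEN rounding):
>>> roundable(3.934567892987, 3.9)
True
>>> roundable(3.934567892987, 3.94)
False
>>> roundable(0.123, 0.1)
True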
One way to do it:
def could_round_to(a, b):
    (x, y) = map(len, str(b).split('.'))
    round_format = "%" + "%d.%df" % (x, y)
    return round_format % a == str(b)
First, we take the number of digits before and after the decimal point as x and y. Then we construct a format string such as %x.yf. Finally, we supply a to the format string.
>>> "%2.2f"%123.1234
'123.12'
>>> "%2.2f"%123.1264
'123.13'
>>> "%3.2f"%000.001
'0.00'
Now, all that's left is comparing the strings.
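Putting it together, the function reproduces the behaviour of the question's examples (a quick sanity check, not an exhaustive one):
>>> could_round_to(3.934567892987, 3.9)
True
>>> could_round_to(3.934567892987, 3.93)
True
>>> could_round_to(3.934567892987, 3.94)
False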
The only point I'm wary of is the conversion from strings to floating-point numbers when interpreting floating-point literals (as in http://docs.python.org/reference/lexical_analysis.html#floating-point-literals). I don't know of any guarantee that a floating-point literal will evaluate to the floating-point number closest to the given string, and that section is the place in the specification where I would expect such a guarantee.
For example, Java is much more specific about what to expect from a string literal. From the documentation of Double.valueOf(String):
[...] [the argument] is regarded as representing an exact decimal value in the usual "computerized scientific notation" or as an exact hexadecimal value; this exact numerical value is then conceptually converted to an "infinitely precise" binary value that is then rounded to type double by the usual round-to-nearest rule of IEEE 754 floating-point arithmetic [...]
Unless you can find such a guarantee anywhere in the Python documentation, you may just be lucky: some older floating-point libraries (on which Python might rely) convert a string only to a nearby floating-point number, not to the closest one available.
Unfortunately, it seems to me that neither round, nor float, nor the specification for floating-point literals gives you any usable guarantee.