A fast method to round a double to a 32-bit int
When reading Lua's source code, I noticed that Lua uses a macro to round a double to a 32-bit int. I extracted the macro, and it looks like this:
    union i_cast { double d; int i[2]; };

    #define double2int(i, d, t)                  \
        { volatile union i_cast u;               \
          u.d = (d) + 6755399441055744.0;        \
          (i) = (t)u.i[ENDIANLOC]; }
Here ENDIANLOC is defined according to the endianness: 0 for little endian, 1 for big endian (Lua carefully handles endianness). t stands for the integer type, like int or unsigned int.
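For example, on a little-endian machine (so ENDIANLOC is 0), a usage sketch might be:

    int i;
    double2int(i, 3.7, int)    /* rounds to nearest: i becomes 4 */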
I did a little research and found a simpler form of the macro that uses the same idea:
    #define double2int(i, d) \
        { double t = ((d) + 6755399441055744.0); i = *((int *)(&t)); }
Or in a C++ style:

    inline int double2int(double d)
    {
        d += 6755399441055744.0;
        return reinterpret_cast<int&>(d);
    }
This trick can work on any machine that uses IEEE 754 (which means pretty much every machine today). It works for both positive and negative numbers, and the rounding follows banker's rounding (round half to even). This is not surprising, since it follows the IEEE 754 default rounding mode.
I wrote a little program to test it:

    #include <stdio.h>

    int main()
    {
        double d = -12345678.9;
        int i;
        double2int(i, d)
        printf("%d\n", i);
        return 0;
    }
And it outputs -12345679, as expected.
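As an aside, here is a quick check of the round-half-to-even claim above (a sketch; double2int_demo is a hypothetical wrapper, and it assumes a little-endian machine and the default round-to-nearest mode):

    #include <stdio.h>

    /* Same trick, wrapped in a function for the demo */
    static int double2int_demo(double d)
    {
        volatile union { double d; int i[2]; } u;
        u.d = d + 6755399441055744.0;
        return u.i[0];                  /* low word: little endian only */
    }

    int main(void)
    {
        printf("%d %d %d %d\n",
               double2int_demo(2.5), double2int_demo(3.5),
               double2int_demo(-2.5), double2int_demo(-3.5));
        /* prints: 2 4 -2 -4 -- halfway cases go to the even integer */
        return 0;
    }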
I would like to get into the details of how this tricky macro works. The magic number 6755399441055744.0 is actually 2^51 + 2^52, or 1.5 * 2^52, and 1.5 in binary can be represented as 1.1. When any 32-bit integer is added to this magic number... well, I'm lost from here. How does this trick work?
PS: This is in the Lua source code, llimits.h.
UPDATE: This method is not limited to a 32-bit int; it can also be extended to a 64-bit int as long as the number is in the range of 2^52. (The macro needs some modification.)
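The modification might look like this (a sketch, not the actual Lua code; double2int64 is a hypothetical name). It relies on the fact that after adding the magic number, the bit pattern is the magic number's own bit pattern (0x4338000000000000) plus the integer result:

    #include <stdint.h>
    #include <string.h>

    /* Valid roughly for inputs in [-2^51, 2^51), so that d + 1.5 * 2^52
       stays inside the [2^52, 2^53) range where doubles are integers. */
    static inline int64_t double2int64(double d)
    {
        double t = d + 6755399441055744.0;   /* 2^51 + 2^52 */
        int64_t bits;
        memcpy(&bits, &t, sizeof bits);      /* aliasing-safe reinterpretation */
        /* the exponent field is fixed at 0x433 and the mantissa field holds
           2^51 + d, so subtracting the magic's bit pattern leaves d */
        return bits - 0x4338000000000000LL;
    }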
When working with the Microsoft assembler for x86, there is an even faster macro written in assembly (this is also extracted from the Lua source):
    #define double2int(i,n)  __asm {__asm fld n  __asm fistp i}
There is a similar magic number for single-precision numbers: 1.5 * 2^23.
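A single-precision sketch along the same lines (float2int is a hypothetical name; the magic is 1.5 * 2^23 = 12582912.0f, whose bit pattern is 0x4B400000):

    #include <stdint.h>
    #include <string.h>

    /* Valid roughly for inputs in [-2^22, 2^22). */
    static inline int32_t float2int(float f)
    {
        float t = f + 12582912.0f;           /* 2^22 + 2^23 */
        int32_t bits;
        memcpy(&bits, &t, sizeof bits);
        return bits - 0x4B400000;            /* magic's own bit pattern */
    }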
A double is represented like this:

    | sign (1 bit) | exponent (11 bits) |       mantissa (52 bits)       |
       bit 63          bits 62-52                  bits 51-0

and it can be seen as two 32-bit integers; now, the int taken in all the versions of your code (supposing it's a 32-bit int) is the one on the right in the figure, so what you are doing in the end is just taking the lowest 32 bits of the mantissa.
Now, to the magic number; as you correctly stated, 6755399441055744 is 2^51 + 2^52; adding such a number forces the double to go into the "sweet range" between 2^52 and 2^53, which, as explained on Wikipedia, has an interesting property:
Between 2^52 = 4,503,599,627,370,496 and 2^53 = 9,007,199,254,740,992 the representable numbers are exactly the integers
This follows from the fact that the mantissa is 52 bits wide.
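You can see this 1-ULP spacing directly; in this range every addition is forced to land on an integer (a small check, not from the original answer):

    #include <stdio.h>

    int main(void)
    {
        double lo = 4503599627370496.0;  /* 2^52: spacing of doubles here is 1 */
        printf("%.1f\n", lo + 0.4);      /* 4503599627370496.0 -- rounded down */
        printf("%.1f\n", lo + 0.6);      /* 4503599627370497.0 -- rounded up */
        return 0;
    }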
The other interesting fact about adding 2^51 + 2^52 is that it affects the mantissa only in the two highest bits - which are discarded anyway, since we are taking only its lowest 32 bits.
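This is easy to verify by dumping the bit pattern; for example, adding 1.0 to the magic number gives a pattern whose low 32 bits are exactly 1 (a quick check, not from the original answer):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        double d = 1.0 + 6755399441055744.0;
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);
        /* 0x433 exponent | mantissa 0x8000000000001: the magic only sets
           bit 51, and the low 32 bits hold the integer result */
        printf("%016llx\n", (unsigned long long)bits);  /* 4338000000000001 */
        return 0;
    }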
Last but not least: the sign.
IEEE 754 floating point uses a sign-and-magnitude representation, while integers on "normal" machines use 2's complement arithmetic; how is this handled here?
We talked only about positive integers; now suppose we are dealing with a negative number in the range representable by a 32-bit int, so with absolute value at most 2^31; call it -a. Such a number is obviously made positive by adding the magic number, and the resulting value is 2^52 + 2^51 + (-a).
Now, what do we get if we interpret the mantissa in 2's complement representation? It must be the result of the 2's complement sum of (2^52 + 2^51) and (-a). Again, the first term affects only the upper two bits; what remains in bits 0-50 is the 2's complement representation of (-a) (again, minus the upper two bits).
Since reduction of a 2's complement number to a smaller width is done just by cutting away the extra bits on the left, taking the lower 32 bits gives us correctly (-a) in 32-bit, 2's complement arithmetic.
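The same dump for a negative input shows the 2's complement pattern appearing in the low bits (again a quick check, not from the original answer):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        double d = -1.0 + 6755399441055744.0;   /* 2^52 + 2^51 - 1 */
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);
        /* mantissa field is 2^51 - 1 = 0x7ffffffffffff; its low 32 bits are
           0xffffffff, i.e. -1 in 32-bit 2's complement */
        printf("%016llx -> %d\n", (unsigned long long)bits, (int32_t)bits);
        return 0;
    }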