0.1 + 0.2 != 0.3 revisited
On Mon, 09 Feb 2004 08:52:09 +0100, you wrote:
Hi, IEEE says that real numbers are normalized (a few below 10^(-16) may not be [gradual underflow]), so that they look like 0.1ddd... * 2^ex. Then only ddd and ex are kept: 0.1 = 0.00011001100... * 2^0 = 0.11001100... * 2^(-3) -> (11001100..., -3)
Right, that's pretty much what I said, since 1.6 = 1.100110011...
Both 0.1 and 0.2 are less than 1, so the n=52 count is wrong. I think 0.1 would be stored as (1 + 0.6) * 2^(-4) and 0.2 would be stored as (1 + 0.6) * 2^(-3).
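For what it's worth, those stored forms are easy to check; a small Python sketch (my addition, not part of the original exchange) using `math.frexp`, which returns m and e with x = m * 2**e and 0.5 <= m < 1:

```python
import math

# frexp gives x = m * 2**e with 0.5 <= m < 1; the IEEE-style
# form 1.f * 2**E is just (2*m) * 2**(e-1).
for x in (0.1, 0.2, 0.3):
    m, e = math.frexp(x)
    print(f"{x} = {2*m} * 2**{e-1}")
# 0.1 = 1.6 * 2**-4
# 0.2 = 1.6 * 2**-3
# 0.3 = 1.2 * 2**-2
```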
You should expect 56 binary place accuracy on 0.1 (the last stored bit sits at 2^(-56)), 55 place accuracy on 0.2, and 54 place accuracy on 0.3. It's not surprising weird things happen!
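Those mismatched alignments are exactly what makes the sum misround; one way to see it, a Python sketch of my own (not from the original thread), using `float.hex()`, which prints the 52 stored fraction digits directly:

```python
# The sum and the literal differ in the last stored bit.
print((0.1 + 0.2).hex())  # 0x1.3333333333334p-2
print((0.3).hex())        # 0x1.3333333333333p-2
print(0.1 + 0.2 == 0.3)   # False
```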
I do *not* think so: all mantissas here have *52 binary* places!
Yes, but I was counting bits after the binary point, not bits that are stored. The latter is 52 for all numbers, but it translates into more or fewer bits after the binary point, depending on the magnitude of the exponent. You can argue that I got the exponent wrong (saying it was -4 when you say it's -3), and I could live with that; I was just following the Intel convention that the mantissa is 1.dddd... instead of 0.1dddd...

Duncan Murdoch
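Both conventions describe the same 52 stored bits; pulling the fields out of the raw encoding makes that concrete. A Python sketch of my own (the helper name is mine, not from the thread):

```python
import struct

def fields(x):
    """Split a double into sign, unbiased exponent, and the 52
    stored fraction bits (the implicit leading 1 is not stored)."""
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign = bits >> 63
    expo = ((bits >> 52) & 0x7FF) - 1023   # 1.f convention
    frac = bits & ((1 << 52) - 1)
    return sign, expo, frac

for x in (0.1, 0.2, 0.3):
    s, e, f = fields(x)
    print(f"{x}: exponent {e}, fraction {f:013x}")
# 0.1: exponent -4, fraction 999999999999a
# 0.2: exponent -3, fraction 999999999999a
# 0.3: exponent -2, fraction 3333333333333
```

Under the 1.dddd convention 0.1's exponent is -4; shift to the 0.1dddd convention and the same bit pattern reads as exponent -3, which is the whole disagreement above.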