
Floating point issue

The results are not exactly the same.  Notice that on Bill's system the 
bit pattern of 10^25 and 10000000000000000905969664 are the same, but 
not so on yours.  So there is a mismatch happening at parse time between 
your M1 Mac and others' systems.

This is the main thing I wanted to point out.  But since I'm here I'm 
going to add some additional lazily researched speculation.

As Dirk points out, M1 does not have long double, and if you look at 
what I think is responsible for parsing of numbers like the ones we're 
discussing here, we see[1]:

     double R_strtod5(const char *str, char **endptr, char dec,
                      Rboolean NA, int exact)
     {
         LDOUBLE ans = 0.0;

IIRC long double on systems that implement it as 80 bits (most that have 
x87 coprocessors) has 63-64 bits of precision, vs 53 for a 64-bit 
double.  Roughly speaking, that's 19-20 digits of base 10 precision for 
long double, vs 15-16 for a 64-bit double.  Then:

     > substr(rep("10000000000000000905969664", 2),  c(1, 1), c(16, 20))
     [1] "1000000000000000"     "10000000000000000905"

Again, I have not carefully researched this, but it seems likely that 
parsing produces a different outcome in this case because the 
intermediate values can be kept at higher precision on systems with 
80-bit long doubles prior to the coercion to double for the final 
result.

IIRC, if you need invertible deparsing/parsing you can use:

     deparse(1e25, control=c('hexNumeric'))

Although I don't have an 80-bit-less system to test on (and I am too 
lazy to recompile R without long double to test).

Best,

B.

[1]: https://github.com/r-devel/r-svn/blob/master/src/main/util.c#L1993
On 7/10/22 5:38 PM, Antoine Fabri wrote: