Floating point issue
To be pedantic, the C standard does not guarantee that long double offers more precision than double. If R's internal FP/decimal conversion routines produce a different result on platforms that support Intel's 80-bit precision vs. platforms that don't, I would classify this as a bug in R. Available precision can affect the numerical properties of algorithms, but it should not affect things like decimal-to-binary or vice versa conversion: it either produces an accurate enough number or it doesn't.

As a side note, I agree with Andre that relying on Intel's extended precision in this day and age is not a good idea. This is a legacy feature from over forty years ago; x86 CPUs have been using SSE instructions for floating point computation for over a decade. The x87 instructions are slow and prevent compiler optimisations. Overall, I believe that R would benefit from dropping this legacy cruft. Not that there are too many places where it is used, from what I see...

Best,

— Taras Zakharko
On 11 Jul 2022, at 13:48, GILLIBERT, Andre <Andre.Gillibert at chu-rouen.fr> wrote:
From my current experience, I dare say that the M1, with all its speed, is just a tad less reliable numerically than the Intel/AMD floating point implementations.
80-bit floating point (FP) numbers are great, but I think we cannot rely on them for the future. I expect the market share of ARM CPUs to grow. It's hard to predict, but ARM may spread in desktop computers in a timeframe of 10 years, and I would not expect it to gain extended precision FP.

Moreover, performance of FP80 is not a priority for Intel. FP80 in recent Intel microprocessors is very slow when using special representations (NaN, NA, Inf, -Inf) or denormal numbers. Therefore, it may be wise to update R algorithms to make them work quite well with 64-bit FP.

--
Sincerely
André GILLIBERT

[[alternative HTML version deleted]]
______________________________________________ R-devel at r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel