
Inaccurate inaccuracies on Mac only?

3 messages · Kasper Daniel Hansen, Simon Urbanek, Peter Dalgaard

On Dec 2, 2013, at 8:35 PM, Kasper Daniel Hansen <kasperdanielhansen at gmail.com> wrote:

That is true, but it does indeed depend on the code. For example, 2L - 1L will always yield the same result, no matter how often you run it.

But more to the point, IEEE just guarantees results *assuming* a certain precision, but R will use more precision if available for some operations. Now, how much more precision is available depends on the CPU and architecture, and I suspect that Hans was not sharing enough details. For example, contrary to his claims, I get the same answer on Ubuntu 12.04 LTS that I get on a Mac - both with the x86_64 architecture (which I suspect is the real difference). And this is not just about R - compilers will often use SIMD instructions instead of FP instructions where it is faster and doesn't reduce the accuracy - but it may increase it.

Cheers,
Simon
On 03 Dec 2013, at 02:48 , Simon Urbanek <simon.urbanek at r-project.org> wrote:

And, although possibly not the case here, optimizing compilers may reorder FP operations in order to keep caches full and pipelines busy. Things like rewriting

x1*y1 + x2*y2 + x3*y3 + x4*y4

as

(x1*y1 + x2*y2) + (x3*y3 + x4*y4)

which is mathematically, but not computationally, equivalent. The BLASen are full of that sort of stuff -- if you thought that a matrix multiply is just a triple loop, have a look at what goes on in the ATLAS library.

Parallel algorithms are even trickier, because you may not have control over the order in which partial results arrive.

As a result, programmers have effectively given up the fine control needed to have exactly identical results on different platforms. 

(In legacy code, we have found at least one case where the original programmer had believed that an algorithm would converge to exact FP equality, but the optimizer had made it not always so, resulting in an infinite loop.)