Advice debugging M1Mac check errors
Simon's comments add another viewpoint to mine. I do not know exactly what effect "--disable-long-double" has; understanding that properly would require a lot of time and effort with excruciating details. Fortunately, we can usually get away with 64-bit FP arithmetic for almost all applications, and I suspect applications that need really long precision are best handled with special hardware. JN
On 2024-02-04 16:46, Simon Urbanek wrote:
On Feb 5, 2024, at 12:26 PM, Duncan Murdoch <murdoch.duncan at gmail.com> wrote: Hi John. I don't think the 80-bit format was part of IEEE 754; I believe it was an Intel invention for the 8087 chip (which preceded that standard) and never made it into the standard. The standard does talk about 64-bit and 128-bit floating-point formats, but not 80-bit.
Yes, the 80-bit format was Intel-specific (motivated by internal operations, not intended as an external format), but because Intel used to be the most popular architecture, people didn't quite realize that tests relying on Intel results would be Intel-specific (PowerPC Macs had 128-bit floating point, but they were never popular enough to cause trouble in the same way). The IEEE standard allows "extended precision" formats, but does not prescribe their format or precision, and they are optional.

Arm64 CPUs only support 64-bit double precision in hardware (true on both macOS and Windows), i.e., only what is in the basic standard. There are 128-bit floating-point solutions in software, but they are obviously a lot slower (several orders of magnitude). Apple has been asking the scientific community about priorities, and 128-bit floating-point support was not high on people's list. It is far from trivial, because there is a long list of operations to cover (all variations of the math functions), so I wouldn't expect this to change anytime soon - in fact, once Microsoft's glacial move is done, we will likely see only 64-bit everywhere.

That said, even if you don't have an arm64 CPU, you can build R with --disable-long-double to get closer to the arm64 results if that is your worry.

Cheers, Simon
On 04/02/2024 4:47 p.m., J C Nash wrote:
Slightly tangential: I had some woes with some vignettes in my optimx and nlsr packages (actually in examples comparing to OTHER packages), because the M? processors don't have the 80-bit registers of the old IEEE 754-era arithmetic. Some existing "tolerances" are therefore too small when testing whether a quantity is small enough to "converge", and one gets "did not converge" type errors. There are workarounds, but the discussion is beyond this post. However, it is worth being aware that code may be mostly correct except for appropriate tests of smallness on these processors. JN

On 2024-02-04 11:51, Dirk Eddelbuettel wrote:
On 4 February 2024 at 20:41, Holger Hoefling wrote:
| I wanted to ask if people have good advice on how to debug M1Mac package
| check errors when you don't have a Mac? Is a cloud machine the best option
| or is there something else?
a) Use the 'mac builder' CRAN offers:
https://mac.r-project.org/macbuilder/submit.html
b) Use the newly added M1 runners at GitHub Actions,
https://github.blog/changelog/2024-01-30-github-actions-introducing-the-new-m1-macos-runner-available-to-open-source/
Option a) is pretty good as the machine is set up for CRAN and builds
fast. Option b) gives you more control should you need it.
Dirk
______________________________________________ R-devel at r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel