Matrix issues when building R with znver3 architecture under GCC 11

On 4/13/22 11:20, Kieran Short wrote:
Right, but something must be broken. You might get specific comments 
from the Matrix package maintainer, but it would help to at least 
minimize that failing example down to a few commands you can run in the 
R console, and to show the differences in their outputs.
Ok. The default optimization options used by R are also tested on 
selected current and future versions of GCC and clang by checking all 
of the CRAN contributed packages. This testing sometimes finds errors 
not detected by "make check-all", including bugs in GCC. Running these 
checks yourself would take a lot of resources, though. In my experience 
it is not so rare that a bug (in R or in GCC) affects only a very small 
number of packages, often even just one.
That depends on the developer and the calculations, and on your goals - 
what you want to measure or show. I don't have simple advice. If you 
are considering this for your own work, I'd recommend measuring some of 
your own workloads. You can also extrapolate from your workloads (from 
where time is spent in them) to a relevant benchmark. For example, if 
most of the time is spent in BLAS, then it is about finding a good 
optimized implementation (and, for that, checking the impact of the 
optimizations). Similarly, if it is some R package (base, recommended, 
or contributed), it may be using a computational kernel written in C or 
Fortran, something you could test separately or with a specific 
benchmark. I think it is unlikely that CPU-specific C compiler 
optimizations would substantially speed up the R interpreter itself.
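As an illustration of the "measure where the time goes first" advice (this sketch is not from the email; R itself would be the natural tool, but a NumPy version shows the idea in a language-neutral way), timing a matrix multiply isolates the BLAS contribution, since NumPy dispatches it to whatever BLAS it was linked against:

```python
# Illustrative sketch: time a dense matrix multiply to see how much a
# tuned BLAS (e.g. OpenBLAS, MKL, AMD BLIS) would matter for a workload.
import time
import numpy as np

n = 500
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b  # dispatched to the BLAS NumPy was built against
elapsed = time.perf_counter() - start
print(f"{n}x{n} matmul took {elapsed:.4f} s")
```

If a run like this dominates your workload's profile, comparing BLAS implementations (and their build flags) is a better use of effort than tweaking compiler options for the interpreter.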

For just deciding whether -fno-expensive-optimizations negates the 
gains, you might look at some general computational benchmarks (not 
R-specific ones). If it negated the gains even on the benchmarks others 
use to present them, then it is probably not worth it.

One thing I did in the past was to look at the timings of selected CRAN 
packages (longer-running examples, packages with the most reverse 
dependencies) and then look into the reasons for the individual bigger 
differences. That was when studying the impact of the byte-code 
compiler; it is unlikely to be worth the effort in this case. Also, 
primarily, I think the bug should be tracked down and fixed, wherever 
it is. Only then would the measurements make sense.

Best
Tomas