In case it is helpful here (and I can run more tests if need be), I built 32-bit R 2.12.0 patched on Snow Leopard (10.6.4), using R's reference BLAS rather than Apple's vecLib. This is on an early-2009 17" MBP with a 2.93 GHz Core 2 Duo (MacBookPro5,2) and 4 GB of RAM.
Based upon Doug's comment in this thread that the issue may be related to the use of Apple's vecLib BLAS, as opposed to R's reference BLAS, I ran some tests.
My config includes:
--without-blas --without-lapack
(I would like to be sure that the above is the correct invocation; it is based upon what I found online.)
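For context, here is a sketch of the full configure step as I understand it (the r_arch value and the rest of the command line are assumptions about my setup, not something from the R manuals verbatim; only the two BLAS/LAPACK flags are the ones quoted above):

```
# Build 32-bit R with its own reference BLAS/LAPACK instead of vecLib.
# --without-blas / --without-lapack tell configure NOT to link an
# external BLAS/LAPACK, so R compiles and uses its bundled versions.
./configure r_arch=i386 --without-blas --without-lapack
make
```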
Using this build, with all CRAN packages freshly reinstalled against it, I ran the example used here with lme4 0.999375-35. I get:
library(lme4)
y <- (1:20) * pi; x <- (1:20)^2; group <- gl(2, 10)
M2. <- lmer(y ~ 1 + x + (1 + x | group))
M2 <- lmer(y ~ x + (x | group))
identical(fixef(M2), fixef(M2.))
[1] TRUE
I then wrapped the test in a function so that I could use replicate() to run it a "larger" number of times:
testlme4 <- function()
{
  y <- (1:20) * pi; x <- (1:20)^2; group <- gl(2, 10)
  M2. <- lmer(y ~ 1 + x + (1 + x | group))
  M2 <- lmer(y ~ x + (x | group))
  identical(fixef(M2), fixef(M2.))
}
RES <- replicate(1000, testlme4())
table(RES)
RES
TRUE
1000
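Since identical() on fixef() values is an all-or-nothing floating-point comparison, a variant that also records the largest absolute difference between the two fits might be more informative under a different BLAS (a near-miss would show up as a small nonzero maxdiff even where identical() is FALSE). A sketch, assuming lme4 is installed; the function and variable names here are my own:

```
library(lme4)

testlme4diff <- function() {
  y <- (1:20) * pi; x <- (1:20)^2; group <- gl(2, 10)
  M2. <- lmer(y ~ 1 + x + (1 + x | group))
  M2 <- lmer(y ~ x + (x | group))
  # 'same' is 1 when the fixed effects are bitwise identical;
  # 'maxdiff' is the largest absolute discrepancy between them.
  c(same = identical(fixef(M2), fixef(M2.)),
    maxdiff = max(abs(fixef(M2) - fixef(M2.))))
}

RES <- replicate(1000, testlme4diff())  # 2 x 1000 matrix
table(RES["same", ])
max(RES["maxdiff", ])
```

On my build I would expect all runs to agree exactly, but on a vecLib-linked build the maxdiff row should make the size of any disagreement visible.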
Does the example need to be run a "very large" number of times to be confident that it does not fail, or is the above a reasonable indication that R's reference BLAS is the more appropriate default for R on OS X? If I am not mistaken (and somebody correct me if I am wrong), R's reference BLAS is the default on Windows and Linux (at least from my recollection of Fedora). Why should OS X be different in that regard?