
lmer code for multiple random slopes

2 messages · Peter R Law, Phillip Alday

Alas I am still puzzled. I have extracted some data in trial.txt (no categorical variables) and attached some code in trial.R. I am only using this data to test code, which I hope to apply to a larger data set obtained from multiple populations, so with more structure. So the results themselves are not important, only whether the code does what I think it should. It occurred to me to test each model with lme and lmer.

For a model with only a random intercept plus fixed effects, lme and lmer returned the same results, except that lme reports estimates with more decimal places (so here lmer returned zero variance for the random intercept and lme a very small number).

Adding a random slope, the main difference in the results is that lme and lmer returned very different estimates of the slope variance. Is that surprising?

For a model with two random slopes, lme returned results as expected, but lmer still claims there are 78 variance components, which is the number one would get from a 12 x 12 covariance matrix (12 x 13 / 2 = 78 free parameters). Where the number 12 comes from is a mystery to me, especially as lme does what I expected (I ran this model with both raw data and normalized predictor values to see if something fishy was happening there, but no):
Linear mixed-effects model fit by maximum likelihood
 Data: Trial
       AIC      BIC    logLik
  536.4562 559.4969 -258.2281

Random effects:
 Formula: ~P + A | Group
 Structure: General positive-definite, Log-Cholesky parametrization
            StdDev       Corr
(Intercept) 3.061144e-04 (Intr) P
P           1.535752e-09 0
A           8.517774e-13 0      0
Residual    7.929821e+00

Fixed effects: Response ~ P + A
                Value Std.Error DF    t-value p-value
(Intercept) 25.453398 10.828841 46  2.3505192  0.0231
P           -0.029307  0.035939 46 -0.8154787  0.4190
A            0.140819  0.282018 46  0.4993255  0.6199
 Correlation:
  (Intr) P
P -0.033
A -0.975 -0.173

Standardized Within-Group Residuals:
       Min         Q1        Med         Q3        Max
-1.8447131 -0.5563605 -0.2613520  0.2755547  5.5643551

Number of Observations: 74
Number of Groups: 26
Error: number of observations (=74) <= number of random effects (=78) for term (normP + normA | Group); the random-effects parameters and the residual variance (or scale parameter) are probably unidentifiable.

The error message lmer returned was the same regardless of which pair of the three predictors I used, but lme only returned results for P + A, not for any pair involving PR, which apparently created computational issues distinct from the interpretation of the code.

In lmer, replacing the code (x + y | Group) by either (x + y || Group) or (x | Group) + (y | Group) returned the expected results (in that lme4 interpreted the code as expected). Is there lme code for either of these models?
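For concreteness, the three lme4 specifications in question can be written out with the variable names from the summary above (Response, P, A, Group); this is a sketch of the syntax only, no fitting is done here:

```r
## Sketch of the three lme4 random-effects specifications discussed,
## using the variable names from the summary above (no fitting here).

# (1) correlated random intercept and slopes: 3 variances + 3 correlations
f1 <- Response ~ P + A + (P + A | Group)

# (2) "double bar": the same variances, but correlations forced to zero;
#     lme4 expands this to (1 | Group) + (0 + P | Group) + (0 + A | Group)
f2 <- Response ~ P + A + (P + A || Group)

# (3) separate terms: each term keeps its own intercept variance and its
#     own intercept-slope correlation, but the P-A slope correlation is gone
f3 <- Response ~ P + A + (P | Group) + (A | Group)
```

Note that (2) and (3) are not the same model: (3) estimates the intercept variance within each term and keeps the intercept-slope correlations, while (2) drops all correlations.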

Many thanks for any help.

Peter

P. S. I just noticed that responding to Ben's email sends my email to Ben, not to r-sig. I hope that sending my email to r-sig is the right thing to do and doesn't break the chain to my previous email.


Sent with ProtonMail Secure Email.

------- Original Message -------
On Wednesday, February 17, 2021 10:40 PM, Peter R Law <prldb at protonmail.com> wrote:

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: Trial.txt
URL: <https://stat.ethz.ch/pipermail/r-sig-mixed-models/attachments/20210225/877b0eb9/attachment.txt>
Message-ID: <e2zoG5je3QHzcPl3dsttnLigMgzR8mqiUJAdgntQm9xPOX4j3SwlDytjMtbOFRj-vf5TjpHVRD7rzd54mvwMFm9OhW-ff4lfhR383lioPX4=@protonmail.com>
On 25/2/21 5:16 am, Peter R Law via R-sig-mixed-models wrote:
> For a model with only a random intercept plus fixed effects, lme and lmer returned the same results ... (so here lmer returned zero variance for the random intercept and lme a very small number).

This is to be expected: lme can't handle singular models (e.g. models with an exactly zero variance component), but lmer can. So lme just gets as close as it can.
> Adding a random slope, the main difference in the results is that lme and lmer returned very different estimates of the slope variance. Is that surprising?

It depends on what's going wrong. Generally these should return identical fits for converged models (with the fine print that there are certain models that you can specify in one but not the other without a lot of effort).
> I have extracted some data in trial.txt (no categorical variables) and attached some code in trial.R.

The list strips most attachment types, so your script didn't make it through. But your data alone was enough for me to find a few problems.

First, you're fitting and displaying two different models here: the lme summary you show uses P and A, while the lmer error message refers to normP and normA.
Second, when I try to fit the model that's given by the summary here, I
get a numerical error in lme:
Error in lme.formula(Response ~ P + A, random = ~P + A | Group, data =
Trial,  :
  nlminb problem, convergence error code = 1
  message = false convergence (8)
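Not from the original thread, but a common way around this kind of nlminb false-convergence in lme is to switch its optimizer via lmeControl. A sketch on simulated stand-in data (the real Trial data are in the attachment linked above):

```r
library(nlme)  # ships with R

## Simulated stand-in for the Trial data (74 obs, 26 groups);
## the real data are in the attachment, so these values are made up.
set.seed(42)
Trial <- data.frame(Group = factor(rep(1:26, length.out = 74)),
                    P = rnorm(74), A = rnorm(74))
u <- rnorm(26, sd = 2)                       # group-level intercepts
Trial$Response <- 25 + u[Trial$Group] - 0.03 * Trial$P +
                  0.14 * Trial$A + rnorm(74, sd = 3)

## Switch from nlminb to optim, and return the fit even if the
## optimizer reports non-convergence, so it can be inspected:
ctrl <- lmeControl(opt = "optim", maxIter = 200, msMaxIter = 200,
                   returnObject = TRUE)
fm <- lme(Response ~ P + A, random = ~ P + A | Group,
          data = Trial, method = "ML", control = ctrl)
```

With returnObject = TRUE the fitted object comes back even on a dubious convergence, which at least lets you look at where the optimizer got stuck.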


This makes me think that something is wrong in both bits of
software, but maybe the versions on your local machine are catching it.
> lmer still claims there are 78 variance components, which is the number one would get from a 12 x 12 covariance matrix. Where the number 12 comes from is a mystery to me ...

This screams that your normP and normA variables aren't being handled as
continuous but rather as categorical/factors. This means that you have a
lot more slopes being estimated, which the model survives, but not the
extra correlation parameters (hence the success when you suppress them
with the two syntax variants you mention in your message).
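A sketch of that check, plus the nlme analogue of the correlation-suppressed lmer model via pdDiag, on simulated stand-in data (the real normP/normA values are in the attachment):

```r
library(nlme)  # ships with R

## Simulated stand-in data (the real Trial is in the attachment).
set.seed(1)
Trial <- data.frame(Group = factor(rep(1:26, length.out = 74)),
                    normP = rnorm(74), normA = rnorm(74))
u <- rnorm(26, sd = 2)                       # group-level intercepts
Trial$Response <- 25 + u[Trial$Group] + rnorm(74, sd = 3)

## The check: continuous predictors must be numeric, not factor.
sapply(Trial[c("normP", "normA")], class)
## If one came in as a factor, convert via its labels, e.g.:
## Trial$normP <- as.numeric(as.character(Trial$normP))

## nlme analogue of lmer's (normP + normA || Group): pdDiag() imposes
## a diagonal covariance matrix, i.e. independent random intercept
## and slopes with all correlations suppressed.
fm <- lme(Response ~ normP + normA,
          random = list(Group = pdDiag(~ normP + normA)),
          data = Trial, method = "ML",
          control = lmeControl(returnObject = TRUE))
```

So to the earlier question: pdDiag() is the lme counterpart of the double-bar syntax; there is no direct lme counterpart of the two-separate-terms form, since lme allows only one random-effects structure per grouping level without extra effort.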
> P. S. I just noticed that responding to Ben's email sends my email to Ben, not to r-sig. I hope that sending my email to r-sig is the right thing to do and doesn't break the chain to my previous email.

Yep! Frequently, we have the reverse problem: people forget to keep the
list in CC. So thanks! The matching in the threading is handled mostly
by the subject line.