adjusted values

4 messages · Cristiano Alessandro, Ben Bolker, Rune Haubo

#
Hi all,

I am fitting a linear mixed model with lme4 in R. The model has a single
factor (des_days) with 4 levels (-1, 1, 14, 48), and I am using random
intercepts and slopes.

Fixed effects: data ~ des_days
                 Value   Std.Error  DF   t-value p-value
(Intercept)  0.8274313 0.007937938 962 104.23757  0.0000
des_days1   -0.0026322 0.007443294 962  -0.35363  0.7237
des_days14  -0.0011319 0.006635512 962  -0.17058  0.8646
des_days48   0.0112579 0.005452614 962   2.06469  0.0392

I can clearly use the previous results to compare the estimate for each
"des_days" level to the intercept, using the provided t-statistics.
Alternatively, I could use post-hoc tests (z-statistics), specifying the
linear hypotheses "des_days1 = 0", "des_days14 = 0", and "des_days48 = 0"
with glht:
Simultaneous Tests for General Linear Hypotheses

Fit: lme.formula(fixed = data ~ des_days, data = data_red_trf,
    random = ~des_days | ratID, method = "ML", na.action = na.omit,
    control = lCtr)

Linear Hypotheses:
                 Estimate Std. Error z value Pr(>|z|)
des_days1 == 0  -0.002632   0.007428  -0.354    0.971
des_days14 == 0 -0.001132   0.006622  -0.171    0.996
des_days48 == 0  0.011258   0.005441   2.069    0.101
(Adjusted p values reported -- single-step method)
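
For reference, the Bonferroni version of this adjustment is easy to
reproduce by hand from the z-values above; the sketch below (Python, used
purely for illustration) converts each z-value to a two-sided normal
p-value and multiplies by the number of tests:

```python
import math

# z-values taken from the glht output above
z = {"des_days1": -0.354, "des_days14": -0.171, "des_days48": 2.069}
m = len(z)  # number of simultaneous tests

def two_sided_p(zval):
    # two-sided p-value under a standard normal reference distribution
    return math.erfc(abs(zval) / math.sqrt(2))

unadjusted = {k: two_sided_p(v) for k, v in z.items()}
# Bonferroni: multiply each raw p-value by the number of tests, cap at 1
bonferroni = {k: min(1.0, m * p) for k, p in unadjusted.items()}

for k in z:
    print(f"{k}: raw p = {unadjusted[k]:.4f}, Bonferroni p = {bonferroni[k]:.4f}")
```

For des_days48 this gives roughly 0.116, noticeably larger than the
single-step value of 0.101 reported above, because the single-step method
uses the correlation among the tests rather than assuming the worst case.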


The p-values of the coefficient estimates and those of the post-hoc tests
differ because the latter are adjusted for multiple comparisons (here with
the single-step method). I wonder whether there is any form of correction
in the coefficient estimates of the LMM, and which p-values are more
appropriate to use.

Thanks
Cristiano
#
summary() via lmerTest incorporates finite-size corrections, but not
multiple-comparisons corrections; glht does the opposite.  In this case
your finite-size corrections are pretty much irrelevant, though (in this
context 962 ≈ infinity).
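
That approximation is easy to check against the des_days48 row of the
summary above: with 962 df the t reference is essentially normal (a quick
sanity check, in Python purely for illustration):

```python
import math

def normal_two_sided_p(t):
    # p-value treating the t-statistic as standard normal (df -> infinity)
    return math.erfc(abs(t) / math.sqrt(2))

# t-value and t(962)-based p-value for des_days48 from the lme summary
t_stat, p_t962 = 2.06469, 0.0392
p_inf = normal_two_sided_p(t_stat)
print(f"normal approximation: {p_inf:.4f} vs t(962): {p_t962:.4f}")
```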

  By convention, people don't usually bother with MC corrections when
they're testing pre-defined contrasts from a single model, but I don't
know that there's a hard-and-fast rule (if I were testing the effects of a
large number of treatments within a single model I might indeed use MC;
I probably wouldn't bother for n=4).

  I don't know exactly what kind of MC correction glht does, but it
probably shouldn't be Bonferroni (which is very conservative, and
ignores correlations among the tests).
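
That conservativeness under correlation can be demonstrated by simulation.
A minimal sketch (Python for illustration, with hypothetical parameters:
three equicorrelated test statistics sharing a common factor, rho = 0.8)
shows that under the global null the Bonferroni cutoff rejects in fewer
than the nominal 5% of families:

```python
import math
import random

random.seed(1)

# Hypothetical setup: m correlated statistics z_i = sqrt(rho)*u + sqrt(1-rho)*e_i
rho, m, nsim, alpha = 0.8, 3, 20000, 0.05
# Bonferroni critical value: two-sided normal quantile for alpha/m = 0.0167
z_crit = 2.394

rejections = 0
for _ in range(nsim):
    u = random.gauss(0, 1)  # shared factor inducing the correlation
    zs = [math.sqrt(rho) * u + math.sqrt(1 - rho) * random.gauss(0, 1)
          for _ in range(m)]
    if any(abs(z) > z_crit for z in zs):
        rejections += 1

fwer = rejections / nsim
print(f"Bonferroni familywise error with rho={rho}: {fwer:.3f} (nominal {alpha})")
```

A single-step (max-|z|) adjustment exploits exactly this correlation to use
a smaller critical value while still controlling the familywise error rate.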
On 18-03-22 01:28 PM, Cristiano Alessandro wrote:
#
Maybe we are confusing ourselves here. Cristiano, you say that you
are using lme4, but the output looks more like that from lme (nlme
package). If the latter is the case, the lmerTest package is not
directly relevant to your situation.

Otherwise I agree with Ben that whether MC corrections are appropriate
depends on the context. And about the coefficients: they are not
adjusted or corrected.

Cheers
Rune
On 22 March 2018 at 19:08, Ben Bolker <bbolker at gmail.com> wrote:
#
Thanks for the help, and sorry for the mix-up. Rune, you are right; I am
using nlme.

The summary of glht can do various kinds of correction: the one I used to
produce the posted results is called "single-step", but I can also use
"Shaffer", "Westfall", and others (including Bonferroni), if necessary.
The real question is whether, in this case, I should perform MC corrections
(i.e. post-hoc tests with glht), or whether I can directly take the p-values
associated with the pre-defined contrasts. I read somewhere that there are
statistical reasons for not trusting the latter. In fact, some packages for
LMMs do not even provide such p-values.
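
For what it's worth, one of the simpler alternatives to Bonferroni is the
Holm step-down adjustment, which is uniformly less conservative and still
valid under arbitrary dependence; it takes only a few lines to implement.
The sketch below (Python, for illustration only) applies it to the raw
p-values from the lme summary earlier in the thread:

```python
def holm_adjust(pvals):
    """Holm step-down adjusted p-values: multiply the k-th smallest raw
    p-value by (m - k + 1), then enforce monotonicity and cap at 1."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, (m - rank) * pvals[i])
        adj[i] = min(1.0, running_max)
    return adj

# raw p-values for des_days1, des_days14, des_days48 from the lme summary
print(holm_adjust([0.7237, 0.8646, 0.0392]))
```

Here the smallest p-value gets the full Bonferroni factor (3 x 0.0392),
while the larger ones are penalized less, which is why Holm never rejects
fewer hypotheses than Bonferroni.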

Do you have suggestions here? And if I went for MC, what correction would
you use?

Thanks again
Cristiano
On Thu, Mar 22, 2018 at 3:16 PM, Rune Haubo <rune.haubo at gmail.com> wrote: