Statistical significance of random-effects (lme4 or others)
Also see RLRsim and pbkrtest. lmerTest::ranova() is more convenient (and sounds like what you're looking for), but RLRsim and pbkrtest will be more accurate for individual comparisons.
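To make that concrete, a minimal sketch of both routes; the data frame dat with response y, covariate x, and grouping factor g is made up for illustration:

library(lmerTest)  # loads lme4; its lmer() returns a model that ranova() accepts
library(RLRsim)

m <- lmer(y ~ x + (1 | g), data = dat)

ranova(m)     # LRT-based table with one row per random-effect term
exactRLRT(m)  # exact restricted LRT; here m contains only the single
              # random effect under test

exactRLRT() simulates the null distribution of the restricted LRT statistic rather than relying on the chi-squared approximation, which is why it is more accurate near the boundary of the parameter space.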
On 9/7/20 2:13 AM, Daniel Lüdecke wrote:
Hi Simon,

I'm not sure this is a useful question. The variance can/should never be negative, and it is usually above 0 whenever your outcome shows some variation across the group factors (random effects). Packages I know of that do some "significance testing" or uncertainty estimation of random effects are lmerTest::ranova() (quite well documented) and arm::se.ranef() or parameters::standard_error(effects = "random"). The latter two compute standard errors for the conditional modes of the random effects (what you get with ranef()); a short sketch of this approach appears at the end of the thread.

Best
Daniel

-----Original Message-----
From: R-sig-mixed-models <r-sig-mixed-models-bounces at r-project.org> On Behalf Of Simon Harmel
Sent: Monday, 7 September 2020 06:28
To: Juho Kristian Ruohonen <juho.kristian.ruohonen at gmail.com>
Cc: r-sig-mixed-models <r-sig-mixed-models at r-project.org>
Subject: Re: [R-sig-ME] Statistical significance of random-effects (lme4 or others)

Dear J,

My goal is not to compare models. Rather, for each model I want to know whether the variance component differs from 0, and what the p-value for that test is.

On Sun, Sep 6, 2020 at 11:21 PM Juho Kristian Ruohonen <juho.kristian.ruohonen at gmail.com> wrote:
A non-statistician's two cents:
1. I'm not sure likelihood-ratio tests (LRTs) are valid at all for models fit using REML (rather than MLE). The anova() function seems to agree, given that its present version (as of R 4.0.2) refits the models using MLE in order to compare their deviances.

2. Even when the models have been fit using MLE, likelihood-ratio tests for variance components are only straightforward when a single variance component is being tested. In your case, this means an LRT can only be used for *m1 vs ols1* and *m2 vs ols2*. There, you simply divide the p-value reported by *anova(m1, ols1)* and *anova(m2, ols2)* by two, because the null value of the variance lies on the boundary of the parameter space (a sketch of this workflow follows at the end of this message). Both are obviously extremely statistically significant. However, models *m3* and *m4* both have two random effects. The last time I checked, the default assumption that the deviance is chi-squared distributed no longer applies in such cases, so the p-values reported by Stata and SPSS are only approximate and tend to be too conservative. Perhaps you might apply an information criterion instead, such as the AIC <https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#can-i-use-aic-for-mixed-models-how-do-i-count-the-number-of-degrees-of-freedom-for-a-random-effect>.

Best, J
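To illustrate points 1 and 2 above, a hedged sketch; the formulas for m1 and ols1 are assumptions, since the original model specifications are not shown in this excerpt:

library(lme4)

ols1 <- lm(y ~ x, data = dat)              # no random effect
m1   <- lmer(y ~ x + (1 | g), data = dat)  # a single random intercept

lrt <- anova(m1, ols1)      # anova() refits m1 with ML before comparing (point 1)
lrt[["Pr(>Chisq)"]][2] / 2  # halve the p-value: the null variance sits on the
                            # boundary of the parameter space (point 2)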
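And the standard-error approach Daniel describes earlier in the thread, again as a sketch using the same made-up model m1. Note these quantify uncertainty in the predicted group-level effects, not whether the variance component itself is zero:

ranef(m1)                                           # conditional modes (BLUPs)
arm::se.ranef(m1)                                   # approximate standard errors for those modes
parameters::standard_error(m1, effects = "random")  # the same idea via the parameters package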