
R-sig-mixed-models Digest, Vol 134, Issue 42

Thanks for the help.

I have now successfully installed emmeans. Now I have to figure out why SPSS and R do not agree, and why R thinks df = Inf.
________
R (glmer + emmeans):

Rab  Between  emmean   SE    df   asymp.LCL  asymp.UCL
1    1        -.316    .170  Inf  -.650      .018
2    1        -.820    .172  Inf  -1.158     -.482
3    1        -1.838   .183  Inf  -2.196     -1.480
4    1        -2.558   .198  Inf  -2.946     -2.170
1    2        .357     .165  Inf  .034       .681
2    2        -.895    .167  Inf  -1.223     -.567
3    2        -2.607   .192  Inf  -2.984     -2.230
4    2        -2.891   .201  Inf  -3.285     -2.497

SPSS (MIXED):

Rab  Between  Mean     SE    lcl     ucl
1    1        -.267    .148  -.562   .027
2    1        -.688    .169  -1.023  -.352
3    1        -1.550   .188  -1.924  -1.176
4    1        -2.168   .230  -2.624  -1.712
1    2        .297     .144  .011    .583
2    2        -.745    .165  -1.073  -.417
3    2        -2.238   .235  -2.705  -1.772
4    2        -2.498   .255  -3.005  -1.992

R ANOVA:

Source       Df  SS      MS      F
Rab          3   870.17  290.06  290.06
Between      1   .12     .12     .12
Rab:Between  3   63.20   21.07   21.07

SPSS tests of fixed effects:

Source       F      df1  df2  Sig.
Rab          66.76  3    94   .000000
Between      .38    1    93   .539643
Rab*Between  5.31   3    94   .001998
Corrected    30.64  7    101  .000000
Somehow I need to tell R about the error structure.
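A side note on the df = Inf column: inference for glmer fits is asymptotic, so emmeans reports Wald z intervals, which are simply t intervals with infinite degrees of freedom; SPSS MIXED, by contrast, fits a linear mixed model and uses finite denominator df for its F and t tests. A minimal base-R illustration of the equivalence:

```r
# df = Inf is not an error: a t distribution with infinite df is the
# normal distribution, so glmer's Wald z tests appear in emmeans as
# t tests with df = Inf.
z <- 1.96
p_z <- 2 * pnorm(-abs(z))          # asymptotic (z) p-value
p_t <- 2 * pt(-abs(z), df = Inf)   # identical
c(p_z, p_t)
```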

NB: lmer on logit(FreqPos/Nmax) is not the correct binomial model, because it does not take into account the binomial variability inherent in measuring any binary proportion;
see Jaeger, T. F. (2008). Categorical Data Analysis: Away from ANOVAs (transformation or not) and towards Logit Mixed Models. Journal of Memory and Language, 59(4), 434-446.
What's more, I have verified that the two approaches give different results on 17 empirical data sets.
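The binomial model recommended above can be sketched on simulated toy data (all variable and factor names here are hypothetical, not from the thread): glmer with a binomial family models the counts directly, so the sampling variability of each observed proportion enters the likelihood, unlike lmer applied to logit(FreqPos/Nmax).

```r
library(lme4)

# Toy data (hypothetical names): 20 subjects crossed with a 4-level Rab
# factor and a 2-level Between factor, binomial counts out of Nmax = 30.
set.seed(1)
dat <- expand.grid(subj = factor(1:20), Rab = factor(1:4), Between = factor(1:2))
dat$Nmax <- 30
eta <- -1 + 0.4 * as.numeric(dat$Rab) + rnorm(20)[dat$subj]
dat$FreqPos <- rbinom(nrow(dat), dat$Nmax, plogis(eta))

# Binomial GLMM: two-column (successes, failures) response,
# not a Gaussian model of the logit-transformed proportion
fit <- glmer(cbind(FreqPos, Nmax - FreqPos) ~ Rab * Between + (1 | subj),
             data = dat, family = binomial)
summary(fit)  # Wald z tests, hence df = Inf downstream in emmeans
```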
On 27 Feb 2018, at 13:06, r-sig-mixed-models-request at r-project.org wrote:
Send R-sig-mixed-models mailing list submissions to
r-sig-mixed-models at r-project.org

To subscribe or unsubscribe via the World Wide Web, visit
https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
or, via email, send a message with subject or body 'help' to
r-sig-mixed-models-request at r-project.org

You can reach the person managing the list at
r-sig-mixed-models-owner at r-project.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of R-sig-mixed-models digest..."


Today's Topics:

  1. Re: Including random effects creates structure in the
     residuals (Paul Johnson)
  2. Re: means , CIs from lmer, glmer (Ben Bolker)
  3. Re: Including random effects creates structure in the
     residuals (Pierre de Villemereuil)

----------------------------------------------------------------------

Message: 1
Date: Tue, 27 Feb 2018 11:03:17 +0000
From: Paul Johnson <paul.johnson at glasgow.ac.uk>
To: Pierre de Villemereuil <pierre.de.villemereuil at mailoo.org>
Cc: R SIG Mixed Models <r-sig-mixed-models at r-project.org>
Subject: Re: [R-sig-ME] Including random effects creates structure in
the residuals
Message-ID: <64A7142C-55FA-43E8-B457-8C80D8AA1DE4 at glasgow.ac.uk>
Content-Type: text/plain; charset="utf-8"

Hi Pierre,

I don't think there is a problem with the residuals. Just to check: the problem you see is that there's a linear trend in the residuals-vs-fitted-values plot when the ID random effect is included (which in a standard OLS linear model would be impossible).

The reason for the correlation is that the fitted values contain the ID random effects, and these are inevitably correlated with the residuals. My intuitive understanding of this is as follows. Say some students sit a test twice, on two separate days. A student's score on a given day will be a combination of their ability (the ID random effect) and unmeasured (i.e. noise) factors, like how the student was feeling that day. Assuming that both ability and luck contribute substantially to the scores, it's inevitable that the extreme upper end of the distribution will be populated by scores from students who are both able (high ID random effect) and lucky on that day (high error residual). The same goes in the negative direction for the lower end of the distribution. This is the basis of regression to the mean: if we pick a student with an extreme score and re-test them, we expect their score to be less extreme. If I remember correctly, it's fairly straightforward to predict the correlation between the residuals and fitted values for a given model.
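The test-scores argument above can be demonstrated in a few lines of base R. This is a hand-rolled sketch of the shrinkage (BLUP-style) estimate, not lmer output: the conditional residual (observation minus BLUP) ends up positively correlated with the fitted value, exactly the "structure" seen in the plot.

```r
set.seed(42)
n <- 5000
ability <- rnorm(n)                 # ID random effect, variance 1
e1 <- rnorm(n); e2 <- rnorm(n)      # day-specific noise, variance 1
score1 <- ability + e1              # test score, day 1
score2 <- ability + e2              # test score, day 2

# BLUP of ability from the two scores: shrink the student mean towards 0
# by var_a / (var_a + var_e / 2) = 1 / (1 + 0.5) = 2/3
blup <- (2/3) * (score1 + score2) / 2

fitted1 <- blup                     # fitted value for day 1 (contains the BLUP)
resid1  <- score1 - fitted1         # conditional residual for day 1

cor(fitted1, resid1)                # clearly positive: expected, not a model failure
```

With these variances the theoretical correlation works out to 0.5, so the sample value should be close to that.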

On the broader topic of checking residuals from GLMMs:
I wrote a simple function to check residuals from lme4 fits by simulating residuals from the fitted model and plotting them on top of the real residuals. If they look similar across several simulated data sets, then I'm reassured that the model fits well. This is particularly useful for non-normal GLMMs, where (despite popular belief) there's no assumption of normality of the Pearson residuals.

# Install and load the helper package from GitHub
library(devtools)
install_github("pcdjohnson/GLMMmisc")
library(GLMMmisc)
library(lme4)

# Fit an example model and overlay simulated residuals on the real ones
fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
sim.residplot(fm1)
# note the correlation between the residuals and the fitted values

Florian Hartig has written a more sophisticated package that uses the same basic idea called DHARMa:
https://cran.r-project.org/web/packages/DHARMa/index.html
His blog post:
https://theoreticalecology.wordpress.com/2016/08/28/dharma-an-r-package-for-residual-diagnostics-of-glmms/
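For reference, a minimal DHARMa sketch of the same simulation-based idea (assumes the DHARMa and lme4 packages are installed): residuals simulated from the fitted model are compared against the observed ones via scaled (quantile) residuals.

```r
library(lme4)
library(DHARMa)

# Example fit on the built-in sleepstudy data
fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

# Simulate from the fitted model and compute scaled residuals
res <- simulateResiduals(fittedModel = fm1, n = 250)
plot(res)   # QQ plot and residual-vs-predicted plot of the scaled residuals
```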

All the best,
Paul
On 27 Feb 2018, at 08:53, Pierre de Villemereuil <pierre.de.villemereuil at mailoo.org> wrote:
Dear all,

I have an issue that I can't get my head around. I am working on a human cohort dataset studying heart rate. We have repeated measures at several time points and a model with different slopes according to binned age categories (the variable called "broken" hereafter, for "broken lines").

My issue is that when I include an individual ID effect (to account for the repeated measures), I obtain structured residuals while this is not the case for a model without this effect.

Here are my models:
mod_withID <- lmer(cardfreq ~ sex + broken + age:broken + betabloq +
                     cafethe + tabac + alcool +
                     (1 | visite) + (1 | id),
                   data = sub)

mod_noID <- lmer(cardfreq ~ sex + broken + age:broken + betabloq +
                   cafethe + tabac + alcool +
                   (1 | visite),
                 data = sub)

The AIC (computed with a fit with REML = FALSE) clearly favours the model including the ID effect:
AIC(mod_withID)
75184.51
AIC(mod_noID)
76942.09

Yet, judging by the residuals, the model including the ID effect fits badly (structured residuals), as the plots below show:
- The residuals with the ID effect:
https://ibb.co/b6WsFx
- The residuals without the ID effect:
https://ibb.co/fFVDNc