
Proper analysis for the Machines dataset in lme4

Dear Michael,

I just want to prevent a misreading of some of your comments. As far
as m1 vs. m1r and m2 vs. m2r are concerned, lme and lmer produce
identical estimates for fixed and random effects. anova(m1, m2)
and anova(m1r, m2r) also produce identical results if you use
method="ML". Crossed random effects are not easily specified in lme,
but in principle they can be. So it is not correct to say that the
two functions clash on m3 and m3r either.

The comparison of m1 and m2 (or m1r and m2r) is conceptually
questionable. m1 assumes there are 6 Workers; m2 assumes that there
are 18 Workers, that is, a different group of 6 persons worked on each
of the 3 machines. Presumably, the experimental design decides whether
m1 (m1r) or m2 (m2r) is the correct choice.
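As a sketch of the distinction (hypothetical formulas, since the original model calls are quoted in the earlier message; the specifications follow the description above, with the Machines data taken from the nlme package):

```r
library(lme4)
data(Machines, package = "nlme")

# m1: the same 6 workers are crossed with all 3 machines
m1 <- lmer(score ~ Machine + (1 | Worker), data = Machines)

# m2: treats each Worker-by-Machine combination as a distinct person,
# i.e. 18 "workers", a different group of 6 per machine
m2 <- lmer(score ~ Machine + (1 | Machine:Worker), data = Machines)
```

The two models imply different sampling designs, which is why comparing them by a likelihood-ratio test is conceptually awkward.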
     There is a meaningful comparison between m3r and m1r. If m3r fits
significantly better, it means that the six workers differ reliably not
only in mean performance (the intercept) but also in the size of the
machine effect (i.e., there is reliable variance in Machine effects
between Workers). They actually do:
Data: mach
Models:
mr1: score ~ Machine + (1 | Worker)
mr3: score ~ Machine + (Machine | Worker)
     Df     AIC     BIC  logLik  Chisq Chi Df Pr(>Chisq)
mr1   5  303.70  313.65 -146.85
mr3  10  236.42  256.31 -108.21 77.285      5    3.1e-15 ***

Thus, if m1 is the correct design, then m3 is an improvement. If m2 is
the correct design, then m2 itself is the model to use.
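For completeness, the comparison printed above can be reproduced along these lines (a sketch; the model names mr1 and mr3 follow the anova output, and fitting with REML = FALSE corresponds to the method="ML" comparison mentioned earlier):

```r
library(lme4)
data(Machines, package = "nlme")

# Random intercept only: workers differ in overall level
mr1 <- lmer(score ~ Machine + (1 | Worker),
            data = Machines, REML = FALSE)

# Random intercept and Machine slopes: workers also differ
# in the size of the machine effect
mr3 <- lmer(score ~ Machine + (Machine | Worker),
            data = Machines, REML = FALSE)

# Likelihood-ratio test of the extra variance components
anova(mr1, mr3)
```

The 5 extra degrees of freedom in mr3 come from the additional variance and covariance parameters of the by-Worker Machine effects.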

The Baayen example did not involve a comparison between lme and lmer,
as far as I could see.

I do not know much about gmodels. So I leave this part to somebody else.

Best
Reinhold
On Sun, Apr 27, 2008 at 4:49 PM, Michael Kubovy <kubovy at virginia.edu> wrote: