Size/metric of variance components in lme and lmer
Dear Stuart,

I think that the "extra" variance is subtracted from the fixed effects, which indicates that some of the information in your fixed effects was due to the levels of tid. To make a fair comparison, though, you should run both models with lmer, and then compare both the random-effect variances and the fixed-effect estimates.

HTH,

Thierry

----
ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature and Forest
team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance
Gaverstraat 4, 9500 Geraardsbergen, Belgium
tel. +32 54/436 185
Thierry.Onkelinx at inbo.be
www.inbo.be

To call in the statistician after the experiment is done may be no more than asking him to perform a post-mortem examination: he may be able to say what the experiment died of. ~ Sir Ronald Aylmer Fisher

The plural of anecdote is not data. ~ Roger Brinner

The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data. ~ John Tukey
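A minimal sketch of the comparison suggested above, assuming the lm4c fit from the quoted message is still in the workspace and that update() accepts dropping a random term in the installed lme4 version:

```r
library(lme4)

## Refit the nested model in lmer by dropping the crossed tid term
## from the cross-classified fit (same data, same fixed effects)
nested.lmer <- update(lm4c, . ~ . - (1 | tid))

## Compare the random-effect variances of the two fits
VarCorr(nested.lmer)
VarCorr(lm4c)

## ...and the fixed-effect estimates side by side
cbind(nested = fixef(nested.lmer), crossed = fixef(lm4c))
```

Because both fits then come from the same function, likelihood, and estimation method, any shift in the variance components or fixed effects can be attributed to the added tid term rather than to differences between lme and lmer.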
-----Original message-----
From: r-sig-mixed-models-bounces at r-project.org
[mailto:r-sig-mixed-models-bounces at r-project.org] On behalf of
Stuart Luppescu
Sent: Tuesday, 9 March 2010 1:21
To: r-sig-mixed-models at r-project.org
Subject: [R-sig-ME] Size/metric of variance components in
lme and lmer
Hello, I have run two analyses, each with the same data set
and predictors. One is a nested model run with lme; the other
is a cross-classified model with lmer. The only difference
between the two models is the added random effect. For
example, the nested model statement looks like this:
nested.lm3 <- lme(final.points ~ -1 + gr10 + gr11 + gr12 + per1 + per2 +
                    per4 + per5 + per6 + per7 + per8 + per9 + per10 +
                    per11 + per12 +
                    cblackd + casiand + clatinod + cmale +
                    cssoc + cscon + cold4gr + cmlatent8 + computer +
                    ...
                    jourlsm,
                  data = all.subj, random = ~ 1 | sid, na.action = na.omit)
The cross-classified model looks like this:
lm4c <- lmer(final.points ~ -1 + gr10 + gr11 + gr12 + per1 + per2 + per4 +
               per5 + per6 + per7 + per8 + per9 + per10 + per11 + per12 +
               cblackd + casiand + clatinod + cmale +
               cssoc + cscon + cold4gr + cmlatent8 + computer +
               ...
               jourlsm +
               (1 | sid) + (1 | tid),
             data = all.subj, REML = FALSE, verbose = TRUE)
The variance components for the nested model are:
Random effects:
Formula: ~1 | sid
(Intercept) Residual
StdDev: 0.8826577 0.9259174
for the cross-classified model:
Groups Name Variance Std.Dev.
sid (Intercept) 0.75426 0.86848
tid (Intercept) 0.39601 0.62929
Residual 0.68535 0.82786
If we square the standard deviations for the nested model and sum
them, the total variance is about 1.64. Summing the variance
components for the cross-classified model gives about 1.84.
Where did the additional variance come from?
Should I just interpret the size of the variance components
on a relative scale, are the units different, or what?
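The totals quoted above can be reproduced directly; note that lme prints standard deviations (which must be squared) while lmer's Variance column is already on the variance scale:

```r
## lme reports standard deviations; square them before summing
nested.total  <- 0.8826577^2 + 0.9259174^2    # about 1.64
## lmer's Variance column can be summed as-is
crossed.total <- 0.75426 + 0.39601 + 0.68535  # about 1.84
crossed.total - nested.total                  # roughly 0.2 "extra" variance
```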
--
Stuart Luppescu -*-*- slu <at> ccsr <dot> uchicago <dot> edu
CCSR in UEI at U of C
_______________________________________________
R-sig-mixed-models at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models