
linear mixed models explained variances

Hi Lorin,
A side point: always use the data argument, so that the fitting function takes all the data from the same data frame:

          fit18 <- lmer(observed ~ predicted * MMIdata + (1 | nursery), data = recru.total.merge)

This is less error-prone than what you've done.
What do you mean by the significance of the model? This implies that you want to calculate a p-value for some null hypothesis. Which null hypothesis? The null hypothesis that all the fixed-effect parameters are zero? You can get this from
anova(fit18, update(fit18, . ~ (1|nursery)))  # the models will be automatically refitted using ML
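A minimal sketch of that comparison, assuming lme4 is loaded and the data frame is as in your call above (fit18 refitted here only to make the snippet self-contained):

          library(lme4)

          # Full model and the random-effects-only null model
          fit18 <- lmer(observed ~ predicted * MMIdata + (1 | nursery),
                        data = recru.total.merge)
          fit.null <- update(fit18, . ~ (1 | nursery))

          # Likelihood-ratio test of all fixed effects jointly;
          # anova() refits both models with ML before comparing
          anova(fit18, fit.null)

The p-value in the anova() output is then the test of the null hypothesis that all fixed-effect coefficients (other than the intercept) are zero.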
R2c is frequently interpreted as the variance explained by both the fixed and random effects, although I prefer to think of the random effect as a residual (unexplained) variance at a higher level (here, unexplained variation between nurseries).
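If you want the marginal and conditional R-squared values themselves, one commonly used implementation (not mentioned above, so take this as a suggestion) is r.squaredGLMM() from the MuMIn package, which follows Nakagawa & Schielzeth:

          library(MuMIn)

          # Returns R2m (fixed effects only) and R2c (fixed + random effects)
          r.squaredGLMM(fit18)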
Yes, that's the proportion of variance explained by the random effects.
I don't understand the terms in this equation. To me, an intuitive way of gauging the contribution of a fixed effect would be to fit the model with and without that effect and compare (subtract) either the marginal R-squared values or the fixed-effect variances. The total variance of the fixed effects can be calculated as:

      var(model.matrix(fit) %*% fixef(fit))

This should give the same result as 

      var(predict(fit, re.form = ~ 0))


[Although I think strictly the model sums of squares should be compared instead, which will just be var(...) * (n - 1)?]
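Putting those two steps together, a sketch of the with/without comparison (the choice of the predicted:MMIdata interaction as the dropped term is only for illustration):

          # Refit without the fixed effect of interest
          fit.red <- update(fit18, . ~ . - predicted:MMIdata)

          # Total fixed-effect variance for each model
          v.full <- var(model.matrix(fit18) %*% fixef(fit18))
          v.red  <- var(model.matrix(fit.red) %*% fixef(fit.red))

          # Variance attributable to the dropped term
          v.full - v.red

The same comparison could be made on the marginal R-squared values instead; the variance scale just avoids choosing a denominator.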

Good luck,
Paul