
Post model fitting checks in Metafor (rma.mv)

Regarding the profile plot for sigma2: I am not quite sure I understand. Does it 'peak' at zero? If so, then the estimate is (essentially) zero. That would be okay (it essentially means that the variability due to 'Study' is no larger than what would be expected given the other variance components and/or sampling variability). The issue of zero variance components was also discussed recently (and also in the past) on this list (not with respect to meta-analysis, but it is the same issue).
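In case it helps, a minimal sketch of how such a profile plot can be produced (using an example dataset bundled with metafor; your own model and variance component number may of course differ):

```r
library(metafor)

# example multilevel data (studies nested within districts)
dat <- dat.konstantopoulos2011

# multilevel random-effects model with two variance components
res <- rma.mv(yi, vi, random = ~ 1 | district/school, data = dat)

# profile likelihood plot for the first variance component (sigma^2_1);
# a peak at (or very near) zero indicates that this component is
# essentially estimated to be zero
profile(res, sigma2 = 1)
```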

Regarding the resid-fitted plot: It looks like you have some missing values, so the two vectors end up being of different lengths. This should do it:

options(na.action = "na.pass")
plot(fitted(res), rstandard(res)$z, pch=19)

Also explained here: http://www.metafor-project.org/doku.php/tips:handling_missing_data

Regarding overall effects: I personally don't think an 'overall' effect makes much sense when the effect size appears to be related to a number of moderators/covariates. Take the simplest situation, where the true effect is of size theta_1 for level 1 of a dichotomous covariate and theta_2 for level 2. Now suppose we ignore that covariate and fit a random-effects model. Essentially, that is a misspecified model, because the model assumes normally distributed true effects (and not two point masses). Also, where the 'average' effect then falls depends on how many studies in the sample are at level 1 and at level 2 of that covariate. That does not seem very sensible to me. So, instead, we can fit the model with the covariate and then compute predicted values, for example, for level 1 or level 2. Or, if you really want an 'overall' effect, we could use a sort of 'lsmeans' approach and say: Let's assume that in the population of studies, 50% are at level 1 and 50% at level 2, so let's compute the predicted effect for such a population (essentially, fill in 0.5 for the 'dummy' variable when computing the predicted value). But I would then describe explicitly that this is what was done (as it makes clear what the assumption is about the population of studies).
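To make this concrete, a minimal sketch with made-up data (the values and the 'level' variable are purely illustrative):

```r
library(metafor)

# illustrative data: effect sizes, sampling variances, and a dummy-coded
# dichotomous covariate (0 = level 1, 1 = level 2)
dat <- data.frame(yi    = c(0.10, 0.25, 0.15, 0.55, 0.60, 0.48),
                  vi    = c(0.02, 0.03, 0.02, 0.02, 0.03, 0.02),
                  level = c(0, 0, 0, 1, 1, 1))

# mixed-effects meta-regression model with the covariate
res <- rma(yi, vi, mods = ~ level, data = dat)

# predicted effects for each level of the covariate
predict(res, newmods = 0)     # predicted effect for level 1
predict(res, newmods = 1)     # predicted effect for level 2

# 'lsmeans'-style 'overall' effect, assuming a population of studies
# with 50% at level 1 and 50% at level 2
predict(res, newmods = 0.5)
```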

I discuss this issue a bit in this article:

Viechtbauer, W. (2007). Accounting for heterogeneity via random-effects models and moderator analyses in meta-analysis. Zeitschrift für Psychologie / Journal of Psychology, 215(2), 104-121.

(not the 'lsmeans' idea, but the problem of fitting random-effects models when there is a relevant moderator/covariate). If you can't access the article, let me know and I'll send you a copy if you are interested.

Best,
Wolfgang