Both of these will give estimates of average effect size and variance
component estimates. However, the corresponding standard errors of the
average effect sizes are based on the assumption that the entire model is
correctly specified. RVE relaxes that assumption. Thus, the decision to use
RVE or not should be based on a judgement about the plausibility of the
model's assumptions (rather than on whether you can get a model to
converge).
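To make the distinction concrete, here is a minimal sketch using metafor's built-in multilevel dataset (dat.konstantopoulos2011, whose district/school structure stands in for study/subsample); the object name fit is illustrative only:

```r
library(metafor)
library(clubSandwich)

dat <- dat.konstantopoulos2011

# Multilevel model with random effects for district and school
fit <- rma.mv(yi, vi, random = ~ 1 | district/school, data = dat)

# Model-based SE: assumes the random-effects structure is correct
fit$se

# Robust (CR2) SE: valid even if that structure is misspecified
coef_test(fit, vcov = "CR2")
```

If the model is close to correctly specified, the two standard errors will usually be similar; large discrepancies are a hint that the model-based SE should not be trusted.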
2. If we want to use RVE, would the following model which includes random
effects at all three levels (effect size, subsample, study) be appropriate
in combination with clubSandwich package robust coefficient estimates?
model <- rma.mv(ES_corrected, SV,
                random = ~ 1 | IDstudy/IDsubsample/IDeffect,
                data = MA_dat_raw)
coef_test(model, vcov = "CR2")
Or should something else be done in order to adequately address the issue
of effect size dependence?
This seems fine. One step better would be to consider whether the effect
size estimates within a given sub-sample have correlated sampling errors.
This would be the case, for instance, if the effect sizes are for different
outcome measures (or measures of the same outcome at different points in
time), assessed on the same sub-sample of individual participants. Details
on how to do this can be found here:
https://www.jepusto.com/imputing-covariance-matrices-for-multi-variate-meta-analysis/
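A rough sketch of that approach, using clubSandwich's impute_covariance_matrix() with the variable names from the question; the working correlation r = 0.6 is an assumed placeholder value, not a recommendation:

```r
library(metafor)
library(clubSandwich)

# Impute a block-diagonal sampling covariance matrix, assuming a
# working correlation of r = 0.6 between sampling errors of effect
# sizes from the same subsample (placeholder value; choose based on
# what is known about the outcome measures).
# Note: cluster must uniquely identify subsamples across studies;
# if IDsubsample codes repeat across studies, use
# interaction(IDstudy, IDsubsample) instead.
V_mat <- impute_covariance_matrix(vi = MA_dat_raw$SV,
                                  cluster = MA_dat_raw$IDsubsample,
                                  r = 0.6)

model <- rma.mv(ES_corrected, V = V_mat,
                random = ~ 1 | IDstudy/IDsubsample/IDeffect,
                data = MA_dat_raw)
coef_test(model, vcov = "CR2")
```

Because the CR2 standard errors are robust, getting r exactly right is not critical; a sensitivity analysis over a few plausible values of r is usually sufficient.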
3. The variances for this model are:
Variance Components:
estim sqrt nlvls fixed factor
sigma^2.1 0.0589 0.2427 20 no IDstudy
sigma^2.2 0.0250 0.1583 53 no IDstudy/IDsubsample
sigma^2.3 0.0014 0.0373 69 no IDstudy/IDsubsample/IDeffect
In other words, there is very little variance at the level of IDeffect,
after Study and Subsample have been taken into account. The profile
likelihood plot for sigma^2.3 does, however, appear to peak at the
corresponding value when "zoomed in" (with xlim=c(0,0.01)).
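For reference, the zoomed-in profile plot described here can be produced with metafor's profile() method, where sigma2 = 3 selects the third variance component (IDeffect):

```r
# Profile likelihood for sigma^2.3, restricted to a narrow range
# so the peak near the (very small) estimate is visible
profile(model, sigma2 = 3, xlim = c(0, 0.01))
```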
Should we consider this a satisfactory model, or is the variance at the
level of IDeffect too small to be meaningful? Presumably, this is because
most subsamples (43 of 53) contribute only one effect size to the
meta-analysis, 8 subsamples contribute 2 effect sizes each, and two
subsamples contribute 5 each.
Would an acceptable alternative model be:
nested <- rma.mv(ES_corrected, SV,
                 random = ~ 1 | IDstudy/IDeffect,
                 data = MA_dat_raw)
Here, we've excluded random effects at the subsample level, because the
two grouping variables overlap substantially and it seemed more sensible
to retain random effects at the level of individual effect sizes. The
variance estimates for this model seem adequate (and their profile plots
look fine, too).
Variance Components:
estim sqrt nlvls fixed factor
sigma^2.1 0.0678 0.2604 20 no IDstudy
sigma^2.2 0.0150 0.1223 69 no IDstudy/IDeffect
The nice thing about RVE is that the standard errors for the average effect
are calculated in a way that does not require the correct specification of
the random effects structure. As a result, you should get very similar
standard errors regardless of whether you include random effects for all
three levels or whether you exclude a level. However, the variance
component estimates are still based on an assumption that the model is
correctly specified. I think it would therefore be preferable to use the
model that captures the theoretically relevant levels of variation, so in
this case, all three levels.
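One way to check this claim directly is to compare the robust standard errors from the two models fitted above (model and nested); a quick sketch:

```r
# The CR2 standard errors for the average effect should be very
# similar across specifications, even though the variance component
# estimates differ between the two models.
coef_test(model, vcov = "CR2")   # three levels: IDstudy/IDsubsample/IDeffect
coef_test(nested, vcov = "CR2")  # two levels: IDstudy/IDeffect
```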