
[R-meta] Dealing with effect size dependance with a small number of studies

Hi Danka,

Responses inline below.

Kind Regards,
James
On Mon, Jan 4, 2021 at 5:41 AM Danka Puric <djaguard at gmail.com> wrote:

            
You have the wrong syntax here. If you want to specify a multi-level
meta-analysis model in which effect sizes are nested within studies, use the
"/" character to indicate nesting:
  nested_UN <- rma.mv(ES_corrected, SV, random = ~ 1 | IDstudy / IDeffect,
data = MA_dat_raw)
Or if you want to include sub-samples as an intermediate level:
  nested_UN <- rma.mv(ES_corrected, SV, random = ~ 1 | IDstudy / IDsubsample /
IDeffect, data = MA_dat_raw)

Both of these will give an estimate of the average effect size along with
variance component estimates. However, the corresponding standard errors of the
average effect sizes are based on the assumption that the entire model is
correctly specified. RVE relaxes that assumption. Thus, the decision to use
RVE or not should be based on a judgement about the plausibility of the
model's assumptions (rather than on whether you can get a model to
converge).
This seems fine. One step better would be to consider whether the effect
size estimates within a given sub-sample have correlated sampling errors.
This would be the case, for instance, if the effect sizes are for different
outcome measures (or measures of the same outcome at different points in
time), assessed on the same sub-sample of individual participants. Details
on how to do this can be found here:
https://www.jepusto.com/imputing-covariance-matrices-for-multi-variate-meta-analysis/
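As a rough sketch of the approach described in that post, you can build an
imputed covariance matrix with clubSandwich::impute_covariance_matrix() and
pass it to rma.mv() in place of the vector of sampling variances. The
correlation r = 0.7 below is an assumed value, not something estimated from
your data; choose it based on what you know about the outcome measures.

```r
library(clubSandwich)
library(metafor)

# Impute a block-diagonal covariance matrix for the sampling errors,
# treating effect sizes from the same study as correlated at r = 0.7
# (an assumed value -- adjust to fit your measures):
V_mat <- impute_covariance_matrix(
  vi = MA_dat_raw$SV,           # sampling variances of the effect sizes
  cluster = MA_dat_raw$IDstudy, # effects within a study share sampling error
  r = 0.7                       # assumed within-study correlation
)

# Fit the multilevel model using the imputed V matrix instead of SV:
CHE_model <- rma.mv(ES_corrected, V_mat,
                    random = ~ 1 | IDstudy / IDeffect,
                    data = MA_dat_raw)
```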
The nice thing about RVE is that the standard errors for the average effect
are calculated in a way that does not require the correct specification of
the random effects structure. As a result, you should get very similar
standard errors regardless of whether you include random effects for all
three levels or whether you exclude a level. However, the variance
component estimates are still based on an assumption that the model is
correctly specified. I think it would therefore be preferable to use the
model that captures the theoretically relevant levels of variation, so in
this case, all three levels.
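To obtain RVE standard errors from a fitted rma.mv() model, one option is to
pass the model to metafor's robust() function with the clubSandwich
small-sample corrections turned on, which is advisable when the number of
studies is small. A minimal sketch, assuming the nested_UN model from above:

```r
library(metafor)

# Cluster-robust (RVE) standard errors, clustered at the study level,
# with CR2 small-sample adjustments and Satterthwaite degrees of freedom
# via the clubSandwich package:
robust(nested_UN, cluster = MA_dat_raw$IDstudy, clubSandwich = TRUE)
```

The point estimates are unchanged; only the standard errors, tests, and
confidence intervals are recomputed under the weaker assumptions.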
It depends on what you mean by "take care of" this issue. RVE does not
really solve the problem of how to model within- versus between-sample
variation in a predictor, but it does mean that you can be less worried
about getting the variance structure exactly correct. To address the issue
you raise, one thing you could do is include a version of the moderator
that is centered within each study, in addition to the study-level mean of
the moderator. This would let you parse out "same-level" versus
"different-level" variation in the moderator. However, with so few studies
that have more than one level of the moderator, the within-study version of
the predictor will have very little variation and so it will come with a
large standard error.
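The centering step above can be sketched as follows. Here "moderator" is a
hypothetical variable name standing in for whichever moderator you have in
mind:

```r
# Split a moderator into its study-level mean and a within-study
# centered version (MA_dat_raw$moderator is a placeholder name):
MA_dat_raw$mod_mean     <- ave(MA_dat_raw$moderator, MA_dat_raw$IDstudy)
MA_dat_raw$mod_centered <- MA_dat_raw$moderator - MA_dat_raw$mod_mean

# mod_mean captures between-study variation in the moderator;
# mod_centered captures within-study variation. Include both:
centered_UN <- rma.mv(ES_corrected, SV,
                      mods = ~ mod_mean + mod_centered,
                      random = ~ 1 | IDstudy / IDeffect,
                      data = MA_dat_raw)
```

With few studies contributing within-study variation, expect the coefficient
on mod_centered to be imprecisely estimated, as noted above.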