
[R-meta] Three-level meta-analysis with different sources of dependency

Hi Wilma,

Combining the multi-level meta-analytic approach with RVE is one fairly
low-effort way to address the concern of dependent effect sizes. As far as
implementation, it is simply a matter of running the model results through
the robust() function in metafor. Here's an example, elaborating on the
script you linked to:

# Create multilevel meta-analytic object for overall pooled effect
overall <- rma.mv(yi, vi,
                  data = df,
                  level = 95,
                  method = "REML",    # tau-squared estimator
                  slab = author_year, # study label
                  tdist = TRUE,       # apply Knapp-Hartung adjustment for confidence intervals
                  random = list(~ 1 | study_id,
                                ~ 1 | esid)) # account for dependency in the data
overall_robust <- robust(overall, cluster = study_id, clubSandwich = TRUE)
summary(overall_robust)

Here's an alternate syntax, using R's pipe operator:

# Create multilevel meta-analytic object for overall pooled effect
overall <- rma.mv(yi, vi,
                  data = df,
                  level = 95,
                  method = "REML",    # tau-squared estimator
                  slab = author_year, # study label
                  tdist = TRUE,       # apply Knapp-Hartung adjustment for confidence intervals
                  random = list(~ 1 | study_id,
                                ~ 1 | esid)) |>
  robust(cluster = study_id, clubSandwich = TRUE)
summary(overall)

With either syntax, you'll need to specify the cluster argument to tell
metafor the level at which to cluster the robust variance estimator.
Setting clubSandwich = TRUE applies small-sample adjustments that perform
better when the number of clusters is limited.
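To make that concrete, here's a self-contained sketch using a dataset that
ships with metafor (dat.konstantopoulos2011, school-level effects nested
within districts); "district" plays the role of study_id in your setup. Note
that clubSandwich = TRUE requires the clubSandwich package to be installed.

```r
library(metafor)

# Effects nested within districts, analogous to effect sizes nested
# within studies
dat <- dat.konstantopoulos2011

# Multilevel model with nested random effects
fit <- rma.mv(yi, vi,
              random = ~ 1 | district/school,
              data = dat, method = "REML")

# Model-based vs. cluster-robust inference, clustering at the district
# level with the small-sample adjustment
summary(fit)
summary(robust(fit, cluster = district, clubSandwich = TRUE))
```

Comparing the two summaries shows how much the standard errors and degrees
of freedom change once the dependence is handled robustly.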

A further step would be to implement the correlated-and-hierarchical
effects working model rather than the multi-level meta-analysis (which, as
you noted, assumes independent effect size estimates within studies). The
idea here is to create an approximate sampling variance-covariance matrix
for the effect size estimates, to acknowledge that there is some dependence
in them, even if we're unsure about the exact degree of dependence. You can
implement this using metafor's vcalc() function. Here's a basic example,
assuming a correlation of .6 between effect size estimates from the same
study:

V <- vcalc(vi, cluster = study_id, obs = esid, data = df, rho = 0.6)

Once you've got the V matrix, you feed it into the V argument of rma.mv()
as follows:

overall <- rma.mv(yi = yi, V = V,
                  data = df,
                  level = 95,
                  method = "REML",    # tau-squared estimator
                  slab = author_year, # study label
                  tdist = TRUE,       # apply Knapp-Hartung adjustment for confidence intervals
                  random = list(~ 1 | study_id,
                                ~ 1 | esid)) |>
  robust(cluster = study_id, clubSandwich = TRUE)
summary(overall)

You noted a potential concern that the reason for dependence differs from
study to study, which suggests that assuming the same level of correlation
(e.g., rho = .6) isn't very plausible. The vcalc() function has some
features that would let you make more elaborate assumptions based on timing
of measurements and such (see the documentation here:
https://wviechtb.github.io/metafor/reference/vcalc.html). Depending on how
serious the concern is, it may be worth exploring those features. If the
varying dependence is only a minor feature of the data, however, I think it
would be pretty reasonable and conventional to use a common correlation
assumption, since robust variance estimation / inference methods will still
work even if some aspects of the working model aren't correctly specified.
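One way to reassure yourself about the common-correlation assumption is a
simple sensitivity analysis: refit the model under several assumed values
of rho and check whether the conclusions move. Here's a rough sketch using
dat.assink2016 (bundled with metafor, with multiple effect sizes per
study); substitute your own data frame and study_id / esid columns.

```r
library(metafor)

# Dataset with multiple effect sizes (esid) nested within studies (study)
dat <- dat.assink2016

# A range of plausible within-study correlations
rhos <- c(0.2, 0.4, 0.6, 0.8)

fits <- lapply(rhos, function(r) {
  # Working variance-covariance matrix under assumed correlation r
  V <- vcalc(vi, cluster = study, obs = esid, data = dat, rho = r)
  rma.mv(yi, V,
         random = list(~ 1 | study, ~ 1 | esid),
         data = dat, method = "REML") |>
    robust(cluster = study, clubSandwich = TRUE)
})

# How much do the pooled estimate and its SE shift across assumed rho?
data.frame(rho = rhos,
           est = sapply(fits, function(x) as.numeric(x$beta)),
           se  = sapply(fits, function(x) x$se))
```

If the estimates and standard errors are stable across the rho values, the
exact choice of correlation is unlikely to matter much for your conclusions.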

James

On Tue, Feb 7, 2023 at 2:18 AM Wilma Charlott Theilig via
R-sig-meta-analysis <r-sig-meta-analysis at r-project.org> wrote: