
[R-meta] Egger-type test for multi-level meta-analysis

4 messages · Guido Schwarzer, Wolfgang Viechtbauer, James Pustejovsky

#
Hi all,

Another question on multi-level models (while I am still waiting for an answer on my previous one ;-) ).

I would like to conduct a test for small-study effects for data from a three-level model, e.g., for the dataset dat.konstantopoulos2011.

library("metafor")
library("metadat")
m.ml <- rma.mv(yi, vi, random = ~ 1 | district / school, data = dat.konstantopoulos2011)

If I understand James' comment from February 2018 correctly (https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2018-February/000610.html), I could conduct an Egger-type test for small study effects by using cluster-robust variance estimation following a multi-level meta-regression with the standard error as moderator:

sse.ml <- update(m.ml, mods = ~ sqrt(vi))
library("clubSandwich")
conf_int(sse.ml, vcov = "CR2")

Did I get this right?

Best wishes,
Guido
#
Correct. The slope on sqrt(vi) indicates an association between SE and
effect size. The intercept is a "PET"-style estimate of the average effect
size in a population of infinitely large studies (i.e. SE = 0).
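To make the interpretation concrete, here is a minimal sketch of pulling out both quantities with clubSandwich's robust tests (the exact column names of the returned data frame vary by clubSandwich version, so inspect the object first; this is an illustration, not code from the thread):

library("metafor")
library("metadat")
library("clubSandwich")
m.ml <- rma.mv(yi, vi, random = ~ 1 | district / school, data = dat.konstantopoulos2011)
sse.ml <- update(m.ml, mods = ~ sqrt(vi))
## robust (CR2) tests of both coefficients:
## row "intrcpt"  = PET-style estimate of the average effect at SE = 0
## row "sqrt(vi)" = Egger-type slope, i.e., the SE / effect-size association
coef_test(sse.ml, vcov = "CR2")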

James

On Mon, Jun 19, 2023 at 9:21 AM Dr. Guido Schwarzer via R-sig-meta-analysis
<r-sig-meta-analysis at r-project.org> wrote:

#
Just me thinking out loud here for the moment:

While the extension of the 'Egger regression test' (and the PET/PEESE methods) to multilevel models is straightforward, a bit of thinking is required as to what we are really trying to capture by adding something like sqrt(vi) (or just vi or any other transformation thereof) to the model as a predictor. Selective reporting within studies of the larger / significant effects? Or selective reporting of studies in general? In the latter case, selection may depend on whether at least one effect is significant / large, on the focal effect (in case there is a defined primary endpoint), or on something else. One could argue that cor(sqrt(vi), yi) (in essence what we are examining with the regression test) might capture a bit of all of this and maybe that's the best we can generally do.
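For the example dataset, that raw correlation can be eyeballed directly. Note this ignores the clustering and the working weights entirely, so it is only a crude, informal companion to the model-based regression test (illustrative code, not from the thread):

library("metadat")
## unweighted correlation between the standard errors and the effect sizes
with(dat.konstantopoulos2011, cor(sqrt(vi), yi))
## a naive test of that correlation (ignores the multilevel structure)
with(dat.konstantopoulos2011, cor.test(sqrt(vi), yi))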

Best,
Wolfgang
#
Hi Wolfgang,

Very much agree. My understanding of Egger/PET/PEESE in the multilevel
setting is that they are still essentially tests/adjustments for "small
study effects." Typically, the SEs of effect sizes from a given study will
be quite similar to each other, so sqrt(vi) is mostly going to vary at the
study level rather than within study. As a result, Egger's test here is
really looking at the correlation between the study's SE (mostly a function
of sample size and study design) and the *study-level average effect size
estimate*. Using different working models will change the weighting of that
meta-regression, but it's still almost entirely about study-level stuff.
Thus, these tests/adjustments are only really going to be sensitive to
forms of selective reporting that affect the study-level average effect.
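One quick way to check the premise that sqrt(vi) varies mostly between rather than within clusters is a simple one-way variance decomposition. In dat.konstantopoulos2011 the clusters are districts rather than studies, so this is only a rough analogue of the "SEs are similar within a study" point (illustrative code, not from the thread):

library("metadat")
dat <- dat.konstantopoulos2011
dat$sei <- sqrt(dat$vi)
## a between-district mean square that is large relative to the residual
## mean square indicates that the SEs vary mostly across clusters
summary(aov(sei ~ factor(district), data = dat))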

I like to think of these tests/adjustments as kind of "agnostic" to the
specific form of selective reporting. The best they can do is flag a fishy
pattern (i.e., correlation between study size and average ES) and then it's
up to the analyst to decide whether it's something to worry about or not.
But since these methods aren't based on a specific generative model, they
can't really tell us much about *how* selective reporting might be
happening or the specific *degree* of selective reporting in the domain
under study. In contrast, selection models (like Vevea-Hedges selection
models or the Copas-type models) are more descriptive and specific about
selective reporting mechanisms.
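In metafor, Vevea-Hedges style step-function selection models are available via selmodel(), but only for rma.uni fits, not rma.mv. One crude workaround is to aggregate to a single estimate per cluster first; the rho = 0.7 below is an assumed within-cluster correlation for the aggregation, not a value from the thread, and the whole sketch is illustrative rather than a recommended analysis:

library("metafor")
## convert to an escalc object so aggregate() can be used
dat <- escalc(yi = yi, vi = vi, data = dat.konstantopoulos2011)
agg <- aggregate(dat, cluster = district, rho = 0.7)  # rho is assumed
res <- rma(yi, vi, data = agg)
## one-step (p < .025 vs. p >= .025) selection model
selmodel(res, type = "stepfun", steps = c(0.025))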

James

On Tue, Jun 20, 2023 at 7:51 AM Viechtbauer, Wolfgang (NP) via
R-sig-meta-analysis <r-sig-meta-analysis at r-project.org> wrote: