
[R-meta] Egger-type test for multi-level meta-analysis

Hi Wolfgang,

Very much agree. My understanding of Egger/PET/PEESE in the multilevel
setting is that they are still essentially tests/adjustments for "small
study effects." Typically, the SEs of effect sizes from a given study will
be quite similar to each other, so sqrt(vi) is mostly going to vary at the
study level rather than within study. As a result, Egger's test here is
really looking at the correlation between the study's SE (mostly a function
of sample size and study design) and the *study-level average effect size
estimate*. Using different working models will change the weighting of that
meta-regression, but it's still almost entirely about study-level stuff.
Thus, these tests/adjustments are only really going to be sensitive to
forms of selective reporting that affect the study-level average effect.
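For concreteness, here is a minimal sketch of such an Egger-type test in a multilevel model fit with metafor. The dataset (dat.konstantopoulos2011, built into metafor) is used purely as a stand-in for a multilevel data structure; the moderator sqrt(vi) is the standard-error term that drives the test:

```r
library(metafor)

# illustrative multilevel dataset bundled with metafor:
# effect sizes (yi, vi) for schools nested within districts
dat <- dat.konstantopoulos2011

# Egger-type test: regress effect sizes on their standard errors
# within a multilevel working model; the coefficient on sqrt(vi)
# captures the small-study (funnel-plot asymmetry) pattern
res <- rma.mv(yi, vi, mods = ~ sqrt(vi),
              random = ~ 1 | district/school, data = dat)
summary(res)
```

Since sqrt(vi) varies mostly between studies (clusters) rather than within them, the test for the sqrt(vi) coefficient is effectively a test on the cluster-level association described above, whatever random-effects working model is used.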

I like to think of these tests/adjustments as kind of "agnostic" to the
specific form of selective reporting. The best they can do is flag a fishy
pattern (i.e., correlation between study size and average ES) and then it's
up to the analyst to decide whether it's something to worry about or not.
But since these methods aren't based on a specific generative model, they
can't really tell us much about *how* selective reporting might be
happening or the specific *degree* of selective reporting in the domain
under study. In contrast, selection models (like the Vevea-Hedges selection
models or the Copas-type models) posit an explicit selection mechanism, so
they can say something more specific and descriptive about how selective
reporting operates.
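As an illustration of the selection-model alternative, metafor's selmodel() fits Vevea-Hedges-type step-function models to a univariate rma() fit. This is a hedged sketch, not a multilevel method: selmodel() works on rma.uni objects, so dependent effect sizes would first need to be aggregated to one estimate per study. The dataset (dat.hackshaw1998, bundled with metafor) is just a convenient one-effect-per-study example:

```r
library(metafor)

# one effect size per study; a stand-in for study-level aggregated data
res <- rma(yi, vi, data = dat.hackshaw1998)

# Vevea-Hedges-type step-function selection model with a single
# cutpoint at p = .025 (one-sided), i.e., "significant vs. not"
sel <- selmodel(res, type = "stepfun", steps = c(0.025))
summary(sel)
```

Unlike the Egger-type regression, the fitted selection weights give an explicit (model-based) estimate of how strongly non-significant results are suppressed, which is what makes these models more descriptive about the mechanism.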

James

On Tue, Jun 20, 2023 at 7:51 AM Viechtbauer, Wolfgang (NP) via
R-sig-meta-analysis <r-sig-meta-analysis at r-project.org> wrote: