
[R-meta] methods for assessing publication bias while accounting for dependency

In addition to Wolfgang's and Lukasz's suggestions, I would add that I find
the Mathur and VanderWeele approach pretty compelling. It is not exactly a
"bias adjustment" technique (as trim-and-fill or PET/PEESE purport to be)
but rather a sensitivity analysis, which examines hypothetical questions
such as:
* Supposing that statistically significant results are at most X times more
likely to be published than non-significant results, what is the maximum
degree of bias that would be expected in the overall average effect size
estimate?
* How strong would the selective publication process need to be to reduce
the overall average effect size estimate to no more than Y?
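To make the first question concrete, here is a minimal sketch of the fixed-effects special case as I understand it: if "affirmative" studies (positive and significant) are eta times more likely to be published than nonaffirmative ones, published nonaffirmative studies are underrepresented by a factor of eta, so the corrected pooled estimate upweights them by eta. The function name and example effect sizes below are illustrative, not from the original method's software (the authors' PublicationBias R package implements the full approach, including random-effects models).

```python
import numpy as np

def corrected_estimate(yi, vi, eta):
    """Fixed-effects pooled estimate corrected for a selection ratio eta.

    yi  : observed effect size estimates
    vi  : their sampling variances
    eta : assumed ratio by which affirmative (positive, significant)
          results are more likely to be published than nonaffirmative ones

    Sketch of the fixed-effects case: nonaffirmative studies are
    upweighted by eta to undo their underrepresentation.
    """
    yi = np.asarray(yi, dtype=float)
    vi = np.asarray(vi, dtype=float)
    z = yi / np.sqrt(vi)
    affirmative = z > 1.96          # positive and two-sided significant at ~0.05
    w = 1.0 / vi                    # inverse-variance weights
    w_adj = np.where(affirmative, w, eta * w)
    return np.sum(w_adj * yi) / np.sum(w_adj)

# Hypothetical toy data: two affirmative studies and one nonaffirmative one.
yi = [0.5, 0.6, 0.1]
vi = [0.04, 0.04, 0.04]
naive = corrected_estimate(yi, vi, eta=1.0)   # eta = 1 recovers the usual estimate
shifted = corrected_estimate(yi, vi, eta=5.0) # stronger assumed selection
```

Comparing the estimate at eta = 1 with the estimate at the hypothesized eta gives the "maximum degree of bias" answer to the first question above; solving for the eta that drags the estimate down to Y answers the second.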
An interesting implication of their results is that there are scenarios
where an overall average effect size cannot possibly be reduced to null,
even with very extreme forms of selective publication.
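The reason for that implication, as I understand the fixed-effects case: as the selection ratio grows without bound, the corrected estimate converges to the pooled estimate of the nonaffirmative studies alone, so if those studies by themselves pool above the null, no degree of selective publication can explain the effect away. A sketch (function name and data are illustrative):

```python
import numpy as np

def worst_case_bound(yi, vi):
    """Limit of the selection-corrected estimate as the selection
    ratio tends to infinity: the inverse-variance-weighted mean of
    the nonaffirmative (nonsignificant or negative) studies only."""
    yi = np.asarray(yi, dtype=float)
    vi = np.asarray(vi, dtype=float)
    nonaffirmative = yi / np.sqrt(vi) <= 1.96   # not both positive and significant
    w = 1.0 / vi[nonaffirmative]
    return np.sum(w * yi[nonaffirmative]) / np.sum(w)

# Hypothetical toy data: the single nonaffirmative study sits above zero,
# so even extreme selection cannot pull the pooled estimate to the null.
bound = worst_case_bound([0.5, 0.6, 0.1], [0.04, 0.04, 0.04])
```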

James

On Mon, Feb 28, 2022 at 2:28 PM Lukasz Stasielowicz <
lukasz.stasielowicz at uni-osnabrueck.de> wrote: