
[R-meta] methods for assessing publication bias while accounting for dependency

3 messages · Lukasz Stasielowicz, James Pustejovsky, Gerta Ruecker

#
Dear Brendan,

unsurprisingly, Wolfgang was faster than me, so I'll just add one more 
reference (with further references) if you're curious about the problems 
of some methods (e.g., trim and fill) even in a basic two-level 
meta-analysis:
Carter, E. C., Schönbrodt, F. D., Gervais, W. M., & Hilgard, J. (2019). 
Correcting for Bias in Psychology: A Comparison of Meta-Analytic 
Methods. Advances in Methods and Practices in Psychological Science, 
2(2), 115–144. https://doi.org/10.1177/2515245919847196


Another possibility for addressing publication bias when dealing with 
dependent effect sizes is to conduct a moderator analysis comparing 
journal articles with other sources (e.g., conference proceedings, 
dissertations). If one is willing to assume that the latter are more 
similar to the unpublished literature than journal articles are, then 
the results of this moderator analysis approximate the magnitude of 
publication bias. Of course, this is only a kind of sensitivity 
analysis, not a perfect estimate of publication bias.
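Such a moderator analysis could be run in metafor along these lines; this is a minimal sketch, assuming a hypothetical data frame `dat` with columns `yi` (effect size), `vi` (sampling variance), `study` and `esid` (cluster and effect-size identifiers, to handle the dependency), and `source` (journal article vs. other):

```r
# Sketch only: 'dat' and its columns are hypothetical placeholders.
library(metafor)

# Multilevel model with effect sizes nested within studies,
# moderated by publication source:
res <- rma.mv(yi, vi, mods = ~ source,
              random = ~ 1 | study/esid, data = dat)
summary(res)
# The coefficient for 'source' gives the estimated difference between
# journal articles and other sources, which (under the assumption above)
# approximates the magnitude of publication bias.
```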


Best,
Lukasz
#
In addition to Wolfgang's and Lukasz's suggestions, I would add that I find
the Mathur and VanderWeele approach pretty compelling. It is not exactly a
"bias adjustment" technique (as Trim and Fill or PET/PEESE purport to be)
but rather a sensitivity analysis, which examines hypothetical questions
such as:
* Supposing that statistically significant results are at most X times more
likely to be published than non-significant results, what is the maximum
degree of bias that would be expected in the overall average effect size
estimate?
* How strong would the selective publication process need to be to reduce
the overall average effect size estimate to no more than Y?
An interesting implication of their results is that there are scenarios
where an overall average effect size cannot possibly be reduced to null,
even with very extreme forms of selective publication.
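For what it's worth, this sensitivity analysis is implemented in the PublicationBias R package. The sketch below reflects my recollection of its interface; the function and argument names (e.g., `pubbias_meta`, `selection_ratio`) have changed across package versions, so check the package documentation before use. The data frame `dat` is a hypothetical placeholder:

```r
# Hedged sketch; argument names may differ by PublicationBias version.
library(PublicationBias)

# Worst-case corrected estimate if significant results are at most
# 4 times as likely to be published as non-significant ones:
pubbias_meta(yi = dat$yi, vi = dat$vi, cluster = dat$study,
             selection_ratio = 4, model_type = "robust")

# The "S-value": how severe selection would have to be to shift the
# pooled estimate down to q = 0.1:
pubbias_svalue(yi = dat$yi, vi = dat$vi, cluster = dat$study,
               q = 0.1, model_type = "robust")
```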

James

On Mon, Feb 28, 2022 at 2:28 PM Lukasz Stasielowicz <
lukasz.stasielowicz at uni-osnabrueck.de> wrote:

#
Hi all,

To add the Copas selection model to the models already suggested:

This model combines the usual random-effects model with a selection 
model describing how the probability of publication depends on both a 
study's effect size and its standard error. A sensitivity analysis then 
investigates how the effect estimates are expected to change with an 
increasing level of selection. A goodness-of-fit test provides a 
plausible selection level, along with a corrected effect size, given the 
data.

The Copas selection model is implemented in R package metasens 
https://cran.r-project.org/web/packages/metasens/
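A minimal sketch of the workflow, assuming a hypothetical data frame `dat` with effect estimates `TE` and standard errors `seTE` (the model is fitted on a meta object created with the meta package):

```r
# Sketch only: 'dat', 'TE', and 'seTE' are hypothetical placeholders.
library(meta)
library(metasens)

m  <- metagen(TE = TE, seTE = seTE, data = dat, sm = "SMD")
c1 <- copas(m)   # Copas selection model over increasing selection levels
summary(c1)      # corrected estimate at the selection level suggested
                 # by the goodness-of-fit test
plot(c1)         # sensitivity plot of the adjusted estimates
```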

For the implementation, see Carpenter et al. 
https://www.jclinepi.com/article/S0895-4356(08)00348-X/fulltext .

For a simulation study, see 
https://onlinelibrary.wiley.com/doi/10.1002/bimj.201000151 .

Best,

Gerta


On 28.02.2022 at 21:44, James Pustejovsky wrote: