
[R-meta] Publication bias/sensitivity analysis in multivariate meta-analysis

Dear Norman, dear all,

To clarify the notions:

Small-study effects: the phenomenon that small studies show effects 
different from those of large studies. The term was coined by Sterne 
et al. (Sterne, J. A. C., Gavaghan, D., and Egger, M. (2000). 
Publication and related bias in meta-analysis: Power of statistical 
tests and prevalence in the literature.
Journal of Clinical Epidemiology, 53:1119–1129.) Small-study effects 
are seen in a funnel plot as asymmetry.
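Funnel plot asymmetry can also be tested formally with an Egger-type regression: regress the standardized effect (effect / SE) on precision (1 / SE); an intercept different from zero indicates asymmetry. As an illustration only (the function name and numbers below are made up; in practice one would use dedicated meta-analysis software such as R's meta package), a minimal sketch in Python:

```python
import numpy as np
from scipy import stats

def egger_test(effects, se):
    """Egger-type regression test for funnel plot asymmetry.

    Regresses the standardized effect (effect / SE) on precision
    (1 / SE); a non-zero intercept suggests small-study effects.
    Returns the intercept and its two-sided p-value.
    """
    effects, se = np.asarray(effects, float), np.asarray(se, float)
    z = effects / se          # standardized effects
    prec = 1.0 / se          # precisions
    res = stats.linregress(prec, z)
    n = len(z)
    # t-test for intercept = 0, with n - 2 degrees of freedom
    t = res.intercept / res.intercept_stderr
    p = 2 * stats.t.sf(abs(t), df=n - 2)
    return res.intercept, p
```

Note that with only a handful of studies such tests have very low power, which is one of the points studied in the Sterne et al. paper cited above.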

Possible reasons for small-study effects include: heterogeneity, e.g., 
small studies enrolling selected patients (for example, those in worse 
health); publication bias (see below); mathematical artifacts for 
binary data (Schwarzer, G., Antes, G., and Schumacher, M. (2002). 
Inflation of type I error rate in two statistical tests for the 
detection of publication bias in meta-analyses with binary outcomes. 
Statistics in Medicine, 21:2465–2477); or chance.

Publication bias is one possible reason for small-study effects. It 
means that small studies with small, null, or undesired effects are 
not published and are therefore not found in the literature. The 
result is an effect estimate that is biased towards large effects.
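The direction of this bias can be made concrete with a small simulation (all numbers are hypothetical: a true effect of 0.2, significant studies always published, non-significant ones only with probability 0.2). Comparing the inverse-variance pooled estimate from all studies with the one from the published subset shows the upward shift:

```python
import numpy as np

rng = np.random.default_rng(42)
true_mu = 0.2                           # hypothetical true effect
se = rng.uniform(0.05, 0.5, 2000)       # small studies = large SE
y = rng.normal(true_mu, se)             # observed study effects

# selective publication: 'significant' studies (one-sided p < 0.05)
# are always published, non-significant ones only 20% of the time
significant = y / se > 1.645
published = significant | (rng.random(2000) < 0.2)

def pooled(effects, stderrs):
    """Fixed-effect inverse-variance pooled estimate."""
    w = 1.0 / stderrs**2
    return np.sum(w * effects) / np.sum(w)

all_est = pooled(y, se)                          # close to 0.2
pub_est = pooled(y[published], se[published])    # shifted upwards
```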

Sensitivity analysis is one way to investigate small-study effects; 
there is an abundance of literature and methods on how to do this. 
Well-known approaches are selection models, e.g., Vevea, J. L. and 
Hedges, L. V. (1995). A general linear model for estimating effect 
size in the presence of publication bias. Psychometrika, 60:419–435, 
or Copas, J. and Shi, J. Q. (2000). Meta-analysis, funnel plots and 
sensitivity analysis. Biostatistics, 1:247–262.
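To show the mechanics, here is a deliberately simplified sketch in the spirit of such selection models: fixed-effect, a single p-value cutpoint, and a single weight parameter w giving the relative publication probability of non-significant studies. This is not the actual Vevea–Hedges or Copas model (both are considerably richer, with random effects and, for Copas, a correlation parameter); the function name and defaults are illustrative:

```python
import numpy as np
from scipy import optimize, stats

def step_selection_model(y, se, alpha=0.05):
    """ML fit of a simple one-step selection model (fixed-effect).

    Studies with one-sided p < alpha are published with probability 1;
    non-significant studies with relative probability w in (0, 1].
    Returns the selection-adjusted mean effect mu and the weight w.
    """
    y, se = np.asarray(y, float), np.asarray(se, float)
    crit = stats.norm.isf(alpha) * se   # significance cutoff on effect scale
    sig = y > crit                      # which studies are 'significant'

    def negloglik(params):
        mu, w = params
        # probability of significance under N(mu, se^2)
        p_sig = stats.norm.sf((crit - mu) / se)
        # normalizing constant of the weighted (selected) density
        norm_const = p_sig + w * (1.0 - p_sig)
        logdens = stats.norm.logpdf(y, loc=mu, scale=se)
        logw = np.where(sig, 0.0, np.log(w))
        return -np.sum(logdens + logw - np.log(norm_const))

    res = optimize.minimize(negloglik, x0=[np.mean(y), 0.5],
                            bounds=[(None, None), (1e-6, 1.0)])
    mu_hat, w_hat = res.x
    return mu_hat, w_hat
```

A fitted w well below 1 suggests that non-significant studies are under-represented; mu is then the correspondingly adjusted pooled effect.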

I attach a talk with more details.

Best,

Gerta


On 15.06.2020 at 02:28, Norman DAURELLE wrote: