
[R-meta] Publication bias/sensitivity analysis in multivariate meta-analysis

8 messages · Huang Wu, James Pustejovsky, Gerta Ruecker +3 more

#
Hi all,

Greetings. I have some questions about publication bias/sensitivity analysis. First, are publication bias and sensitivity analysis the same thing? If not, how are they different?
Second, I saw people use the funnel plot, fail-safe N, and Egger's regression test to test for publication bias (http://www.metafor-project.org/doku.php/features); are these methods applicable to multivariate meta-analysis?
Third, what do you recommend for doing publication bias/sensitivity analysis in multivariate meta-analysis? Thanks.

Best wishes
Huang

#
Dear Huang

Comments in-line.

On 13/06/2020 20:57, Huang Wu wrote:

> First, are publication bias and sensitivity analysis the same thing? If not, how are they different?

Publication bias is a subset of small study effects where you know the
aetiology of the small study effects. If you do not, then it is safer to
refer to small study effects. A sensitivity analysis could be almost
anything, but usually it means fitting the model to one or more data-sets
similar to the original one. Examples are leave-one-out analysis, or
using only a subset of supposed higher quality studies.

> Second, are the funnel plot, fail-safe N, and Egger's regression test applicable to multivariate meta-analysis?

Yes they are.

> Third, what do you recommend to do publication bias/sensitivity analysis in multivariate meta-analysis?

I think what analysis you do will depend on the scientific question.
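
For the leave-one-out analysis mentioned above, here is a minimal sketch
with the metafor package (the one behind the features page you linked).
dat.bcg is the example dataset shipped with metafor, used purely for
illustration; note that leave1out() applies to univariate rma() fits, so
for an rma.mv model one would refit in a loop instead.

library(metafor)
# log risk ratios and sampling variances for the BCG vaccine trials
dat <- escalc(measure = "RR", ai = tpos, bi = tneg,
              ci = cpos, di = cneg, data = dat.bcg)
res <- rma(yi, vi, data = dat)  # random-effects model
leave1out(res)                  # refit the model, omitting one study at a time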

Michael

#
Hi Huang,

Here are two recent studies on methods for detecting small-study effects
and other forms of publication bias in multivariate meta-analysis (a
sketch of the regression approach studied in the second paper follows
below):

* Hong, C., Salanti, G., Morton, S., Riley, R., Chu, H., Kimmel, S. E., &
Chen, Y. (2018). Testing small study effects in multivariate
meta-analysis. arXiv preprint arXiv:1805.09876. https://arxiv.org/abs/1805.09876
* Rodgers, M. A., & Pustejovsky, J. E. (in press). Evaluating meta-analytic
methods to detect selective reporting in the presence of dependent effect
sizes. Psychological Methods. https://doi.org/10.31222/osf.io/vqp8u
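
As a rough illustration of the regression-based approach evaluated in the
second paper: regress the effect sizes on their standard errors in a
multilevel model and test the slope with cluster-robust (sandwich)
methods. This is only a sketch under assumptions: dat is a hypothetical
data frame with columns yi, vi, study, and esid, and the random-effects
structure is illustrative rather than a recommendation.

library(metafor)
dat$sei <- sqrt(dat$vi)                   # standard errors as the small-study predictor
res <- rma.mv(yi, vi, mods = ~ sei,       # Egger-type regression
              random = ~ 1 | study/esid,  # estimates nested within studies (assumed)
              data = dat)
robust(res, cluster = dat$study)          # cluster-robust test of the sei coefficient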

James


#
Hi all, I read this thread, and the topic interests me, but I didn't
quite understand your answer. When you say "Publication bias is a subset
of small study effects where you know the aetiology of the small study
effects. If you do not then it is safer to refer to small study effects,"
I don't really understand what you mean.

I thought publication bias meant that the studies included in a sample of
studies didn't really account for the whole range of possible effect
sizes (with their associated standard errors). Is that not what
publication bias refers to? And if it is, how does it also correspond to
the definition you gave?

Thank you!
Norman
#
Dear Norman, dear all,

To clarify the notions:

Small-study effects: the phenomenon that small studies show
systematically different effects than large studies. The notion was
coined by Sterne et al. (Sterne, J. A. C., Gavaghan, D., and Egger, M.
(2000). Publication and related bias in meta-analysis: Power of
statistical tests and prevalence in the literature. Journal of Clinical
Epidemiology, 53:1119–1129.) Small-study effects are seen in a funnel
plot as asymmetry.
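
In the univariate case this asymmetry can be inspected and tested
directly; a minimal sketch with metafor, using its bundled dat.bcg data
purely for illustration:

library(metafor)
dat <- escalc(measure = "RR", ai = tpos, bi = tneg,
              ci = cpos, di = cneg, data = dat.bcg)
res <- rma(yi, vi, data = dat)
funnel(res)   # funnel plot: effect sizes against standard errors
regtest(res)  # Egger-type regression test for funnel plot asymmetry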

Reasons for small-study effects may be: Heterogeneity, e.g., small 
studies have selected patients (for example, worse health status); 
publication bias (see below), mathematical artifacts for binary data 
(Schwarzer, G., Antes, G., and Schumacher, M. (2002). Inflation of type 
I error rate in two statistical tests for the detection of publication 
bias in meta-analyses with binary outcomes. Statistics in Medicine, 
21:2465–2477), or coincidence.

Publication bias is one possible reason for small-study effects and means
that small studies with small, null, or undesired effects are not
published and are therefore not found in the literature. The result is an
effect estimate that is biased towards large effects.
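
A standard univariate check for this mechanism (listed on the metafor
features page cited at the start of the thread) is the trim-and-fill
method, which imputes the studies presumed missing. A minimal sketch,
reusing the res fit from the previous block:

tf <- trimfill(res)  # impute presumed-missing studies and refit
tf                   # pooled estimate after imputation
funnel(tf)           # funnel plot with the filled-in studies marked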

Sensitivity analysis is one way to investigate small-study effects. There
is an abundance of literature and methods on how to do this. Well-known
models are selection models, e.g., Vevea, J. L. and Hedges, L. V. (1995).
A general linear model for estimating effect size in the presence of
publication bias. Psychometrika, 60:419–435, or Copas, J. and Shi, J. Q.
(2000). Meta-analysis, funnel plots and sensitivity analysis.
Biostatistics, 1:247–262.
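
For instance, recent versions of metafor implement Vevea-Hedges-type
selection models via selmodel(), and the Copas-Shi model is available as
copas() in the metasens package. A minimal sketch, again reusing the
univariate res fit from above; the single cutpoint at p = .025 is an
illustrative choice:

# step-function selection model with one cutpoint at p = .025
selmodel(res, type = "stepfun", steps = c(0.025, 1))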

I attach a talk with more details.

Best,

Gerta


#
Just to add to Gerta's comprehensive reply.

One IPD analysis in which I was involved had a number of small studies 
which were broadly positive and one large one which was effectively 
null. The investigators were convinced that they were very unlikely to 
have missed any other studies and the most likely explanation for the 
small study effect was that the small studies were conducted by 
enthusiasts for the new therapy who often delivered it themselves 
whereas the large study involved many therapists scattered over the 
country who were more likely to represent how it would actually work if 
rolled out. I suspect similar things often happen for complex interventions.

Michael
#
This reminds me a bit about the magnesium treatment meta-analysis, where the ISIS-4 "mega-trial" ended up showing essentially a null effect while the collection of smaller studies beforehand showed a beneficial effect. The example was also used by Matthias Egger for illustrating the idea behind the regression test:

Egger, M., & Davey Smith, G. (1995). Misleading meta-analysis: Lessons from "an effective, safe, simple" intervention that wasn't. British Medical Journal, 310, 752–754.

Egger, M., Davey Smith, G., Schneider, M., & Minder, C. (1997). Bias in meta-analysis detected by a simple, graphical test. British Medical Journal, 315(7109), 629–634.

Best,
Wolfgang
#
Dear Gerta and Michael, thank you for the clarification!
Norman