Magnus,
Following up on Wolfgang's reply, here are some pointers to methodological
articles on how this problem plays out (and how to fix it!) with different
effect size metrics:
- Odds ratios: Moreno SG, Sutton AJ, Ades A, et al. Assessment of
regression-based methods to adjust for publication bias through
a comprehensive simulation study. BMC Med Res Methodol. 2009;9:2.
https://doi.org/10.1186/1471-2288-9-2
- Raw proportions: Hunter JP, Saratzis A, Sutton AJ, Boucher RH, Sayers
RD, Bown MJ. In meta-analyses of proportion studies, funnel plots were
found to be an inaccurate method of assessing publication bias. J Clin
Epidemiol. 2014;67(8):897-903.
https://doi.org/10.1016/j.jclinepi.2014.03.003
- Hazard ratios: Debray TP, Moons KG, Riley RD. Detecting small-study
effects and funnel plot asymmetry in meta-analysis of survival data: a
comparison of new and existing tests. Res Synth Methods. 2018;9(1):41-50.
https://doi.org/10.1002/jrsm.1266
- Standardized mean differences: Pustejovsky JE, Rodgers MA. Testing for
funnel plot asymmetry of standardized mean differences. Res Synth Methods.
2019;1-15. https://doi.org/10.1002/jrsm.1332
James
On Sat, Jun 15, 2019 at 1:36 PM Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
Hi Magnus,
My point was that for certain outcome/effect-size measures, the sampling
variance is a function of the size of the outcome/effect. For example:
- for the raw correlation coefficient, the usual large-sample
approximation to the sampling variance is (1-r^2)^2 / (n-1), which depends
on r
- for the standardized mean difference, the usual large-sample
approximation to the sampling variance is 1/n1 + 1/n2 + d^2 / (2*(n1+n2)),
which depends on d
For other measures, there can also be such dependencies, although
sometimes they are not as obvious.
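To make this concrete, here is a tiny numerical sketch in R (the group sizes
and d values below are made up purely for illustration):

n1 <- n2 <- 20
d  <- c(0.2, 0.5, 0.8)
vi <- 1/n1 + 1/n2 + d^2 / (2 * (n1 + n2))
round(vi, 4)
# larger |d| gives a larger sampling variance, even though n1 and n2 are fixed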
Hence, if we use a version of the 'regression test' (to check for funnel plot
asymmetry) in which the sampling variance (or some function thereof, such as
its square root) serves as the 'predictor', this can lead to an inflated
Type I error rate for the test. To avoid this problem,
we can use the sample size (or some function thereof, such as its
reciprocal) as the predictor or use an outcome measure where the sampling
variance is not a function of the size of the outcome/effect (e.g., those
that are obtained via a variance-stabilizing transformation, such as the
r-to-z transformed correlation coefficient or the arcsine square root
transformed risk difference).
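In metafor, this could look roughly like the following (just a sketch, not
tested against your data; 'mydata' and the variables ri, ni, study, and es
are placeholder names):

library(metafor)

# raw correlations: the sampling variance depends on r
dat <- escalc(measure = "COR", ri = ri, ni = ni, data = mydata)
res <- rma(yi, vi, ni = ni, data = dat)

# regression test with the inverse sample size as the predictor
regtest(res, predictor = "ninv")

# or switch to a variance-stabilized measure (r-to-z), whose sampling
# variance 1/(ni - 3) does not depend on the effect, and run the usual test
dat.z <- escalc(measure = "ZCOR", ri = ri, ni = ni, data = mydata)
res.z <- rma(yi, vi, data = dat.z)
regtest(res.z)

# for a multilevel model fitted with rma.mv(), one way to get an Egger-type
# test is to add the chosen predictor as a moderator
rma.mv(yi, vi, mods = ~ I(1/ni), random = ~ 1 | study/es, data = dat.z)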
Best,
Wolfgang
-----Original Message-----
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Magnus Magnusson
Sent: Saturday, 15 June, 2019 20:19
To: r-sig-meta-analysis at r-project.org
Subject: [R-meta] Parameter redundancy
Dear all,
I am using the metafor package (rma.mv) and am currently evaluating
publication bias for a multilevel model using Egger's regression test.
I saw in a post answered by the package author, Wolfgang Viechtbauer, on the
Cross Validated forum that for some measures you have to be aware of
potential parameter redundancy (between the measure and the variance of the
measure) when using the test.
I wonder (1) which measures this refers to and (2) how severe this problem
is likely to be when judging the outcome of a publication-bias test.
Best wishes,
Magnus Magnusson, postdoc at the Swedish University of Agricultural
Sciences based in Umeå
--------------------------------------------------------------------
Magnus Magnusson
Post doc position at
Department of Wildlife, Fish and Environmental Studies
Swedish University of Agricultural Sciences
SE-901 83 Umeå
Sweden
phone: +46(0)90-7868587
e-mail: magnus.magnusson at slu.se