
[R-meta] Interpreting the overall effect adjusted for publication bias, in the absence of publication bias

Hi Daniel,

The intercept in PET is an extrapolation to a study with an infinite sample size (i.e., where the standard error / sampling variance is equal to 0). Given that the studies are typically far away from having an infinitely large sample size, such an extrapolation leads to a large SE for the intercept term and hence a very wide CI / low power for the test of H0: intercept = 0.

Here is an illustration of this issue:

library(metafor)

k <- 20
iters <- 10000

pval1 <- rep(NA, iters)
pval2 <- rep(NA, iters)

for (i in 1:iters) {

   # simulate data (without any publication bias)
   vi <- runif(k, .01, .1)
   yi <- rnorm(k, 0.2, sqrt(vi))

   # fit the standard equal-effects model and save the p-value
   res1 <- rma(yi, vi, method="EE")
   pval1[i] <- res1$pval

   # fit a meta-regression model with the standard errors as predictor and
   # save the p-value for the intercept term
   res2 <- rma(yi, vi, mods = ~ sqrt(vi), method="FE")
   pval2[i] <- res2$pval[1]

}

# power of the tests
mean(pval1 <= .05)
mean(pval2 <= .05)

I kept things simple by not simulating any heterogeneity. Roughly, the data were simulated as if we are dealing with standardized mean differences where studies have sample sizes between 40 and 400 participants and the true standardized mean difference is 0.2. The standard equal-effects model has almost 100% power, while the test of the intercept in the meta-regression model has only around 28% power. Quite a dramatic difference.
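As a quick aside on that sample-size mapping (my addition, not part of the original argument): for a standardized mean difference with equal group sizes, the sampling variance is roughly 4/n (ignoring the small d^2/(2n) term), so the simulated range of vi can be inverted to recover the approximate total sample sizes:

```r
# rough mapping between sampling variance and total sample size for an SMD:
# with equal group sizes, vi ~= 4/n, so n ~= 4/vi
vi <- c(0.10, 0.01)   # endpoints of the simulated range of vi
n  <- 4 / vi          # approximate total sample sizes
n                     # 40 and 400
```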

So, unless there is evidence of publication bias, I would caution against using the significance of the intercept term (or some other 'corrected' estimate) for decision making.
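One more aside (again my addition): if you want to run this Egger-type regression test without setting up the meta-regression by hand, metafor's regtest() function wraps it; with model="lm" it fits the same weighted regression of the effects on their standard errors, and since the default predictor is the standard error, it also reports the limit estimate as sei -> 0, which is the PET-style intercept discussed above.

```r
library(metafor)

# simulate one dataset as in the loop above (no publication bias)
set.seed(1234)
k  <- 20
vi <- runif(k, .01, .1)
yi <- rnorm(k, 0.2, sqrt(vi))

res <- rma(yi, vi, method="EE")

# Egger-type regression test via weighted regression (model="lm");
# the printed limit estimate (as sei -> 0) is the PET intercept
regtest(res, model="lm")
```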

Best,
Wolfgang
Message-ID: <AS8PR08MB91934A7C71FC02ED2C02CD048B512@AS8PR08MB9193.eurprd08.prod.outlook.com>
In-Reply-To: <YT1PR01MB4012C516E414FC3B2AF8A293EC722@YT1PR01MB4012.CANPRD01.PROD.OUTLOOK.COM>