[R-meta] Redundant predictors

1 message · Wolfgang Viechtbauer

Dear Arne,

Please keep the mailing list in cc.

Indeed, adding 'extreme' effects could drive up the heterogeneity to the point that reaching a significant result becomes difficult or even impossible. And yes, you could fix the variance component(s) to avoid this. Alternatively, instead of adding very large effects, one could add effects that have the same size as the average effect estimated from the initial model. That would have the opposite effect, driving down heterogeneity as more and more such effects are added.
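To illustrate that second point, here is a minimal sketch of the 'average-sized effects' variant, using the same toy data as in the example below (the choice of k = 10 added effects and the use of the harmonic mean of the sampling variances for the new effects are just illustrative assumptions):

library(metafor)

# toy data (same as in the example below)
yi <- c(0.22, -0.12, 0.41, 0.13, 0.08)
vi <- c(0.008, 0.002, 0.019, 0.010, 0.0145)

res <- rma(yi, vi, method="DL")

# append k new effects, each equal to the average effect from the
# initial model, with the harmonic mean of the sampling variances
k <- 10
yi.avg <- c(yi, rep(coef(res), k))
vi.avg <- c(vi, rep(1/mean(1/vi), k))

res.avg <- rma(yi.avg, vi.avg, method="DL")

res$tau2     # heterogeneity estimate in the initial model
res.avg$tau2 # smaller after adding average-sized effects

Since the added effects sit exactly at the pooled estimate, they contribute essentially nothing to the Q-statistic while increasing its degrees of freedom, so the DerSimonian-Laird estimate of tau^2 shrinks as k grows.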

Even better would be an approach where we simulate new effects taking the amount of heterogeneity and the sampling variability into consideration. For a given number of 'new' effects to be added, one would then repeat this many times, checking in what proportion of cases the combined effect is significant. By increasing the number of new effects to be added, one could then figure out how many effects need to be added such that power to find a significant effect is at least 80% (or some other %). Here is an example of this idea:

library(metafor)

# observed effect sizes and corresponding sampling variances
yi <- c(0.22, -0.12, 0.41, 0.13, 0.08)
vi <- c(0.008, 0.002, 0.019, 0.010, 0.0145)

# random-effects model (DerSimonian-Laird estimator)
res <- rma(yi, vi, method="DL")
res

iters <- 1000 # simulation iterations per number of added effects

maxj <- 20 # maximum number of effects to add

power <- rep(NA, maxj)
pvals <- rep(NA, iters)

set.seed(42)

for (j in 1:maxj) {
   print(j)
   for (l in 1:iters) {
      # simulate j new effects around the average effect, with variance
      # equal to tau^2 plus the harmonic mean of the sampling variances
      yi.fsn <- c(yi, rnorm(j, coef(res), sqrt(res$tau2 + 1/mean(1/vi))))
      vi.fsn <- c(vi, rep(1/mean(1/vi), j))
      pvals[l] <- rma(yi.fsn, vi.fsn, method="DL")$pval
   }
   # power = proportion of iterations with a significant combined effect
   power[j] <- mean(pvals <= .05)
}

plot(1:maxj, power, type="o")
abline(h=.80, lty="dotted")
min(which(power >= .80)) # smallest number of added effects with power >= .80

So, 15 effects would have to be added to reach 80% power. Note that the power curve is a bit wiggly, but one could simply increase the number of iterations to smooth it out.

Best,
Wolfgang