
[R-meta] Dependent variable in Meta Analysis

See responses below.
b1 is not % change, exp(b1) is. But yes, one could combine estimates of b1 from different studies even if the units of y differ across studies, as long as they only differ by a multiplicative transformation.
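The invariance to multiplicative rescaling can be checked directly: multiplying y by a constant only shifts the intercept of the log-linear model, leaving b1 untouched. A minimal sketch (the rescaling factor 2.2 and the simulated data are arbitrary assumptions for illustration):

```r
set.seed(42)
x  <- rep(0:1, each = 30)
y  <- exp(0.5 + 0.1 * x + rnorm(60, 0, 0.2))
b  <- coef(lm(log(y) ~ x))
b2 <- coef(lm(log(2.2 * y) ~ x)) # same y, but in different units
b["x"] - b2["x"]                 # slope is identical; only the intercept shifts by log(2.2)
```

This works because log(2.2 * y) = log(2.2) + log(y), so the entire rescaling is absorbed by b0.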
In the model ln(y) = b0 + b1 x + e, if x is a dummy variable that distinguishes two groups (e.g., x = 0 for group 1 and x = 1 for group 2), then b1 is the estimated mean difference in log(y) between the two groups. That is similar (but not identical -- see below) to using the log-transformed ratio of means as the effect size measure. See help(escalc) and search for "ROM". Using (mt-mc)/mc would not be correct, since b1 is not the % change but the log-transformed % change. And log((mt-mc)/mc) = log(mt/mc - 1), which resembles ROM but is not quite right (due to the -1).
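To make the last point concrete, here is the difference between the two quantities with some made-up means (mt = 120 and mc = 100 are purely illustrative):

```r
mt <- 120; mc <- 100
log(mt / mc)        # log ratio of means (ROM): log(1.2) ~ 0.182
log((mt - mc) / mc) # log of the % change = log(mt/mc - 1): log(0.2) ~ -1.609
```

The two are clearly not interchangeable, which is why back-computing ROM from a reported % change requires adding the 1 back in first.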

The reason why using ROM isn't quite right is Jensen's inequality (https://en.wikipedia.org/wiki/Jensen's_inequality). In the regression model, b1 is mean(log(y) for group 2) - mean(log(y) for group 1). However, you have mean(y for group 1) and mean(y for group 2), and when you compute "ROM" from these, you get log(mean(y for group 2)) - log(mean(y for group 1)). These two differences are not the same, although they may not differ greatly. An example:

library(metafor)

set.seed(1234)

# simulate two groups of 50 with a mean difference of 5 on the raw scale
x <- c(rep(0,50), rep(1,50))
y <- 100 + 5 * x + rnorm(100, 0, 10)

# b1: mean difference of log(y) between the two groups
lm(log(y) ~ x)
mean(log(y)[x==1]) - mean(log(y)[x==0])

# log-transformed ratio of means (ROM), computed from the raw group means
log(mean(y[x==1])) - log(mean(y[x==0]))
escalc(measure="ROM", m1i=mean(y[x==1]), m2i=mean(y[x==0]),
       sd1i=sd(y[x==1]), sd2i=sd(y[x==0]), n1i=50, n2i=50)

So, with this caveat aside (but discussed as part of the limitations), I would use ROM for those studies. You can also code 'b1 used vs ROM used' as a dummy variable and examine empirically via meta-regression whether there are systematic differences between the two cases (although such differences could also stem from things other than Jensen's inequality).
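The meta-regression with such a dummy moderator could be sketched as follows (the data frame 'dat' and the moderator name 'src' are hypothetical placeholders, not from any real dataset):

```r
library(metafor)

# hypothetical effect sizes (yi), sampling variances (vi), and a dummy
# src: 0 = b1 taken from a regression model, 1 = ROM computed from means
dat <- data.frame(yi  = c(0.10, 0.15, 0.12, 0.20, 0.08, 0.18),
                  vi  = c(0.004, 0.006, 0.005, 0.007, 0.004, 0.006),
                  src = c(0, 0, 0, 1, 1, 1))

res <- rma(yi, vi, mods = ~ src, data = dat)
res # the coefficient for 'src' tests for a systematic difference between the two types
```

A non-significant 'src' coefficient would suggest it is reasonable to pool both types of estimates together.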

Best,
Wolfgang