
[R-meta] Dependent variable in Meta Analysis

Assuming that the coefficients are commensurable, you can just meta-analyze them directly. The squared standard errors of the coefficients are then the sampling variances.
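To make this concrete, here is a minimal sketch of an inverse-variance (fixed-effect) meta-analysis of the coefficients, using hypothetical estimates and standard errors (the numbers below are made up for illustration):

```python
import numpy as np

# Hypothetical b1 estimates from four studies and their standard errors
b = np.array([0.42, 0.55, 0.38, 0.60])
se = np.array([0.10, 0.15, 0.08, 0.20])

v = se**2  # sampling variances = squared standard errors
w = 1 / v  # inverse-variance weights

# Fixed-effect pooled estimate and its standard error
b_pooled = np.sum(w * b) / np.sum(w)
se_pooled = np.sqrt(1 / np.sum(w))

print(b_pooled, se_pooled)
```

In practice one would typically use a dedicated package (e.g., metafor's rma() in R, with the coefficients as yi and the squared standard errors as vi), which also provides random-effects models and heterogeneity statistics.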

By commensurable, I mean that they measure the same thing and can be directly compared. For example, suppose the regression model y = b0 + b1 x + e has been examined in multiple studies. Since b1 reflects how many units y changes (on average) for a one-unit increase in x, the coefficient b1 is only comparable across studies if y has been measured in the same units across studies and x has been measured in the same units across studies. (Alternatively, if there is a known linear transformation that converts the x from one study into the x from another study, and likewise for y, then one can adjust b1 to make it commensurable across studies.)

In certain models, one can relax the requirement that the units must be the same. For example, if the model is ln(y) = b0 + b1 x + e, then the units of y can actually differ across studies as long as they are multiplicative transformations of each other, since ln(c*y) = ln(c) + ln(y) only shifts the intercept and leaves b1 unchanged. If the model is ln(y) = b0 + b1 ln(x) + e, then x can also differ across studies in terms of a multiplicative transformation, for the same reason.
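A quick numerical check of the first case (simulated data, arbitrary true values): rescaling y by a multiplicative constant, e.g., converting grams to kilograms, leaves the slope of ln(y) on x untouched.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = np.exp(0.5 + 0.3 * x + rng.normal(0, 0.1, 200))  # true b1 = 0.3

# Slope of ln(y) on x in the original units
b1_orig = np.polyfit(x, np.log(y), 1)[0]

# Slope after rescaling y by a multiplicative constant (e.g., g -> kg)
b1_scaled = np.polyfit(x, np.log(y / 1000), 1)[0]

print(b1_orig, b1_scaled)  # identical slopes; only the intercept shifts
```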

I think the latter comes close to (or is?) what people in economics do to estimate 'elasticities', and this may in fact be what you are dealing with.

Another complexity arises when there are other x's in the model. Strictly speaking, all models should include the same set of predictors; otherwise, the coefficient of interest is 'adjusted for' different sets of covariates, which again makes it incommensurable. As a rough approximation to deal with different sets of covariates across studies, one could fit a meta-regression model (with the coefficient of interest as the outcome) that uses dummy variables to indicate, for each study, which covariates were included in the original regression models.
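A sketch of that meta-regression idea, implemented as weighted least squares with inverse-variance weights (all data and covariate names here are hypothetical; a real analysis would use something like rma(yi, vi, mods = ~ ...) in metafor):

```python
import numpy as np

# Hypothetical per-study data: coefficient of interest, its sampling
# variance, and dummies indicating whether each original regression
# model adjusted for "age" and "sex"
b   = np.array([0.40, 0.52, 0.35, 0.61, 0.47, 0.30])
v   = np.array([0.010, 0.020, 0.008, 0.030, 0.015, 0.012])
age = np.array([1, 0, 1, 0, 1, 0])
sex = np.array([1, 1, 0, 0, 1, 0])

# Design matrix: intercept plus the covariate-inclusion dummies
X = np.column_stack([np.ones_like(b), age, sex])
W = np.diag(1 / v)  # inverse-variance weights

# Weighted least squares estimates of the meta-regression coefficients
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ b)
print(beta)  # [intercept, shift when age-adjusted, shift when sex-adjusted]
```

The dummy coefficients then estimate how much adjusting for a given covariate shifts the coefficient of interest, and the intercept approximates the unadjusted effect.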

Best,
Wolfgang