
[R-meta] Non-independent effect sizes for moderator analysis in meta-analysis on odds ratios

Hi Wolfgang,

Thank you for your quick response, which I found very helpful. With regard to the final point, I would just like the meta-analytic effect size estimates for both levels of my moderator expressed as odds ratios rather than log odds ratios. It is my understanding that when you run a meta-analysis using rma.mv(), the estimates it yields are log odds ratios, and I would just like to know if there is a way to back-transform those to odds ratios. The method you pointed me to seems to yield predicted odds ratios for each of the individual studies I analyze, but not to convert the overall meta-analytic effect size estimates back to odds ratios.

Thank you,

Lukas

-----Original Message-----
From: Viechtbauer, Wolfgang (NP) <wolfgang.viechtbauer at maastrichtuniversity.nl> 
Sent: Tuesday, June 13, 2023 1:58 AM
To: R Special Interest Group for Meta-Analysis <r-sig-meta-analysis at r-project.org>
Cc: Sotola, Lukas K [PSYCH] <lksotola at iastate.edu>
Subject: RE: Non-independent effect sizes for moderator analysis in meta-analysis on odds ratios

Dear Lukas,

You are asking about an issue that has been discussed quite extensively on this mailing list, but let me repeat some of the relevant points:

If two odds ratios come from the same sample, then they are not independent. Ignoring this dependency doesn't make your results "biased" (at least not in the sense of how bias is typically defined in statistics); the real issue is that the standard errors of the coefficients in the meta-regression model tend to be too small, leading to inflated Type I error rates and confidence intervals that are too narrow.

To deal with such dependency, one should ideally do several things:

1) Calculate the covariance between the dependent estimates. Just like we can compute the sampling variance of each log odds ratio, we can also compute their covariance. However, doing so is often tricky because the information needed to compute the covariance is typically not reported. Alternatively, one can compute an approximate covariance, making assumptions about the degree of dependency between the estimates (e.g., if the two log odds ratios are assessing a treatment effect at two different timepoints, then they will tend to be more correlated if the two timepoints are closer to each other; or if the two log odds ratios are assessing a treatment effect for two different dichotomous response variables, then they will tend to be more correlated if the two variables are themselves strongly correlated). One can use the vcalc() function to approximate the covariance, making an assumption about the degree of correlation. Typically, when 'guesstimating' the correlation, one also does a sensitivity analysis, assessing whether the conclusions remain unchanged when different degrees of correlation are assumed.
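To make this concrete, here is a minimal sketch of the vcalc() approach with made-up data (the variable names study, esid, yi, vi and the value rho = 0.6 are purely illustrative):

```r
library(metafor)

# illustrative data: two log odds ratios per study
dat <- data.frame(
  study = c(1, 1, 2, 2, 3),            # study identifier (cluster)
  esid  = c(1, 2, 1, 2, 1),            # effect size id within study
  yi    = c(0.4, 0.3, 0.1, 0.2, 0.5),  # log odds ratios
  vi    = c(0.04, 0.05, 0.03, 0.06, 0.04))  # sampling variances

# block-diagonal V matrix: within each study, the covariance between
# two estimates is approximated as rho * sqrt(vi_j * vi_k)
V <- vcalc(vi, cluster = study, obs = esid, rho = 0.6, data = dat)

# for a sensitivity analysis, rebuild V with, e.g., rho = 0.3 and
# rho = 0.8 and check whether the conclusions change
```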

2) In addition to 1), one should try to account for the dependency that may arise in the underlying true effects (i.e., the true log odds ratios). This can be done via a multilevel/multivariate model. This is what you have done with rma.mv().
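A self-contained sketch of such a multilevel model (all data and names are illustrative; the V matrix is built with vcalc() as in point 1):

```r
library(metafor)

# illustrative data: two log odds ratios per study, one moderator
dat <- data.frame(
  study = rep(1:4, each = 2),
  esid  = rep(1:2, times = 4),
  yi    = c(0.4, 0.3, 0.1, 0.2, 0.5, 0.6, 0.2, 0.1),
  vi    = rep(0.05, 8),
  mod   = rep(c("A", "B"), times = 4))

# approximate covariance matrix, assuming rho = 0.6 within studies
V <- vcalc(vi, cluster = study, obs = esid, rho = 0.6, data = dat)

# multilevel meta-regression: random intercepts for studies and for
# estimates nested within studies
res <- rma.mv(yi, V, mods = ~ mod, random = ~ 1 | study/esid, data = dat)
res
```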

3) Finally, one can consider using cluster-robust inference methods (also known as robust variance estimation). However, with a small number of studies, this might not work so well. Alternatively, one can consider using bootstrapping (see https://doi.org/10.1002/jrsm.1554 and the wildmeta package).
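A sketch of the cluster-robust step, applied on top of a fitted rma.mv() model (data and names are again illustrative; clubSandwich = TRUE requests the small-sample adjustments from the clubSandwich package, which must be installed):

```r
library(metafor)

# illustrative data and multilevel model as in the previous points
dat <- data.frame(
  study = rep(1:4, each = 2),
  esid  = rep(1:2, times = 4),
  yi    = c(0.4, 0.3, 0.1, 0.2, 0.5, 0.6, 0.2, 0.1),
  vi    = rep(0.05, 8),
  mod   = rep(c("A", "B"), times = 4))
V   <- vcalc(vi, cluster = study, obs = esid, rho = 0.6, data = dat)
res <- rma.mv(yi, V, mods = ~ mod, random = ~ 1 | study/esid, data = dat)

# cluster-robust (sandwich-type) standard errors, clustering on study
sres <- robust(res, cluster = study, clubSandwich = TRUE)
sres
```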

See also:

https://wviechtb.github.io/metafor/reference/misc-recs.html#general-workflow-for-meta-analyses-involving-complex-dependency-structures

As for your last question: Not sure what exactly you mean by "the results". Based on the meta-regression model, you can compute predicted effects (log odds ratios), which you can back-transform to odds ratios. This can be done with predict(..., transf=exp).
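For a two-level moderator specifically, a sketch of obtaining the pooled odds ratio for each level looks like this (data and names are illustrative; removing the intercept via ~ 0 + mod makes each coefficient the pooled log odds ratio for one level):

```r
library(metafor)

# illustrative data: log odds ratios with a two-level moderator
dat <- data.frame(
  study = rep(1:4, each = 2),
  esid  = rep(1:2, times = 4),
  yi    = c(0.4, 0.3, 0.1, 0.2, 0.5, 0.6, 0.2, 0.1),
  vi    = rep(0.05, 8),
  mod   = rep(c("A", "B"), times = 4))

# no intercept, so each coefficient is the pooled log odds ratio
# for one level of the moderator
res <- rma.mv(yi, vi, mods = ~ 0 + mod,
              random = ~ 1 | study/esid, data = dat)

# back-transform both pooled estimates (and their CIs) to odds ratios
predict(res, newmods = diag(2), transf = exp)
```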

Best,
Wolfgang