
[R-meta] How to interpret when the results from model-based standard errors and robust variance estimation do not corroborate with each other

Aki,

Thanks for sharing this further information. Let me suggest one other
thing. From your description, it sounds like you are interested in
comparing the relative effectiveness of different approaches to modifying
an "original" intervention--similar to a network meta-analysis. It also
sounds like the regression model that you are fitting does not distinguish
within-study comparisons between intervention types (e.g., a study that
randomized participants to the original intervention, modification A, or
modification B) from between-study comparisons (e.g., one study compared
the original intervention to modification A, another study compared the
original intervention to modification B).

Fitting a model that compares intervention types without distinguishing
within-study from between-study comparisons will result in estimates that
pool across both types of variation. This might explain why your null
findings are at odds with findings from previous single studies, which
would only examine within-study variation. In a situation like this, I
think a good thing to do would be to examine the within-study variation
alone. Two somewhat different ways of doing this would be:
1. Calculate contrasts between pairs of intervention types within each
study, i.e., calculate a new effect size for (B - original) - (A -
original) for each study that includes the original intervention,
modification A, and modification B. (And similarly for (C - original) - (A
- original), (C - original) - (B - original), etc.) Then conduct a
univariate meta-analysis on each of these contrasts. The results will use
only within-study variation in the intervention types. The downsides of
this approach: you'll lose a lot of studies for each contrast of interest,
and the summary meta-analysis for each contrast will potentially be based
on a different set of studies.
2. Create indicator variables for each intervention type, then *center them
by study* (i.e., subtract the mean of each indicator for each study). Run
the meta-regression on the centered indicator variables. This will remove
between-study variation in the intervention types. The downsides are
similar to those of approach (1), but this approach lets you keep a
slightly larger set of studies and conduct everything in one analysis,
rather than running separate analyses for each contrast.
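To make the two approaches concrete, here is a minimal sketch of the data manipulation in Python with pandas. The data frame, effect sizes, and variances are all hypothetical stand-ins (one row per intervention arm, each expressing an effect versus the original intervention); the inverse-variance pooling is a plain fixed-effect summary used only for illustration, not a substitute for a proper random-effects model (e.g., metafor's rma/rma.mv in R):

```python
import pandas as pd

# Hypothetical data: each row is one modified-intervention arm's effect
# size (yi) and sampling variance (vi) versus the original intervention.
dat = pd.DataFrame({
    "study": [1, 1, 2, 2, 3],
    "type":  ["A", "B", "A", "B", "A"],
    "yi":    [0.30, 0.50, 0.10, 0.40, 0.20],
    "vi":    [0.04, 0.05, 0.03, 0.06, 0.04],
})

# Approach 1: within-study contrast (B - original) - (A - original) = B - A.
# Only studies containing both A and B contribute. Note: if A and B share
# the same original-intervention control group, the two effects are
# correlated and the variance should subtract 2*cov(A, B) -- omitted here.
wide = dat.pivot(index="study", columns="type")
contrast = pd.DataFrame({
    "yi": wide[("yi", "B")] - wide[("yi", "A")],
    "vi": wide[("vi", "B")] + wide[("vi", "A")],
}).dropna()

# Simple inverse-variance (fixed-effect) summary of the B-vs-A contrast,
# standing in for the univariate meta-analysis described above.
w = 1 / contrast["vi"]
pooled = (w * contrast["yi"]).sum() / w.sum()

# Approach 2: indicator for modification B, centered within each study,
# so a meta-regression on B_c uses only within-study variation.
dat["B"] = (dat["type"] == "B").astype(float)
dat["B_c"] = dat["B"] - dat.groupby("study")["B"].transform("mean")
```

Study 3 (which has no modification-B arm) drops out of the contrast in approach 1 but still contributes a (zero-centered) row under approach 2, which is the sense in which approach 2 retains a slightly larger set of studies.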

Best,
James
On Mon, Aug 13, 2018 at 1:14 PM Akifumi Yanagisawa <ayanagis at uwo.ca> wrote: