
[R-meta] Multivariate meta-analysis - moderator analysis and tau squared

Hi Mika,

Comments below.

James
On Thu, Sep 24, 2020 at 12:45 PM Mika Manninen <mixu89 at gmail.com> wrote:

This means that one of your input correlation matrices is not positive
definite, i.e., it is not a valid correlation matrix. This could be due to a
typo or to rounding of the entries. You can find the offending matrix using
the following:

isPosDef <- function(x) all(eigen(x)$values > 0)
sapply(corlist, isPosDef)

Although it's probably a minor issue, I would still suggest correcting the
offending matrix before proceeding.
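To make the check concrete, here is a self-contained illustration with made-up matrices (not your data), repeating the isPosDef helper from above so the snippet runs on its own:

```r
isPosDef <- function(x) all(eigen(x)$values > 0)

# A valid correlation matrix (positive definite) and one where a
# flipped sign on r23 makes it invalid -- illustrative values only.
ok  <- matrix(c(1.0, 0.5, 0.5,
                0.5, 1.0, 0.5,
                0.5, 0.5, 1.0), nrow = 3)
bad <- matrix(c(1.0,  0.9,  0.9,
                0.9,  1.0, -0.9,
                0.9, -0.9,  1.0), nrow = 3)

corlist <- list(study1 = ok, study2 = bad)
sapply(corlist, isPosDef)
#> study1 study2
#>   TRUE  FALSE
```

If a matrix fails only because of rounding, one option (an assumption on my part, not something you must do) is to nudge it to the nearest positive definite matrix with Matrix::nearPD() before re-entering the values.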

#Each level of motivation has its own variance component in the output
I would recommend reporting the tau estimates directly. Because they are
model parameters, they are more meaningful and more directly interpretable
than Q or I-squared.
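As a sketch (assuming `res` is your fitted rma.mv model with struct = "UN"), the per-level variance components live in `res$tau2`, so their square roots are the tau estimates for each motivation level:

```r
# Assumes 'res' is a fitted rma.mv model with struct = "UN":
# res$tau2 contains one variance component per motivation level,
# so the square roots are the per-level tau estimates.
round(sqrt(res$tau2), 3)
```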

As far as Q goes, there are a number of different ways to define Q in
multivariate models (or generally, models with multiple variance
components), some of which are global rather than specific to the
dimension of the outcome. Do you want to report Q as a global description
of excess heterogeneity, or do you want something specific to the outcome
dimension, i.e., whether the moderator operates at *specific* motivation
levels?

If it is the former, then you can do a likelihood ratio test comparing the
model with moderator to the model without. You would, however, have to
switch to using ML rather than REML estimation for the variance components.
Syntax as follows:

res_ML <- rma.mv(g, V, mods = ~ factor(motivation) - 1,
                 random = ~ factor(motivation) | study,
                 struct="UN", data = meta,
                 method = "ML")
mod_ML <- update(res_ML, mods = ~ factor(motivation) * I(setting) - 1)
anova(res_ML, mod_ML)

If it is the latter, then the first set of syntax gives you coefficient
estimates for the difference in average effect size between setting = 1
versus setting = 0, for each distinct level of motivation, so you can
interpret those coefficients (CIs, t-tests) directly. If you're looking at
the t-tests for all six levels, it would be prudent to use a correction for
multiple comparisons.
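For the multiplicity correction, something like Holm's method via p.adjust() in base R would work. A minimal sketch with hypothetical p-values (replace these with the six p-values from your model output):

```r
# Hypothetical p-values from the six setting contrasts, one per
# motivation level -- illustrative values only.
p <- c(0.004, 0.020, 0.030, 0.180, 0.450, 0.700)
p.adjust(p, method = "holm")
#> 0.024 0.100 0.120 0.540 0.900 0.900
```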
If these effect sizes are standardized mean differences, then you'll need
to use a modified measure of precision rather than the variance or standard
error of the effect sizes. Details here:
https://www.jepusto.com/publication/testing-for-funnel-plot-asymmetry-of-smds/

There are at least two ways to implement Egger's test in this setting. One
would be to simply add the modified measure of precision as a predictor. A
significant slope coefficient would be indicative of small-study effects.
Alternately, you could interact the predictor with the levels of motivation
and then report the likelihood ratio test, as with the previous question.
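A rough sketch of both approaches, assuming your data frame has per-group sample sizes in columns I'll call n1 and n2 (adjust to your actual variable names) and building on the res_ML model from above:

```r
# Sketch only: 'prec' is a name I'm introducing for the modified
# measure of precision for SMDs, which depends on the sample sizes
# but not on the estimated effect (see the linked post).
meta$prec <- sqrt((meta$n1 + meta$n2) / (meta$n1 * meta$n2))

# (a) Global small-study test: add precision as a predictor and
#     examine its slope coefficient.
egger1 <- update(res_ML, mods = ~ factor(motivation) + prec - 1)

# (b) Level-specific test: interact precision with motivation level
#     and compare to the model without it via a likelihood ratio test.
egger2 <- update(res_ML,
                 mods = ~ factor(motivation) + factor(motivation):prec - 1)
anova(res_ML, egger2)
```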
It is hard to say which approach is more powerful generally. Perhaps others
on the listserv have insights.
You could make a forest plot with the points and whiskers in different
color shades corresponding to the levels of motivation, or make separate
forest plots per level of motivation. Similarly, you could make a funnel
plot with points in different colors corresponding to the levels of
motivation, or separate funnel plots per level of motivation.
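For the funnel plot idea, something along these lines might work (a sketch, assuming the res_ML model and meta data frame from above, and a metafor version where funnel() accepts a per-point col vector):

```r
# Sketch: color funnel-plot points by motivation level.
lev <- factor(meta$motivation)
funnel(res_ML, col = as.integer(lev))
legend("topright", legend = levels(lev),
       pch = 19, col = seq_along(levels(lev)))
```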