
[R-meta] Multivariate meta-analysis - moderator analysis and tau squared

James,

Thank you very much for your detailed answer!

I took some time to learn (this is my first time doing multivariate
analyses), and I have four quick follow-up questions.

1. Follow-up on moderators (especially categorical moderators with more
   than two levels)
2. Calculating I-squared and whether the Qs are needed
3. Influential study diagnostics (reestimate = FALSE)
4. Calculating the modified precision estimate for the Egger's-type test,
   based on
   https://www.jepusto.com/publication/testing-for-funnel-plot-asymmetry-of-smds/

The data and main analysis can be generated with the code from my original
message (sorry about the correlations being all over the place).

The follow-up questions and related code are presented below
_____________________________________

### 1. About moderators

#The moderators were selected a priori, and the purpose was to test all of
#the moderators for each level of motivation (increased risk noted!)

#In my understanding, the latter half of the output gives the difference
#from the corresponding motivation level in the first half (categorical
#moderator level 0)
setting_res <- rma.mv(g, V, mods = ~ factor(motivation)*I(setting) - 1,
                      random = ~ factor(motivation) | study, struct="UN",
                      data=meta)
setting_res

#Is there a simple way to get the mean effect and its CI for all the
#motivation levels and setting levels? Usually the -1 addition provides
#this, but with the multivariate model, do the level 1 moderator effects
#need to be added to/subtracted from the level 0 effects to obtain the
#mean effects for all levels of motivation under the different moderators?

#Also, a few moderators have more than two levels (the example below has
#three). Would the following be the correct way to run the analysis in
#that case? Again, would the level 1 and level 2 estimates be differences
#from the level 0 estimates, so that the mean effect sizes for all the
#levels are obtained by adding to/subtracting from the level 0 estimate?

length_res <- rma.mv(g, V, mods = ~ factor(motivation)*factor(length) - 1,
                     random = ~ factor(motivation) | study, struct="UN",
                     data=meta, method = "ML")
length_res
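#For concreteness, would a cell-means parameterization like the sketch
#below (dropping the intercept and using only the combined factor, which I
#understand gives one coefficient per motivation-by-length cell) report
#the mean effect and CI for each cell directly, without any adding or
#subtracting? (cells_res is just a hypothetical name for this attempt.)

```r
# Sketch: one coefficient (with CI) per motivation-by-length cell,
# assuming interaction() builds the combined factor correctly here
cells_res <- rma.mv(g, V,
                    mods = ~ interaction(motivation, length) - 1,
                    random = ~ factor(motivation) | study,
                    struct = "UN", data = meta, method = "ML")
cells_res  # each estimate should be a cell mean, not a difference
```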

### 2. About calculating I-squared
#Which of these three methods would you recommend? Also, if I report the
#taus and I-squared values, would that suffice, so that the Qs could be
#left unreported?

#Constructing a block-diagonal version of the variance-covariance matrix
#(V from the original message). (Named Vb rather than c to avoid masking
#base::c.)

Vb <- bldiag(V)

##I2 computations from metafor-project.org

#1st option (solving for W, X, and P)

W <- solve(Vb)
X <- model.matrix(res)
P <- W - W %*% X %*% solve(t(X) %*% W %*% X) %*% t(X) %*% W
100 * res$tau2 / (res$tau2 + (res$k - res$p) / sum(diag(P)))

#2nd option. P comes from the code above. (I am not sure whether computing
#W from the bldiag() output - the block-diagonal matrix - is correct.)

sapply(1:6, function(j)
  100 * res$tau2[j] /
    (res$tau2[j] + (sum(meta$motivation == j) - 1) /
       sum(diag(P)[meta$motivation == j])))

#3rd option (Jackson 2012); the percentages come out lower than with the
#previous two approaches

res.R <- rma.mv(g, V, mods = ~ factor(motivation) - 1,
                random = ~ factor(motivation) | study, struct="UN",
                data=meta)
res.R
res.F <- rma.mv(g, V, mods = ~ factor(motivation) - 1, data=meta)

sapply(1:6, function(j)
  100 * (vcov(res.R)[j,j] - vcov(res.F)[j,j]) / vcov(res.R)[j,j])

### 3. About influential studies
#Is setting reestimate = FALSE in cooks.distance() and dfbetas() okay?
#Otherwise, the diagnostics returned NAs.

#Cook's distance for each outcome (Mahalanobis distance)

cd <- cooks.distance(res, reestimate = FALSE)
cd
plot(cd, type="o", pch=19)

#Cook's distance clustered by study

cd <- cooks.distance(res, ncpus = 1, cluster = meta$study,
                     reestimate = FALSE)
cd
plot(cd, type="o", pch=19)

#DFBETAS (change in standard deviations)

dfb <- dfbetas(res, ncpus = 1, reestimate = FALSE)
dfb
plot(dfb, type="o", pch=19)

#Hatvalues

hats <- hatvalues(res, ncpus = 1)
hats
plot(hats, type="o", pch=19)


### 4. About calculating a modified precision estimate for the Egger's-type
test
#Could mprec (after running the code below) be used in the Egger's test as
#a moderator / modified precision estimate, in keeping with the article's
#recommendations?

weight <- weights(res, type = "diagonal")
View(weight)

mprec <- sqrt(weight)
View(mprec)
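#In other words, a sketch of the model I have in mind (assuming mprec is a
#valid stand-in for the modified precision covariate from the article, and
#egger_res is just my placeholder name):

```r
# Sketch: Egger's-type test with mprec as the moderator; a significant
# mprec coefficient would suggest small-study effects / asymmetry
egger_res <- rma.mv(g, V, mods = ~ mprec,
                    random = ~ factor(motivation) | study,
                    struct = "UN", data = meta)
egger_res
```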

###Thank you very much in advance!###



On Thu, 24 Sep 2020 at 19:21, James Pustejovsky (jepusto at gmail.com)
wrote: