[R-meta] MLMA - shared control group
Please see below.

1) If I get things right, can we copy+paste the matrix code and it always works in similar cases?

If ALL studies are structured like what Wolfgang demonstrated based on Gleser & Olkin's chapter, yes. But note that this formula assumes, for example, that all studies have measured their subjects on a single outcome. If some studies, in addition to having several treatment groups, have more than one outcome or have used one or more post-tests, then this may not be useful in those cases (although extensions are possible for those cases).

One way to avoid all the headache is to guesstimate the correlation among effects arising from the possibly several sources of sampling dependence using V <- clubSandwich::impute_covariance_matrix(), and then feed it into the rma.mv() function via its V argument. There are plenty of examples of this if you search the archives.

2) For meta-regression, we also have to use V, not vi, correct?

In the end, you need to input either vi or V. If you use vi, then you are ignoring sampling dependence. If you ignore such sampling dependence, no major harm is done to your estimates of the average effects (the fixed effects), but your estimate(s) of how variable your effects are at each level may be systematically biased (i.e., even with a very large dataset, you may still not recover the true amount of heterogeneity).

If you don't care about heterogeneity of effect sizes, then knowing about "any correlation among effect sizes" is not necessary, and you can just use vi. But if you do care about heterogeneity of effect sizes, then you can use metafor in conjunction with the clubSandwich package to enter a simple guesstimate of the correlation among effect sizes, as explained above. Because you are using a guesstimate for V, it is a good idea to guard against possible misspecification of your model.
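For concreteness, here is a minimal base-R sketch of the kind of block-diagonal V that such a constant-correlation guesstimate produces (the same structure clubSandwich::impute_covariance_matrix() builds from a cluster id, the sampling variances vi, and an assumed correlation r; the data values below are made up purely for illustration):

```r
## toy data: two studies contributing 2 and 1 effect sizes, sorted by study
dat <- data.frame(study = c(1, 1, 2),
                  vi    = c(0.10, 0.20, 0.15))
r <- 0.6  # guesstimated correlation among effects within a study

## one covariance block per study: cov_ij = r * sqrt(vi_i * vi_j),
## with the known sampling variances vi kept on the diagonal
make_block <- function(vi, r) {
  S <- r * tcrossprod(sqrt(vi))  # guesstimated off-diagonal covariances
  diag(S) <- vi                  # known variances on the diagonal
  S
}
blocks <- lapply(split(dat$vi, dat$study), make_block, r = r)

## assemble the block-diagonal V (metafor::bldiag() does the same job)
k <- nrow(dat)
V <- matrix(0, k, k)
pos <- 0
for (B in blocks) {
  j <- pos + seq_len(nrow(B))
  V[j, j] <- B
  pos <- pos + nrow(B)
}
```

This V would then be passed to rma.mv() via its V argument, e.g. rma.mv(yi, V, random = ~ 1 | study/obs, data = dat).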
So, instead of directly obtaining the results from metafor, you can use the clubSandwich package to take that into account, perhaps using:

clubSandwich::coef_test(model, vcov = "CR2") for the coefficients
clubSandwich::conf_int(model, vcov = "CR2") for the CIs
clubSandwich::Wald_test(model, vcov = "CR2") for comparisons among your coefficients

I would also use a range of "r" values in my impute_covariance_matrix() call to make sure my final results are not too sensitive to my choice of "r". For instance, you can plot the fixed effects and variance components for each version of your model that used a different "r" and see whether your results differ across them.

HTH,
Reza

On Sat, Aug 28, 2021 at 2:39 PM Jorge Teixeira
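A minimal sketch of such a sensitivity analysis (assuming a data frame dat with columns yi, vi, study, and obs; the column names and the set of r values are placeholders, not part of the original thread):

```r
library(metafor)
library(clubSandwich)

rs <- c(0.3, 0.5, 0.7, 0.9)  # candidate guesstimates for r

## refit the same model under each guesstimated correlation
fits <- lapply(rs, function(r) {
  V <- impute_covariance_matrix(dat$vi, cluster = dat$study, r = r)
  rma.mv(yi, V, random = ~ 1 | study/obs, data = dat)
})

## compare the fixed effects and variance components across choices of r
sapply(fits, function(m) c(b = coef(m), sigma2 = m$sigma2))

## robust (CR2) inference for, e.g., the model fitted with r = 0.5
coef_test(fits[[2]], vcov = "CR2")
```

If the fixed effects and variance components are stable across the different values of r, the conclusions are unlikely to hinge on the particular guesstimate chosen.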
<jorgemmtteixeira at gmail.com> wrote:
Hi Reza - thanks for the reply.
1) If I get things right, can we copy+paste the matrix code and it always works in similar cases?
dat <- escalc(measure="MD", m1=em, sd1=esd, n1=en, m2=cm, sd2=csd, n2=cn, data=dat)
## function to construct the variance-covariance matrix of the effects within each study
calc.v <- function(x) {
  v <- matrix(1/x$n2i[1] + outer(x$yi, x$yi, "*")/(2*x$Ni[1]), nrow=nrow(x), ncol=nrow(x))
  diag(v) <- x$vi
  v
}
V <- bldiag(lapply(split(dat, dat$study), calc.v))
V
# fit multilevel model
res_mlma <- rma.mv(yi, V, random = ~ 1 | study/obs, data=dat)
2) For meta-regression, we also have to use V, not vi, correct?
Thanks,
Jorge
Reza Norouzian <rnorouzian at gmail.com> wrote on Saturday, 28/08/2021 at 16:31:
Please see my answers below.
Hey everyone. Regarding MLMA with a shared control group, I wonder:

1) Is it enough to code "studies/obs" and we are done?

res_mlma <- rma.mv(yi, vi, random = ~ 1 | studies/obs, data=dat)
Unfortunately, no, using random effects alone doesn't directly account for that source of dependency. See https://www.metafor-project.org/doku.php/analyses:gleser2009#multiple-treatment_studies for a good discussion of this.
2) Or after that, do we also need to compute a correlation matrix? I got lost in this part.
This type of dependency needs to be specified in rma.mv() via the V argument. See the link in the previous answer for details. Also check out the archives to find several discussions on this.
3) When coding for "studies/obs", the best option is to NOT split the number of participants in obs?
Not sure what you mean here, but `obs` usually denotes the id for each unique row in your data, like:

studies obs
      1   1
      1   2
      2   3
      2   4

When you fit a model via rma.mv() and specify the random part as "studies/obs", a unique random effect for each study and a unique random effect for each row within a study are added to your model. The former accounts for the variation in effects between studies; the latter accounts for the variation in effects within studies.
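In other words, `obs` can simply be a running row index added to the data. A base-R sketch with made-up study ids:

```r
## toy data: two studies, each contributing two rows (effect sizes)
dat <- data.frame(studies = c(1, 1, 2, 2))

## a unique id per row; this is the "obs" in random = ~ 1 | studies/obs
dat$obs <- seq_len(nrow(dat))
dat
##   studies obs
## 1       1   1
## 2       1   2
## 3       2   3
## 4       2   4
```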
3.1) Any good literature to support that decision in MLMA? It still seems strange to me, as it will inflate the actual number of participants.
See my previous answer.
Thanks for your time and best wishes,
Jorge
_______________________________________________
R-sig-meta-analysis mailing list
R-sig-meta-analysis at r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis