[R-meta] Why does rma.mv not show the same results as robumeta?
Thank you for your quick response! Is there any good source of information on which option would be the most adequate for a meta-analysis with dependencies, i.e. whether one should use a) rma.mv; b) rma.mv + robust() or the clubSandwich package; or c) robumeta?

Thank you!

Best wishes,
Catia

On Sun, 23 May 2021 at 17:34, Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
Dear Cátia,

robumeta uses robust variance estimation. If you want to do the same based on an 'rma.mv' object, you need to use robust() or, even better, the clubSandwich package. See here for examples:

https://wviechtb.github.io/metafor/reference/robust.html

However, the results still won't be exactly the same. There is at least one post in the archives that discusses the somewhat subtle differences. If you go here:

https://www.google.com/search?hl=EN&source=hp&q=site:https://stat.ethz.ch/pipermail/r-sig-meta-analysis

you can add some appropriate search strings to find those posts (I believe it was James Pustejovsky who explained this quite thoroughly, so you might want to include 'James' in your search terms).

Best,
Wolfgang
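For instance, a minimal sketch of the two approaches just mentioned, based on the rma.mv fit quoted further down in this thread (rma.model, Data, and the Study variable come from Cátia's code; check the linked robust() documentation for the exact arguments):

# cluster-robust inference based on the fitted rma.mv model
library(metafor)
library(clubSandwich)

# robust() with the bias-reduced (CR2) adjustment from clubSandwich
robust(rma.model, cluster = Study, clubSandwich = TRUE)

# or call clubSandwich directly for CR2 standard errors with
# Satterthwaite degrees of freedom
coef_test(rma.model, vcov = "CR2", cluster = Data$Study)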
-----Original Message-----
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Cátia Ferreira De Oliveira
Sent: Sunday, 23 May, 2021 3:51
To: r-sig-meta-analysis at r-project.org
Subject: [R-meta] Why does rma.mv not show the same results as robumeta?
Hello,

I have conducted a meta-analysis, which I am currently analysing, looking at the relationship between memory and language/literacy; multiple studies contributed more than one effect size. I preregistered doing the analyses in robumeta, but I am interested in checking how the results converge across packages, as I am tempted to use metafor for my next meta-analysis given how easy it is to plot, check for publication bias, etc. with this package. When I ran both models, they produced different results and I am a bit unsure as to why. I know the estimates are not that different, but what surprises me is that DD has the higher estimate in one model, whereas in the other it is the DLD group. Maybe I have done something wrong. Does anyone have any thoughts?
# multilevel model looking at the relationship between memory and
# language/literacy; multiple studies have contributed multiple effect sizes
head(Data)
rma.model <- rma.mv(yi, vi, mods = ~ factor(Group) - 1,
                    random = ~ 1 | Study/effectsizeID, data = Data)
rma.model
Multivariate Meta-Analysis Model (k = 414; method: REML)

   logLik  Deviance       AIC       BIC      AICc
 -13.0662   26.1323   36.1323   56.2253   36.2805

Variance Components:

            estim    sqrt  nlvls  fixed              factor
sigma^2.1  0.0109  0.1044     37     no               Study
sigma^2.2  0.0082  0.0903    414     no  Study/effectsizeID

Test for Residual Heterogeneity:
QE(df = 411) = 588.9613, p-val < .0001

Test of Moderators (coefficients 1:3):
QM(df = 3) = 11.1370, p-val = 0.0110

Model Results:
robu.model <- robu(formula = yi ~ factor(Group) - 1, data = Data,
                   studynum = Study, var.eff.size = vi,
                   rho = .8, small = TRUE)
print(robu.model)

RVE: Correlated Effects Model with Small-Sample Corrections

Model: yi ~ factor(Group) - 1

Number of studies = 37
Number of outcomes = 414 (min = 1, mean = 11.2, median = 6, max = 52)
Rho = 0.8
I.sq = 52.35398
Tau.sq = 0.02918897
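One way to probe the discrepancy, sketched under the assumption that the same Data object is available: mirror robumeta's correlated-effects working model in metafor by imputing a block-diagonal within-study covariance matrix with the same rho = 0.8 (here via clubSandwich::impute_covariance_matrix) and then applying CR2 robust inference. The two fits still won't match exactly, since robumeta constructs its weights differently, but the estimates should move closer together.

# approximate robumeta's correlated-effects working model in metafor:
# impute a V matrix assuming a common within-study correlation of 0.8
# among the sampling errors
library(metafor)
library(clubSandwich)

V <- impute_covariance_matrix(vi = Data$vi, cluster = Data$Study, r = 0.8)

# random effects at the study level only, as in the CE model
rma.model2 <- rma.mv(yi, V, mods = ~ factor(Group) - 1,
                     random = ~ 1 | Study, data = Data)

# CR2 robust tests, clustered by study
coef_test(rma.model2, vcov = "CR2", cluster = Data$Study)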
Thank you!
Best wishes,
Catia
Cátia Margarida Ferreira de Oliveira
Psychology PhD Student
Department of Psychology, Room B214
University of York, YO10 5DD