-----Original Message-----
From: Divya Ravichandar [mailto:divya at secondgenome.com]
Sent: Wednesday, 29 April, 2020 20:15
To: Viechtbauer, Wolfgang (SP)
Cc: r-sig-meta-analysis at r-project.org
Subject: Re: [R-meta] Inner|outer model vs multiple random id terms in rma.mv
Thank you, Prof. Wolfgang. I was wondering how one would interpret a negative
rho (does this imply a negative correlation between the inner levels?).
Also, for a case where rho is negative, is there a preference as to whether
`~ inner | outer` or `~ 1 | inner, ~ 1 | outer` is more applicable?
On Wed, Apr 29, 2020 at 10:45 AM Viechtbauer, Wolfgang (SP)
<wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
Hi Divya,
These two formulations will only yield the same results when rho is
estimated to be >= 0 (which is not the case in the second example).
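To make the correspondence concrete (a minimal sketch, assuming the
case_simple data from your message below): when rho >= 0, the "CS"
parameterization maps onto the two variance components via
tau^2 = sigma^2_Dataset + sigma^2_Cohort and rho * tau^2 = sigma^2_Cohort.

library(metafor)

# fit both parameterizations on the simple example
fit1 <- rma.mv(Effect_size, Standard_error^2,
               random = list(~ 1 | Dataset, ~ 1 | Cohort),
               data = case_simple)
fit2 <- rma.mv(Effect_size, Standard_error^2,
               random = ~ Dataset | Cohort, data = case_simple)

# correspondence that holds when rho >= 0:
#   tau^2       = sigma^2_Dataset + sigma^2_Cohort
#   rho * tau^2 = sigma^2_Cohort
sum(fit1$sigma2)      # should match fit2$tau2
fit2$rho * fit2$tau2  # should match fit1$sigma2[2] (the Cohort component)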
Best,
Wolfgang
-----Original Message-----
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-
On Behalf Of Divya Ravichandar
Sent: Wednesday, 29 April, 2020 19:00
To: r-sig-meta-analysis at r-project.org
Subject: Re: [R-meta] Inner|outer model vs multiple random id terms in rma.mv
Hi all,
Following a recommendation from Prof. Wolfgang to make the input data easier
to access, I have reformatted my earlier example to avoid using an external
CSV file.
I am trying to understand why results from running a model of the form
`~ lvl1 | lvl2` are not comparable to the results of running
`~ 1 | lvl1, ~ 1 | lvl2`.
In a simple example (case_simple in the code below), the results of the two
models are comparable, as expected.
However, when running the two models on a more complex example
(case_complex), markedly different results are obtained: ~ Dataset | Cohort
yields a p-value of 0.02, while list(~ 1 | Dataset, ~ 1 | Cohort) yields a
p-value of 0.2.
Thank you
*Reproducible example*
library(metafor)

# example where the results of the two models agree
case_simple <- data.frame(
  Dataset        = c("a", "b", "c", "d"),
  Cohort         = c("c1", "c1", "c2", "c3"),
  Tech           = c("a1", "a2", "a1", "a1"),
  Effect_size    = c(-1.5, -3, 1.5, 3),
  Standard_error = c(0.2, 0.4, 0.2, 0.4))

res1 <- rma.mv(Effect_size, Standard_error^2,
               random = list(~ 1 | Dataset, ~ 1 | Cohort),
               data = case_simple)
res2 <- rma.mv(Effect_size, Standard_error^2,
               random = ~ Dataset | Cohort, data = case_simple)
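A quick check that the two fits indeed line up here (a sketch, using the
objects fitted above; output not shown):

c(coef(res1), coef(res2))  # pooled estimates should essentially match
c(res1$pval, res2$pval)    # as should the p-values
res2$rho                   # rho is estimated to be non-negative here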
# example where the results of the two models do NOT agree
case_complex <- data.frame(
  Dataset = c("Dt1", "Dt2", "Dt3", "Dt4", "Dt5",
              "Dt5", "Dt6", "Dt7", "Dt8", "Dt9"),
  Cohort  = c("C1", "C2", rep("C3", 5), rep("C4", 2), "C5"),
  Effect_size = c(-0.002024454, -0.003915314, -0.042282757,
                  -1.43826175, -0.045423574, -0.17682309,
                  -21.72691245, -2.559727204, -0.091972279,
                  -0.763332081),
  Standard_error = c(0.15283972, 0.117452325, 0.262002289,
                     0.555230971, 0.708917912, 0.682989908,
                     2.704749864, 1.40514335, 0.735696048,
                     0.713557015))

res1 <- rma.mv(Effect_size, Standard_error^2,
               random = list(~ 1 | Dataset, ~ 1 | Cohort),
               data = case_complex)
res2 <- rma.mv(Effect_size, Standard_error^2,
               random = ~ Dataset | Cohort, data = case_complex)
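To see where the disagreement comes from, one can inspect the estimated
variance/correlation components (a sketch, using the objects fitted above):

res1$sigma2             # Dataset and Cohort components, constrained >= 0
c(res2$tau2, res2$rho)  # rho is negative here, hence the different p-values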
On Wed, Apr 22, 2020 at 9:51 AM Divya Ravichandar <divya at secondgenome.com>
wrote:
Hi all,
I am trying to understand why results from running a model of the form
`~ lvl1 | lvl2` are not comparable to the results of running
`~ 1 | lvl1, ~ 1 | lvl2`.
In the simple example (`case`) below, the results of the two models are
comparable, as expected.
```
case <- data.frame(
  Dataset        = c("a", "b", "c", "d"),
  Cohort         = c("c1", "c1", "c2", "c3"),
  Tech           = c("a1", "a2", "a1", "a1"),
  Effect_size    = c(-1.5, -3, 1.5, 3),
  Standard_error = c(0.2, 0.4, 0.2, 0.4))

res1 <- rma.mv(Effect_size, Standard_error^2,
               random = list(~ 1 | Dataset, ~ 1 | Cohort), data = case)
res2 <- rma.mv(Effect_size, Standard_error^2,
               random = ~ Dataset | Cohort, data = case)
```
However, when running the two models on a more complex example [attached],
markedly different results are obtained: ~ Dataset | Cohort yields a p-value
of 0.02, while list(~ 1 | Dataset, ~ 1 | Cohort) yields a p-value of 0.2.
--
*Divya Ravichandar*
Scientist
Second Genome