
[R-meta] Question about Meta analysis

3 messages · Wolfgang Viechtbauer, Maximilian Steininger

#
Dear Max,

Please see below for my responses.

Best,
Wolfgang
Thanks for the kind feedback.
Thanks for the reproducible example. I had a look:

blsplit(V, dat$studyid)           # split V into one block per study
blsplit(V, dat$studyid, cov2cor)  # the same blocks, converted to correlation matrices

So, study 1 is just a single row and its sampling variance is as given (0.05).

In study 2, the correlation between the two effects should be around 0.5 (it would be exactly 0.5 if you had not specified w1 and w2) due to the shared control group.
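For illustration, a minimal sketch (with made-up numbers) of how such a block arises when two treatment arms are compared against a shared control group, using vcalc() with the same grp1/grp2/w1/w2 coding as in your data:

```r
library(metafor)

# made-up data: two comparisons within one study, both against the same control group
dat2 <- data.frame(studyid = c(2, 2), esid = c(1, 2),
                   grp1 = c("e1", "e2"), grp2 = c("c", "c"),
                   ne = c(40, 40), nc = c(40, 40),
                   yi = c(0.3, 0.4), vi = c(0.05, 0.05))

# covariance due to the shared control group; w1/w2 are the group sizes
V2 <- vcalc(vi, cluster = studyid, grp1 = grp1, grp2 = grp2,
            w1 = ne, w2 = nc, data = dat2)
cov2cor(V2)  # off-diagonal close to 0.5 when the group sizes are (roughly) equal
```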

In study 3, there is just a single row. Just to be clear: you refer to a 'test-retest r = 0.9', but this has no bearing on the sampling variance in V. If it is a within-study design, the computation of the sampling variance should already have accounted for any pre-post correlation.

I am trying to understand your coding for study 4 ("Within-study with one control and two intervention conditions"), which you coded as follows:

   studyid esid design subgroup type time1 time2 grp1 grp2  ne nc  yi   vi
5        4    5      2        1    1     1     2    e    c  40 40 0.5 0.05
6        4    6      2        1    1     1     3    e    c  40 40 0.6 0.05

But this coding implies that there are two independent groups, e and c, where e was measured at time point 1 and c at time points 2 and 3. I am not sure if I really understand this design.

In study 5, there are two subgroups. Since there is (presumably) no overlap of subjects across subgroups, the sampling errors across subgroups are independent, so we just have two cases of what we have in study 2.

For study 6, your coding is:

   studyid esid design subgroup type time1 time2 grp1 grp2  ne nc  yi   vi
11       6   11      2        1    1     1     2    c    c  90 90 1.1 0.05
12       6   12      2        1    2     1     2    c    c  90 90 1.2 0.05

But I think the coding should be:

   studyid esid design subgroup type time1 time2 grp1 grp2  ne nc  yi   vi
11       6   11      2        1    1     1     2    e    e  90 90 1.1 0.05
12       6   12      2        1    2     1     2    e    e  90 90 1.2 0.05

although this makes no difference to V. Note that r = 0.9 is again irrelevant here, but for a different reason since it happens to cancel out in the computation of the covariance.
See above.
I recently added the measure "SMCRP" to escalc(). It uses the pooled SD of the pre and post scores to standardize the mean change.
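A minimal sketch of its use (the summary statistics are made up; m1i/m2i are the pre/post means, sd1i/sd2i the corresponding SDs, ri the pre-post correlation):

```r
library(metafor)

# made-up pre/post summary data for a single group
dat <- escalc(measure = "SMCRP",
              m1i = 25, m2i = 22,   # pre and post means
              sd1i = 6, sd2i = 5,   # pre and post SDs (pooled for standardizing)
              ri = 0.9, ni = 40)    # pre-post correlation and sample size
summary(dat)
```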
#
Dear Wolfgang,

As always, many thanks!
I see, that clears things up.
I guess in that case I just mis-specified it. If it is a pure within-design (always the same subjects in every condition), then grp1 and grp2 are supposed to always have the same value (so "e" for each cell)? Seems like I got that wrong, thanks for making me aware.
Makes sense.
Agreed, and by specifying "random = ~ 1 | studyid/esid" in my model, the dependency of the effect sizes from that study should be taken care of.
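For concreteness, a minimal sketch of such a model fit (assuming a data frame dat and a V matrix constructed as discussed above):

```r
library(metafor)

# multilevel model: random effects at the study level and for effect sizes
# within studies; V carries the sampling-error dependencies
res <- rma.mv(yi, V, random = ~ 1 | studyid/esid, data = dat)
summary(res)
```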
Great! Just out of curiosity: is it based on the approach by Cousineau (2020)? doi:10.20982/tqmp.16.4.p418

Thanks a lot for taking the time to help!

Best,
Max
#
Please see my responses below.

Best,
Wolfgang
Yes, correct. Same letter/number = same group of subjects.
Yes, correct. With this, you are modeling dependency in the underlying true effects, which can still be present even if the sampling errors are independent.
Yes -- see: https://wviechtb.github.io/metafor/reference/escalc.html#-a-measures-for-quantitative-variables-2