
[R-meta] Pre-test Post-test Control design Different N

Dear Marianne,

For studies that have unpaired samples, you should not be using Morris' formulas. They are for paired samples.

See below for additional comments.

Best,
Wolfgang
Here, you can use what is described in Morris (2008) and Becker (1988). The effect size measure is:

d_T = (mean_T_2 - mean_T_1) / SD_T_1
d_C = (mean_C_2 - mean_C_1) / SD_C_1 
d = d_T - d_C

with T/C denoting treatment/control and 1/2 the pre/post-test timepoint. Here, mean_T_1 and mean_T_2 are computed from the exact same plots and the same goes for mean_C_1 and mean_C_2.
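To make the arithmetic concrete, here is a minimal sketch of the computation above (in Python rather than R, purely for illustration; all means, SDs, and the resulting values are made-up numbers):

```python
# Becker (1988) / Morris (2008) effect size for a paired
# pre-post-control design: each group's change is standardized
# by that group's pre-test SD, then the two are subtracted.
# All numbers below are hypothetical.

# Treatment group: pre mean, post mean, pre-test SD
mean_T_1, mean_T_2, sd_T_1 = 10.0, 14.0, 4.0
# Control group: pre mean, post mean, pre-test SD
mean_C_1, mean_C_2, sd_C_1 = 10.5, 11.5, 5.0

d_T = (mean_T_2 - mean_T_1) / sd_T_1   # standardized change, treatment
d_C = (mean_C_2 - mean_C_1) / sd_C_1   # standardized change, control
d = d_T - d_C                          # the effect size
```

With these made-up inputs, d_T = 1.0, d_C = 0.2, and d = 0.8.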
The formula given in the Cochrane Handbook is for pooling together two *unpaired/independent* samples. If the same plots have been measured twice, then combining the two means/SDs into a single mean/SD is more difficult and requires other formulas (which will depend also on the correlation between the measurements). Unfortunately, I don't have the time right now to look them up or derive those equations.
If it's a mix of unpaired/paired, it gets even more tricky. But for the case where the plots before and the plots after are different, then it's relatively simple. Then mean_T_1 and mean_T_2 are independent, since they are based on different plots. In essence, d_T is then a standardized mean difference for two independent samples. The same goes for d_C. So just compute a regular standardized mean difference (and its sampling variance) for the treatment group plots, a standardized mean difference (and its sampling variance) for the control group plots, and then take their difference as above. The sampling variance of this difference is just the sum of the two sampling variances.
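For the fully unpaired case, the recipe above can be sketched as follows (again in Python for illustration; the SMD and its large-sample sampling variance use the standard independent-samples formulas, and all sample sizes, means, and SDs are hypothetical):

```python
import math

def smd(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference (m2 - m1) for two independent
    samples, standardized by the pooled SD, with its large-sample
    sampling variance."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    d = (m2 - m1) / sd_pooled
    v = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return d, v

# Treatment group: different plots at pre (time 1) and post (time 2),
# so the two means are independent.
d_T, v_T = smd(10.0, 14.0, 4.0, 4.0, 20, 20)
# Same for the control group plots.
d_C, v_C = smd(10.5, 11.5, 5.0, 5.0, 20, 20)

d = d_T - d_C   # difference of the two independent SMDs
v = v_T + v_C   # its sampling variance: sum of the two variances
```

The key point the sketch illustrates is the last two lines: because d_T and d_C are computed from different plots, they are independent, so their difference has a sampling variance that is simply the sum of the two sampling variances.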
Why not? That's in essence what we do with unpaired data. Of course, there could still be dependencies for other reasons even in unpaired data, but this is usually/typically not a major concern.
As described above.