
[R-meta] Sample size and continuity correction

Dear nelly,

See my responses below.
Agreed upon? Not that I am aware of. Some may want at least 5 studies (per group or overall), some 10, and others may be fine even if one group contains only 1 or 2 studies.
That's a vague question, so I can't really answer this in general. Of course, estimates will be imprecise when k is small (overall or within groups).
If this happens, then the p-value is probably fluctuating around 0.05 (or whatever cutoff is used for declaring results significant). The difference between p=.06 and p=.04 is itself very unlikely to be significant (Gelman & Stern, 2006). Or, to use the words of Rosnow and Rosenthal (1989): "[...] surely, God loves the .06 nearly as much as the .05".

Gelman, A., & Stern, H. (2006). The difference between "significant" and "not significant" is not itself statistically significant. American Statistician, 60(4), 328-331.

Rosnow, R. L., & Rosenthal, R. (1989). Statistical procedures and the justification of knowledge in psychological science. American Psychologist, 44, 1276-1284.
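The Gelman & Stern point can be illustrated numerically (a small sketch with made-up estimates and standard errors, not data from any actual analysis):

```r
# two hypothetical subgroup estimates (e.g., log odds ratios), with SEs
# chosen so that one yields p ~ .04 and the other p ~ .06
est1 <- 0.50; se1 <- 0.243
est2 <- 0.45; se2 <- 0.240

p1 <- 2 * pnorm(abs(est1 / se1), lower.tail = FALSE)  # ~ .040 (significant)
p2 <- 2 * pnorm(abs(est2 / se2), lower.tail = FALSE)  # ~ .061 (not significant)

# Wald-type test of the difference between the two estimates
diff   <- est1 - est2
sediff <- sqrt(se1^2 + se2^2)
pdiff  <- 2 * pnorm(abs(diff / sediff), lower.tail = FALSE)  # ~ .88
round(c(p1 = p1, p2 = p2, pdiff = pdiff), 3)
```

One estimate clears the .05 cutoff and the other does not, yet the test of the difference between the two estimates is nowhere near significant.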
If one is worried about the use of 'continuity corrections', then I think the more appropriate reaction is to use 'exact likelihood' methods (such as (mixed-effects) logistic regression models or beta-binomial models) instead of switching to risk differences. There is nothing wrong with the latter per se, but risk differences are a fundamentally different effect size measure compared to risk/odds ratios.
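As a concrete sketch of the exact likelihood approach, one could fit a mixed-effects logistic regression model with metafor's rma.glmm() function (shown here with the BCG vaccine dataset that ships with metafor, purely for illustration):

```r
library(metafor)  # assumes metafor (and lme4, for the GLMM fitting) is installed

dat <- dat.bcg  # 2x2 table data: tpos/tneg (treated), cpos/cneg (control)

# mixed-effects logistic regression model for the (log) odds ratio
# (unconditional model with fixed study effects); zero cells are handled
# directly by the likelihood, so no continuity correction is needed
res <- rma.glmm(measure = "OR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
                data = dat, model = "UM.FS")
summary(res)
```

With model = "CM.EL", one obtains the conditional model with exact likelihood instead.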