
[R-meta] effect size estimates distribution and field-specific benchmarks

Dear Yefeng,


Effect size benchmarks are an interesting topic.

I will mention one potential question related to the third approach you 
described: is it really realistic to assume that the effect sizes are 
normally distributed?

The question follows from the work of Bosco and colleagues, e.g.
Bosco, F. A., Aguinis, H., Singh, K., Field, J. G., & Pierce, C. A. 
(2015). Correlational effect size benchmarks. Journal of Applied 
Psychology, 100(2), 431-449. https://doi.org/10.1037/a0038047

In one table (p. 436), they report percentiles for the distribution of 
effect sizes, which are based on approximately 150,000 effect sizes.
If we compare the distances between the 20th, 50th, and 80th 
percentiles, the distribution does not appear symmetrical:
20th percentile vs. 50th percentile: r = .05 vs. r = .16 [diff = .11]
80th percentile vs. 50th percentile: r = .36 vs. r = .16 [diff = .20]
A similar pattern can be seen if we divide effect sizes into categories 
(e.g., construct type).
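To illustrate the kind of check I have in mind (a sketch only, with 
simulated data, written in Python for brevity): under a normal 
distribution the 50th-20th and 80th-50th percentile distances are 
equal, while a right-skewed distribution shows a larger upper gap, 
as in the Bosco et al. table.

```python
# Sketch with simulated effect sizes (not the Bosco et al. data):
# compare percentile spacing for a symmetric vs. a right-skewed
# distribution of correlations.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical distributions, both with median near r = .16
normal_r = rng.normal(loc=0.16, scale=0.12, size=100_000)
skewed_r = rng.lognormal(mean=np.log(0.16), sigma=0.8, size=100_000)

def percentile_gaps(x):
    """Return (50th - 20th, 80th - 50th) percentile distances."""
    p20, p50, p80 = np.percentile(x, [20, 50, 80])
    return round(p50 - p20, 3), round(p80 - p50, 3)

print("normal:", percentile_gaps(normal_r))  # gaps roughly equal
print("skewed:", percentile_gaps(skewed_r))  # upper gap clearly larger
```

Applying the same two-gap comparison to one's own dataset is a quick 
first look at whether the normality assumption is tenable.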

Of course, it is possible that the effects are distributed normally in 
your dataset or that you have already thought about the normality 
assumption. I just wanted to point out potential questions.

Another issue that comes to mind pertains to the Bayesian analysis. The 
choice of the prior distribution (t, normal, Cauchy, etc.) might 
influence the shape of the posterior distribution, so testing only one 
prior would need a strong justification. One possible solution is to 
conduct sensitivity analyses with different prior distributions and 
parameter values (mean, SD, etc.). Similar posterior distributions 
would indicate that the results are relatively robust.
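A minimal sketch of such a sensitivity analysis (in Python, with 
made-up effect sizes and standard errors, and a simple fixed-effect 
grid approximation rather than any particular package): compute the 
posterior for the mean effect under several priors and compare the 
summaries.

```python
# Prior-sensitivity sketch: grid-approximate the posterior for a mean
# effect mu under three priors and compare posterior means/SDs.
# All data values below are hypothetical.
import numpy as np
from scipy import stats

yi = np.array([0.12, 0.25, 0.08, 0.31, 0.18])   # hypothetical effects
sei = np.array([0.05, 0.07, 0.06, 0.08, 0.05])  # hypothetical SEs

grid = np.linspace(-1, 1, 2001)
# Log-likelihood of mu under a simple fixed-effect normal model
loglik = np.sum(
    stats.norm.logpdf(yi[:, None], loc=grid[None, :], scale=sei[:, None]),
    axis=0,
)

priors = {
    "normal(0, 0.5)": stats.norm.logpdf(grid, loc=0, scale=0.5),
    "t(df=3, 0.5)":   stats.t.logpdf(grid, df=3, loc=0, scale=0.5),
    "cauchy(0, 0.5)": stats.cauchy.logpdf(grid, loc=0, scale=0.5),
}

for name, logprior in priors.items():
    logpost = loglik + logprior
    post = np.exp(logpost - logpost.max())
    post /= post.sum()
    mean = np.sum(grid * post)
    sd = np.sqrt(np.sum((grid - mean) ** 2 * post))
    print(f"{name:15s} posterior mean = {mean:.3f}, SD = {sd:.3f}")
```

If the posterior means and SDs barely move across priors, as they 
should here (the likelihood dominates these weak priors), that is the 
kind of robustness evidence I meant.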

Best,
Lukasz