Lukasz Stasielowicz
Osnabrück University
Institute for Psychology
Research methods, psychological assessment, and evaluation
Lise-Meitner-Straße 3
49076 Osnabrück (Germany)
Twitter: https://twitter.com/l_stasielowicz
Tel.: +49 541 969-7735
On 15.12.2023 12:00, r-sig-meta-analysis-request at r-project.org wrote:
> Send R-sig-meta-analysis mailing list submissions to
> r-sig-meta-analysis at r-project.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
> or, via email, send a message with subject or body 'help' to
> r-sig-meta-analysis-request at r-project.org
>
> You can reach the person managing the list at
> r-sig-meta-analysis-owner at r-project.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of R-sig-meta-analysis digest..."
>
>
> Today's Topics:
>
> 1. effect size estimates distribution and field-specific
> benchmarks (Yefeng Yang)
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 14 Dec 2023 23:40:30 +0000
> From: Yefeng Yang <yefeng.yang1 at unsw.edu.au>
> To: "r-sig-meta-analysis at r-project.org"
> <r-sig-meta-analysis at r-project.org>
> Subject: [R-meta] effect size estimates distribution and
> field-specific benchmarks
> Message-ID:
> <SYCPR01MB54233D332FA8131F3559A0AA9D8CA at SYCPR01MB5423.ausprd01.prod.outlook.com>
>
> Content-Type: text/plain; charset="utf-8"
>
> Dear community,
>
> I have a question about the distribution of effect sizes. I would appreciate any thoughts or comments you might have.
>
> Briefly, my question is as follows:
>
> I have a collection of effect size estimates (say, SMDs) from a specific field. Assume the dataset is free of publication bias. I now want to derive empirical benchmarks to characterize the magnitude of these effect size estimates.
> I am aware of the pitfalls of empirical benchmarks, since the interpretation of an effect size should be specific to the context of the question and field. For now, though, let's focus on the technical approaches for obtaining reliable benchmarks.
>
> At the moment, I am using four approaches:
>
> 1. using the empirical distribution of the estimates to obtain the relevant percentiles (e.g., the 25th, 50th, and 75th)
> 2. fitting a mixture model to approximate the distribution and obtaining the percentiles from it
> 3. fitting a meta-analysis model to estimate the mean and variance, then recovering the implied normal distribution and its percentiles
> 4. fitting a Bayesian meta-analysis and obtaining the percentiles from the posterior distribution
>
> My aim is to see whether the different approaches converge on similar benchmarks (or, more precisely, percentiles). Do you know of any other approaches? General comments or suggestions are also welcome.
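>
> For concreteness, approaches 1 and 3 might be sketched in R with metafor roughly as follows (dat.normand1999, an SMD-ready example dataset shipped with metafor, is used purely as a stand-in for the actual data):
>
> ```r
> library(metafor)
>
> # Compute SMDs and sampling variances from the example data
> dat <- escalc(measure = "SMD", m1i = m1i, sd1i = sd1i, n1i = n1i,
>               m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat.normand1999)
>
> # Approach 1: percentiles of the empirical distribution of the estimates
> quantile(dat$yi, probs = c(0.25, 0.50, 0.75))
>
> # Approach 3: random-effects model, then percentiles of the implied
> # normal distribution of true effects, N(mu, tau^2)
> res <- rma(yi, vi, data = dat)
> qnorm(c(0.25, 0.50, 0.75), mean = coef(res), sd = sqrt(res$tau2))
> ```
>
> Note that approach 1 mixes sampling error into the spread, whereas approach 3 targets the distribution of true effects, so some divergence between the two is expected by construction.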
>
> Regards,
> Yefeng
>
>
>
>
>
> ------------------------------
>
> End of R-sig-meta-analysis Digest, Vol 79, Issue 20
> ***************************************************