Dear Wolfgang and All,

I conducted a multilevel mixed-effects meta-analysis and found differences between the levels of two moderators. I was expecting to find non-overlapping confidence intervals. However, I obtained overlapping confidence intervals for all subgroups. How can I interpret these results? In such a situation, should I trust the Q-test or the CIs? I controlled for phylogenetic non-independence. Is there a chance that this approach affects the estimation of the CIs obtained with the predict function? My dataset and script are attached.

Best wishes,

_______________________________________________________
Prof. Dr. Rafael Rios Moura
Coordenador de Pesquisa e do NEPEE/CNPq
Laboratório de Ecologia e Zoologia (LEZ)
UEMG - Unidade Ituiutaba
ORCID: http://orcid.org/0000-0002-7911-4734
Currículo Lattes: http://lattes.cnpq.br/4264357546465157
Research Gate: https://www.researchgate.net/profile/Rafael_Rios_Moura2
Rios de Ciência: https://www.youtube.com/channel/UCu2186wIJKji22ai8tvlUfg

[Attachments scrubbed by the mailing list: script.R, pruned_super-tree.tre, dataset.csv]
[R-meta] Overlapping CIs with significant difference among subgroups
9 messages · Rafael Rios, Wolfgang Viechtbauer, Gerta Ruecker
Dear Rafael,

CIs can overlap and yet the difference between the two levels can be significant. See, for example:

https://towardsdatascience.com/why-overlapping-confidence-intervals-mean-nothing-about-statistical-significance-48360559900a?gi=b673a691634d
https://www.psychologicalscience.org/observer/understanding-confidence-intervals-cis-and-effect-size-estimation
https://blog.minitab.com/blog/real-world-quality-improvement/common-statistical-mistakes-you-should-avoid

and many more (just google for "test difference overlapping confidence intervals" or something along those lines). They don't talk about meta-analysis per se, but it's the same principle. So, you can trust the test of the difference between the levels of the moderators.

Best,
Wolfgang
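[Editor's note: the point above can be verified with a small numeric sketch. The two subgroup estimates (0.20 and 0.50, each with SE = 0.10) are hypothetical, chosen only for illustration; it is shown in Python using the standard library, but the same checks in R would use qnorm/pnorm.]

```python
from statistics import NormalDist

nd = NormalDist()
z95 = nd.inv_cdf(0.975)  # ~1.96, like qnorm(.975) in R

# two hypothetical subgroup estimates with equal standard errors
b1, se1 = 0.20, 0.10
b2, se2 = 0.50, 0.10

ci1 = (b1 - z95 * se1, b1 + z95 * se1)
ci2 = (b2 - z95 * se2, b2 + z95 * se2)
overlap = ci1[1] > ci2[0]  # upper bound of CI 1 exceeds lower bound of CI 2

# Wald-type test of the difference between the two estimates
diff = b2 - b1
se_diff = (se1**2 + se2**2) ** 0.5
p = 2 * (1 - nd.cdf(abs(diff / se_diff)))

print(overlap, round(p, 3))  # True 0.034 -> CIs overlap, yet p < .05
```

The 95% CIs are roughly (0.00, 0.40) and (0.30, 0.70), so they clearly overlap, while the test of the difference is significant at the .05 level.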
Dear Dr. Wolfgang,

Thank you very much! Since confidence intervals are not very informative for exhibiting differences between subgroups, why is this practice so common among meta-analysts? Why not present standard errors instead of CIs?

Best wishes,
Rafael.
Dear Rafael,

What specifically do you mean by "this practice"? Presenting estimated (average) effects with their CIs when subgrouping the studies based on some categorical variable?

Indeed, one cannot directly infer from the CIs whether the subgroups are actually different from each other. For this, one should conduct a proper test of subgroup differences. One can also directly test whether the difference between two effects is significant, or present an estimate of the difference between two effects with a corresponding CI (and if that CI excludes 0, then one knows that the test of the difference is significant at alpha = (100 - CI level)/100). But I see nothing generally wrong with the practice of presenting subgroup effects with CIs.

Best,
Wolfgang
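[Editor's note: the equivalence stated above — the CI for the difference excludes 0 exactly when the test of the difference is significant at the corresponding alpha — can be sketched as follows. The numbers are hypothetical; Python is used for illustration (in R, qnorm/pnorm would serve the same role).]

```python
from statistics import NormalDist

nd = NormalDist()

def diff_ci_and_p(b1, se1, b2, se2, level=0.95):
    """Wald CI for the difference b2 - b1 and the matching two-sided p-value."""
    z_crit = nd.inv_cdf((1 + level) / 2)
    diff = b2 - b1
    se = (se1**2 + se2**2) ** 0.5
    lo, hi = diff - z_crit * se, diff + z_crit * se
    p = 2 * (1 - nd.cdf(abs(diff / se)))
    return (lo, hi), p

# hypothetical subgroup estimates and standard errors
(lo, hi), p = diff_ci_and_p(0.20, 0.10, 0.50, 0.10)

# the 95% CI for the difference excludes 0 exactly when p < .05
print((lo > 0 or hi < 0) == (p < 0.05))  # True
```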
Dear Dr. Wolfgang,

Thank you for the feedback. I was wondering why meta-analysts do not exhibit standard errors instead of confidence intervals in graphs. I can understand the importance of showing that the CIs do not include zero, but standard errors can be more informative when comparing subgroups of a moderator. This is just a curiosity.

Best wishes,
Rafael.
Dear Rafael,

First of all, the information content of standard errors and confidence intervals is identical; they can be transformed into each other. Secondly, to present standard errors in a graph, one would probably show x ± SE(x) instead of x ± 1.96*SE(x). But what would be the advantage? The interpretation of this interval would be that the true value is covered by 68% of all such intervals (= 1 - 2*(1 - pnorm(1))). I don't think that this is of more interest than a confidence interval. The main aim of a forest plot is interval estimation, not statistically comparing different studies.

Best,
Gerta
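[Editor's note: the 68% coverage figure quoted above follows directly from the normal distribution; here is the calculation in Python, mirroring the R expression 1 - 2*(1 - pnorm(1)).]

```python
from statistics import NormalDist

phi = NormalDist().cdf  # standard normal CDF, like pnorm() in R

# coverage of the interval x +/- 1*SE(x) under normal sampling theory
coverage_1se = 1 - 2 * (1 - phi(1))  # same as 2*phi(1) - 1

print(round(coverage_1se, 4))  # 0.6827
```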
Dr. rer. nat. Gerta Rücker, Dipl.-Math.
Institute of Medical Biometry and Statistics,
Faculty of Medicine and Medical Center - University of Freiburg
Stefan-Meier-Str. 26, D-79104 Freiburg, Germany
Phone: +49/761/203-6673  Fax: +49/761/203-6680
Mail: ruecker at imbi.uni-freiburg.de
Homepage: https://www.uniklinik-freiburg.de/imbi.html
I was going to ask the same thing. I don't see how SEs would be more informative than CIs.

But -- if two (independent) estimates have the same precision (i.e., standard error), then one can show that their 83.4% CIs will just touch when the (two-sided) p-value for a Wald-type test of the difference is equal to .05. So, in that case, 83.4% CIs will directly tell you whether the difference is significant or not. Unfortunately, this doesn't work when the standard errors of the estimates are not the same. The larger the difference in SEs, the wider one needs to make the CIs to have the equivalence 'non-overlap = significant difference'.

Best,
Wolfgang
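[Editor's note: the 83.4% figure above can be derived as follows. With equal SEs, the two CIs just touch when the difference equals 2*z_crit*SE, while the Wald test gives p = .05 when the difference equals 1.96*sqrt(2)*SE; solving for the CI level gives roughly 83.4%. A Python sketch (in R: pnorm/qnorm):]

```python
from math import sqrt
from statistics import NormalDist

nd = NormalDist()
z_alpha = nd.inv_cdf(0.975)  # ~1.96 for a two-sided test at alpha = .05

# equate 2*z_crit*SE (CIs just touching) with z_alpha*sqrt(2)*SE (p = .05)
z_crit = z_alpha * sqrt(2) / 2
level = 2 * nd.cdf(z_crit) - 1

print(round(100 * level, 1))  # 83.4
```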
Dear Dr. Gerta and Dr. Wolfgang,

I want to highlight that the standard error in a meta-analytic approach is equivalent to the standard deviation of a statistic. In primary statistics, the standard error of a mean is the standard deviation divided by the square root of the sample size, and it is used to calculate confidence intervals. In a meta-analysis, the standard error is the square root of the sampling variance. I know this information is clear to you, but it may not be clear to other readers. Standard errors can provide information about statistical significance, since readers generally interpret results by analyzing graphs. However, I agree that confidence intervals provide important information. I started this discussion because I was asked about it by a referee at a high-impact journal. Thank you for the clarification. It was very helpful.

Best wishes,
Rafael.
Dear Rafael,

To avoid any confusion: Yes, the standard error is the square root of the *sampling variance*, which is the variance of a statistic (for example, a mean, an estimated proportion, or an estimated relative risk; there are many other examples of estimates). In particular, for a mean the sampling variance is the (estimated) population variance divided by n, and the standard error is the square root of the sampling variance.

Both the sampling variance and the standard error describe how precise an estimate is, which is reflected by their (inverse) dependence on the sample size. By contrast, the population variance and its square root (the standard deviation) describe the variability of a measure in a population - which is a very different thing. These are general definitions; there is nothing special about meta-analysis here.

Best,
Gerta
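[Editor's note: the SD-versus-SE distinction above can be illustrated with a small hypothetical sample; the numbers are invented purely for demonstration.]

```python
from math import sqrt
from statistics import stdev

# hypothetical sample of a measure in some population
x = [4.1, 5.3, 6.0, 4.8, 5.5, 6.2, 4.9, 5.1]
n = len(x)

sd = stdev(x)      # standard deviation: spread of the measure itself
se = sd / sqrt(n)  # standard error: precision of the estimated mean

# the SE shrinks as n grows; the SD estimates a fixed population quantity
print(se < sd)  # True
```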