Dear Dr. Wolfgang,
Thank you for the feedback. I was wondering why meta-analysts do not
display standard errors instead of confidence intervals in graphs. I can
understand the importance of showing that CIs do not include zero, but
standard errors can be more informative when comparing subgroups of a
moderator. This is just a curiosity.
Best wishes,
Rafael.
On Wed, 3 Jun 2020 at 05:02, Viechtbauer, Wolfgang (SP) <
wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
Dear Rafael,
What specifically do you mean by "this practice"? Presenting estimated
(average) effects with their CIs when subgrouping the studies based on some
categorical variable? Indeed, one cannot directly infer based on the CIs
whether the subgroups are actually different from each other. For this, one
should conduct a proper test of subgroup differences. One can also directly
test whether the difference between two effects is significant or not or
present an estimate of the difference between two effects with a
corresponding CI (and if that CI excludes 0, then one knows that the test
of the difference is significant at alpha = (100 - CI level)/100). But I
see nothing generally wrong with the practice of presenting subgroup
effects with CIs.
Best,
Wolfgang
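The test of the difference described above can be sketched numerically. A minimal example with made-up numbers (the estimates and standard errors below are hypothetical, not from the attached dataset), showing that the 95% CI of the difference excludes 0 exactly when the test is significant at alpha = 0.05:

```python
import math

# Hypothetical subgroup estimates and standard errors
# (illustrative values only, not from the thread's dataset)
b1, se1 = 0.60, 0.12
b2, se2 = 0.20, 0.14

# Wald-type test of the difference (treating the estimates as independent)
diff = b1 - b2
se_diff = math.sqrt(se1**2 + se2**2)
z = diff / se_diff
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# 95% CI for the difference: it excludes 0 exactly when p < 0.05,
# i.e. alpha = (100 - 95)/100 as described above
ci = (diff - 1.96 * se_diff, diff + 1.96 * se_diff)
```

With these numbers the 95% CI of the difference is roughly (0.04, 0.76), excluding 0, in line with p < .05.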
-----Original Message-----
From: Rafael Rios [mailto:biorafaelrm at gmail.com]
Sent: Wednesday, 03 June, 2020 5:27
To: Viechtbauer, Wolfgang (SP)
Cc: r-sig-meta-analysis at r-project.org
Subject: Re: Overlapping CIs with significant difference among subgroups
Dear Dr. Wolfgang,
Thank you very much! Since confidence intervals are not very informative
for showing differences between subgroups, why is this practice so common
among meta-analysts? Why not present standard errors instead of CIs?
Best wishes,
Rafael.
On Tue, 2 Jun 2020 at 03:48, Viechtbauer, Wolfgang (SP)
<wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
Dear Rafael,
CIs can overlap and yet the difference between the two levels can be
significant. See, for example:
https://towardsdatascience.com/why-overlapping-confidence-intervals-mean-nothing-about-statistical-significance-48360559900a?gi=b673a691634d
https://www.psychologicalscience.org/observer/understanding-confidence-intervals-cis-and-effect-size-estimation
https://blog.minitab.com/blog/real-world-quality-improvement/common-statistical-mistakes-you-should-avoid
and many more (just google for "test difference overlapping confidence
intervals" or something along those lines). They don't talk about meta-
analysis per se, but it's the same principle.
So, you can trust the test of the difference between the levels of the
moderators.
Best,
Wolfgang
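The principle in the links above can be checked directly with a small numeric sketch: two estimates whose individual 95% CIs overlap can nevertheless differ significantly. All numbers below are hypothetical, not tied to the attached data:

```python
import math

# Hypothetical point estimates with standard errors (illustrative only)
b1, se1 = 0.50, 0.10
b2, se2 = 0.15, 0.12

# Individual 95% confidence intervals
ci1 = (b1 - 1.96 * se1, b1 + 1.96 * se1)
ci2 = (b2 - 1.96 * se2, b2 + 1.96 * se2)
overlap = ci1[0] <= ci2[1] and ci2[0] <= ci1[1]

# Wald-type test of the difference (independence assumed)
z = (b1 - b2) / math.sqrt(se1**2 + se2**2)
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
```

Here ci1 is roughly (0.30, 0.70) and ci2 roughly (-0.09, 0.39), so the intervals overlap, yet p is about 0.025: overlapping individual CIs do not imply a non-significant difference.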
-----Original Message-----
From: Rafael Rios [mailto:biorafaelrm at gmail.com]
Sent: Monday, 01 June, 2020 21:54
To: r-sig-meta-analysis at r-project.org; Viechtbauer, Wolfgang (SP)
Subject: Overlapping CIs with significant difference among subgroups
ATTACHMENT(S) REMOVED: dataset.csv | pruned_super-tree.tre | script.R
Dear Wolfgang and All,
I conducted a multilevel mixed-effects meta-analysis and found significant
differences between levels of two moderators. I was expecting to find
non-overlapping confidence intervals. However, I obtained overlapping
confidence intervals for all subgroups. How can I interpret these results?
In such a situation, should I trust the Q-test or the CIs? I controlled for
phylogenetic non-independence. Is there a chance that this approach affects
the CIs computed with the predict function? My dataset and script are attached.
Best wishes,
Rafael.