Haha, sorry, I was editing a response that included your signature
and forgot to exclude it :-)
nelson
On Thu, 27 Aug 2020 at 18:47, ne gic <negic4 at gmail.com> wrote:
Wait, are you also Nelly, @Nelson?
On Thu, Aug 27, 2020 at 6:44 PM Nelson Ndegwa
<nelson.ndegwa at gmail.com> wrote:
Dear Gerta,
I agree with you. In the interest of playing devil's advocate -
and of my (and some list members') learning more - what would your
opinion be if the CIs of the 2 studies did not overlap?
Appreciate your response.
Sincerely,
nelly
On Thu, 27 Aug 2020 at 18:21, Gerta Ruecker
<ruecker at imbi.uni-freiburg.de> wrote:
Dear Nelly and all,
With respect to (only) the first question (sample size):
I think nothing is wrong, at least in principle, with a
meta-analysis of two studies. We analyze single studies, so why not
combine two of them? They may even include hundreds of patients.
Of course, it is impossible to obtain a decent estimate of the
between-study variance/heterogeneity from two or three studies. But
if the confidence intervals are overlapping, I don't see any reason
to mistrust the pooled effect estimate.
Best,
Gerta
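[Editor's note: as a minimal sketch of Gerta's point, assuming the metafor package; the effect sizes and variances below are invented purely for illustration.]

```r
# Hypothetical data: two studies' log odds ratios and sampling variances
library(metafor)

dat <- data.frame(yi = c(0.25, 0.40),  # observed log odds ratios (made up)
                  vi = c(0.04, 0.06))  # corresponding sampling variances

# Random-effects model: the pooled estimate is well defined even with k = 2,
# but the tau^2 (between-study variance) estimate rests on a single degree
# of freedom and is therefore essentially uninformative
res <- rma(yi, vi, data = dat, method = "REML")
summary(res)

# Forest plot showing both studies and the pooled estimate
forest(res)
```

With k = 2, the pooled estimate and its CI are computable, which is Gerta's point; it is only the heterogeneity estimate that cannot be trusted.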
On 27.08.2020 at 16:07, ne gic wrote:
> Many thanks for the insights Wolfgang.
>
> Apologies for my imprecise questions. By "agreed upon" &
> "conclusions/interpretations", I was thinking of whether there is a minimum
> number of studies whose pooled estimate can be considered somewhat robust
> for inferences - e.g. whether inferences drawn from just 2 studies could be
> drastically changed by the publication of a third study. From your answer,
> it seems like there isn't. But I guess readers have to decide for
> themselves how much weight they can place on the results of
> specific meta-analyses.
>
> Again, I appreciate it!
>
> Sincerely,
> nelly
>
> On Thu, Aug 27, 2020 at 3:43 PM Viechtbauer, Wolfgang (SP)
> <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
>> Dear nelly,
>>
>> See my responses below.
>>
>>> -----Original Message-----
>>> From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org]
>>> On Behalf Of ne gic
>>> Sent: Wednesday, 26 August, 2020 10:16
>>> To: r-sig-meta-analysis at r-project.org
>>> Subject: [R-meta] Sample size and continuity correction
>>>
>>> Dear List,
>>>
>>> I have general meta-analysis questions that are not
>>> platform/software related.
>>>
>>> *=======================*
>>> *1. Issue of few included studies *
>>> * =======================*
>>> It seems common to see published meta-analyses with
>>>
>>> (A). An analysis of only 2 studies.
>>> (B). In another, subgroup analyses ending up with only a few studies in
>>> the subgroups.
>>>
>>> Nevertheless, they still end up providing a pooled estimate with the
>>> respective forest plots.
>>>
>>> So my question is, is there an agreed upon (or rule of thumb, in your
>>> view) minimum number of studies below which one should not pool at all?
>> Agreed upon? Not that I am aware of. Some may want at least 5 studies (per
>> group or overall), some 10, others may be fine with it even if a subgroup
>> contains 1 or 2 studies.
>>
>>> What interpretations/conclusions can one really draw in such cases?
>> That's a vague question, so I can't really answer this in general. Of
>> course, estimates will be imprecise when k is small.
>>> *===================*
>>> *2. Continuity correction *
>>> * ===================*
>>>
>>> In studies of rare events, zero events tend to occur, and it is common to
>>> add a small value so that the zero is taken care of.
>>>
>>> If, for instance, the inclusion of this small value via the continuity
>>> correction leads to differing results - e.g. from non-significant results
>>> when not using the correction, to significant results when using it - what
>>> to make of that? Can we trust such results?
>> If this happens, then the p-value is probably fluctuating around 0.05 (or
>> whatever cutoff is used for declaring results as significant). The
>> difference between p=.06 and p=.04 is (very very likely) not itself
>> significant (Gelman & Stern, 2006). Or, to use the words of Rosnow &
>> Rosenthal (1989): "[...] surely, God loves the .06 nearly as much as the
>> .05".
>>
>> Gelman, A., & Stern, H. (2006). The difference between "significant" and
>> "not significant" is not itself statistically significant. The American
>> Statistician, 60(4), 328-331.
>>
>> Rosnow, R. L., & Rosenthal, R. (1989). Statistical procedures and the
>> justification of knowledge in psychological science. American
>> Psychologist, 44(10), 1276-1284.
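[Editor's note: the Gelman & Stern point can be illustrated numerically; this is a rough sketch assuming two independent estimates with equal standard errors.]

```r
# z-values corresponding to two-sided p = .04 and p = .06
z1 <- qnorm(1 - 0.04 / 2)  # about 2.05
z2 <- qnorm(1 - 0.06 / 2)  # about 1.88

# Test of the *difference* between the two results: with equal standard
# errors, the difference of the two standardized estimates has SD sqrt(2)
z_diff <- (z1 - z2) / sqrt(2)
p_diff <- 2 * (1 - pnorm(z_diff))
round(p_diff, 2)  # about 0.90: the two results are statistically indistinguishable
```

So one result being "significant" and the other not tells us almost nothing about whether the two results actually differ.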
>>> If one instead opts to calculate a risk difference and then test it
>>> for significance, would this be a better solution
>>> to the continuity correction problem above?
>> If one is worried about the use of 'continuity corrections', then I think
>> the more appropriate reaction is to use 'exact likelihood' methods (such
>> as (mixed-effects) logistic regression models) instead of switching to
>> risk differences (nothing wrong with those per se, but risk differences
>> are really a fundamentally different measure compared to risk/odds
>> ratios).
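[Editor's note: both routes can be sketched with metafor; the rare-event counts below are invented purely for illustration, and the GLMM route additionally requires the lme4 package.]

```r
# Hypothetical rare-event data: events and sample sizes per arm in 3 studies
library(metafor)

dat <- data.frame(ai  = c(0, 2, 1),      # events, treatment arm
                  n1i = c(100, 120, 90), # sample size, treatment arm
                  ci  = c(1, 0, 3),      # events, control arm
                  n2i = c(100, 115, 95)) # sample size, control arm

# Conventional route: inverse-variance model on log odds ratios, adding 1/2
# only to studies with zero cells (the continuity correction in question)
res_cc <- rma(measure = "OR", ai = ai, n1i = n1i, ci = ci, n2i = n2i,
              data = dat, add = 1/2, to = "only0")

# 'Exact likelihood' route: mixed-effects logistic regression
# (unconditional model with fixed study effects); no correction needed
res_glmm <- rma.glmm(measure = "OR", ai = ai, n1i = n1i, ci = ci, n2i = n2i,
                     data = dat, model = "UM.FS")
```

Comparing `summary(res_cc)` and `summary(res_glmm)` shows how sensitive the pooled OR is to the correction versus modeling the counts directly.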
>>
>>> Looking forward to hearing your views, as diverse as they may be, on
>>> topics where there is no consensus.
>>>
>>> Sincerely,
>>> nelly
>
> _______________________________________________
> R-sig-meta-analysis mailing list
> R-sig-meta-analysis at r-project.org
--
Dr. rer. nat. Gerta Rücker, Dipl.-Math.
Institute of Medical Biometry and Statistics,
Faculty of Medicine and Medical Center - University of Freiburg
Stefan-Meier-Str. 26, D-79104 Freiburg, Germany
Phone: +49/761/203-6673
Fax:   +49/761/203-6680
Mail: ruecker at imbi.uni-freiburg.de
Homepage:
https://www.uniklinik-freiburg.de/imbi-en/employees.html?imbiuser=ruecker