[R-meta] metagen / low heterogeneity
6 messages · Sean, Guido Schwarzer, Emerson Del Ponte +1 more

Hello Meta-analysis Community,

I've been using the metagen function in the meta package for a meta-analysis of fungicide efficacy against a foliar pathogen in cucumbers. I'm using pre-calculated Hedges' g as my effect size, together with its standard error. I'm not really a statistician, so I've been using this resource to hold my hand through the process (https://bookdown.org/MathiasHarrer/Doing_Meta_Analysis_in_R/random.html). I've run into a bit of a rut and I'm having a hard time interpreting my results. I'm dealing with the issue of the heterogeneity in some of my datasets being nearly 0 (which could just be the case).

Here is an example of my output:

Number of studies combined: k = 288

                         SMD            95%-CI     t  p-value
Random effects model  0.3309 [ 0.2866; 0.3751] 14.72 < 0.0001
Prediction interval          [-0.2216; 0.8834]

Quantifying heterogeneity:
tau^2 = 0.0783 [<0.0000; <0.0000]; tau = 0.2798 [<0.0000; <0.0000];
I^2 = 0.0% [0.0%; 0.0%]; H = 1.00 [1.00; 1.00]

Test of heterogeneity:
     Q d.f. p-value
165.46  287  1.0000

Here is the code:

metamkt <- metagen(G,
                   seG,
                   data = mkt,
                   studlab = paste(Study),
                   comb.fixed = FALSE,
                   comb.random = TRUE,
                   method.tau = "SJ",
                   hakn = TRUE,
                   prediction = TRUE,
                   sm = "SMD")

My first red flag is of course "I^2 = 0.0%", and then that my Q p-value is 1. The interpretation would be that the observed heterogeneity is entirely due to sampling error. I have a couple of datasets, with the highest I^2 = 17.4%. The reason I find this odd is that when I do subgroup analysis (even though I'm not supposed to with such low / non-existent heterogeneity), the results make biological sense. My data spans the last decade and the results are also similar to a meta-analysis done in the previous decade on the same topic. This makes me feel like I've made some sort of error at some point in my workflow, and I was wondering if you have any diagnostic recommendations for me?

One thing that worries me is that my standard errors for my Hedges' g values are so similar, since all treatments in each study have 4 replications. But maybe that shouldn't worry me.

Best,
Sean
Dear Sean,

Some comments in-line. It is difficult to read your output because you posted in HTML, so I will leave that to people more familiar with the software. Next time it would help to set your mailer to use plain text so your message does not get mangled.
On 11/01/2021 14:56, Sean wrote:
Number of studies combined: k = 288
SMD 95%-CI t p-value
Random effects model 0.3309 [ 0.2866; 0.3751] 14.72 < 0.0001
Prediction interval [-0.2216; 0.8834]
The fact that your prediction interval is so much wider than the confidence interval does suggest there is heterogeneity here.
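Michael's point can be checked by hand. meta's documented prediction interval is the pooled estimate plus or minus a t quantile (on k - 2 degrees of freedom) times sqrt(tau^2 + SE^2), so a tau^2 of 0.0783 widens the interval far beyond the confidence interval even when I^2 is 0%. A quick sketch using only the numbers from Sean's output (the critical value t_{286, 0.975} ≈ 1.968 is hard-coded here rather than computed, and the formula is the standard Higgins-style prediction interval, assumed to match meta's internals):

```python
import math

# Values taken from Sean's metagen output
smd = 0.3309    # pooled SMD (random-effects model)
t_stat = 14.72  # Hartung-Knapp t statistic
tau2 = 0.0783   # between-study variance (SJ estimator)

# Standard error of the pooled estimate, recovered from SMD / t
se = smd / t_stat

# Prediction interval: smd +/- t_{k-2, 0.975} * sqrt(tau^2 + se^2),
# with k = 288 studies, so t_{286, 0.975} is approximately 1.968
t_crit = 1.968
half_width = t_crit * math.sqrt(tau2 + se**2)

lower, upper = smd - half_width, smd + half_width
print(round(lower, 4), round(upper, 4))  # close to [-0.2216; 0.8834]
```

The reconstructed interval matches the reported [-0.2216; 0.8834], which shows the wide prediction interval is driven almost entirely by tau^2, not by the (tiny) standard error of the pooled mean.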
My first red flag is of course "I^2 = 0.0%", then that my Q p-value is 1.
The interpretation being that the observed heterogeneity is completely
random. I have a couple datasets, with the highest I^2 = 17.4%. The reason
I find it odd, is that when I do subgroup analysis (even though I'm not
supposed to with such low / non-existent heterogeneity), the results make
biological sense.
No, no, a thousand times no. You use a moderator if there is a scientific hypothesis which justifies it, not because of observed heterogeneity. In this case, if there is a biological theory behind a moderator, then use it.

Michael
_______________________________________________ R-sig-meta-analysis mailing list R-sig-meta-analysis at r-project.org https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
I apologize for the formatting. Here are the output and code again below. I think this should be more readable now that I've selected plain text.
Michael, well that is good news. If I did have high heterogeneity and
hadn't planned to use a moderator, does that just mean I should
consider looking for one? Whereas in my case, I knew what I was
interested in, so my heterogeneity does not need to be considered as a
prerequisite?
Here is an example of my output:
Number of studies combined: k = 288
SMD 95%-CI t p-value
Random effects model 0.3309 [ 0.2866; 0.3751] 14.72 < 0.0001
Prediction interval [-0.2216; 0.8834]
Quantifying heterogeneity:
tau^2 = 0.0783 [<0.0000; <0.0000]; tau = 0.2798 [<0.0000; <0.0000];
I^2 = 0.0% [0.0%; 0.0%]; H = 1.00 [1.00; 1.00]
Test of heterogeneity:
Q d.f. p-value
165.46 287 1.0000
Here is the code:
metamkt <- metagen(G,
                   seG,
                   data = mkt,
                   studlab = paste(Study),
                   comb.fixed = FALSE,
                   comb.random = TRUE,
                   method.tau = "SJ",
                   hakn = TRUE,
                   prediction = TRUE,
                   sm = "SMD")
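The combination of I^2 = 0% and tau^2 = 0.0783 in this output is less contradictory than it looks. I^2 and H are computed from Cochran's Q and are truncated: whenever Q falls below its degrees of freedom (here 165.46 < 287), I^2 is exactly 0 and H is exactly 1, whereas the Sidik-Jonkman (SJ) estimator of tau^2 by construction never returns zero. A sketch of the standard Higgins-Thompson formulas (not meta's actual source code, just the textbook definitions):

```python
# Standard heterogeneity summaries derived from Cochran's Q
# (the usual Higgins & Thompson formulas; a sketch, not meta's source)

def i_squared(q: float, df: int) -> float:
    """I^2 as a percentage, truncated at 0 when Q <= d.f."""
    return max(0.0, (q - df) / q) * 100


def h_statistic(q: float, df: int) -> float:
    """H = sqrt(Q / d.f.), truncated at 1."""
    return max(1.0, (q / df) ** 0.5)


# Sean's output: Q = 165.46 on 287 d.f., so Q < d.f.
print(i_squared(165.46, 287))   # 0.0
print(h_statistic(165.46, 287)) # 1.0
```

So the Q-based summaries say "no excess dispersion", while the SJ estimate of tau^2 (which feeds the prediction interval) stays positive; comparing against a different tau^2 estimator such as REML would be one cheap diagnostic.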
Sean
Sean,

The confidence intervals for tau^2 and tau are certainly wrong / meaningless. I could have a closer look if you send me (part of) your data set with these strange CIs.

Best wishes,
Guido

P.S. Internally, meta calls rma.uni() and confint() from the R package metafor to calculate the confidence intervals for tau^2 and tau. I do not assume that there is a problem in the computations in metafor.
Dear Sean,

I have been dealing with this kind of data and context (fungicide effect on plant disease), and I think I know which previous paper you are basing your analyses on. Sorry for not replying to your specific questions, but it seems there are primary aspects to look at before the specifics of the MA outcome. I would recommend that you take a look at a number of works that followed in the last decade (in case you haven't done so already).

I am quite sure that you are treating several treatments from the same experiment as independent, given your high k - all treatments from the same trial are compared to a common control. A network MA should be interesting to test. I've used the arm-based approach in metafor (Wolfgang's help) and the contrast-based approach in netmeta (Gerta's help).

Nothing wrong with Hedges' g, but I would argue that the log ratio is a more directly interpretable effect size in our area - you really want to know (as everybody in our field does) the percent reduction in disease due to fungicide use relative to the untreated check. The absolute or standardized difference can be complicated if your control varies considerably among the trials. Also, the criteria to classify and interpret Hedges' g are not well established in our field.

If you want to see examples specific to this kind of situation, I have several codes on my GitHub (check the link to the html report for each of these):
https://github.com/emdelponte/paper-FHB-mixtures-meta-analysis
https://github.com/emdelponte/paper-fungicides-whitemold
https://github.com/emdelponte/paper-FHB-Brazil-meta-analysis

Hope this helps!
Emerson

On Mon, Jan 11, 2021 at 11:57 AM, Sean <sean.toporek at gmail.com> wrote:
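Emerson's log-ratio suggestion boils down to simple arithmetic: the effect size is L = ln(treated mean / control mean), and a pooled L back-transforms to the percent disease reduction relative to the untreated check as (1 - exp(L)) * 100. A minimal sketch (the severity numbers below are made up purely for illustration):

```python
import math

def log_response_ratio(treated_mean: float, control_mean: float) -> float:
    """Log ratio of mean disease severity: fungicide arm vs. untreated check."""
    return math.log(treated_mean / control_mean)

def percent_control(lrr: float) -> float:
    """Back-transform a (pooled) log ratio to percent disease reduction."""
    return (1.0 - math.exp(lrr)) * 100

# Hypothetical trial: 20% severity with fungicide vs. 50% in the check
lrr = log_response_ratio(20.0, 50.0)
print(round(percent_control(lrr), 1))  # 60.0 (percent reduction)
```

This is why the log ratio travels well across trials with very different control severities, whereas a standardized difference like Hedges' g is scaled by the within-trial standard deviation and has no such direct percent-control reading.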
Emerson M. Del Ponte
Universidade Federal de Viçosa, Brazil
Chair of the Graduate Studies <http://www.dfp.ufv.br/graduate/> in Plant Pathology
EIC for Tropical Plant Pathology <http://sbfitopatologia.org.br/tpp/>
Co-Founder of Open Plant Pathology <https://www.openplantpathology.org/>
My websites: Twitter <https://twitter.com/edelponte> | GitHub <https://github.com/emdelponte> | Google Scholar <https://scholar.google.com.br/citations?user=a1rPnI0AAAAJ> | ResearchGate <https://www.researchgate.net/profile/Emerson_Del_Ponte>
Tel +55 31 36124830
On 11/01/2021 16:45, Sean wrote:
Michael, well that is good news. If I did have high heterogeneity and hadn't planned to use a moderator, does that just mean I should consider looking for one? Whereas in my case, I knew what I was interested in, so my heterogeneity does not need to be considered as a prerequisite?
The crucial thing is the scientific context. I do not work in the same area as you, so my examples are from my field, not yours, but I hope they are helpful.

If the primary studies were all very similar then you would not expect heterogeneity, and you might be prompted to look for explanations for even mild amounts. For instance, if all the primary studies had used the same dose of a drug in people with a very tightly defined illness, in countries with very similar health care systems, then any heterogeneity might lead you, post hoc, to find out why. If, on the contrary, the studies had examined a complex health-care-systems intervention in countries across the globe, in patients who might vary considerably, then you would be very surprised not to see heterogeneity. In that case you would be less inclined to look for explanations.

If you had a theory that outcomes were related to some other variable, then you might use that as a moderator irrespective of the amount of heterogeneity. For instance, in a study of a skills-based therapy you might have a theory that outcomes are different now from what they used to be, so it would be worthwhile looking at that regardless. Or, if the centres in each study have been doing a particular operation for different amounts of time, do the ones who have been doing it longest have better or worse outcomes?

Michael
-- Michael http://www.dewey.myzen.co.uk/home.html