Hi, When I have set (in script syntax) the layout of a forest plot and run it again later, it sometimes looks different. I am not sure whether this could be due to the workspace or something else. (metafor - R 3.5.0) What can I do? Thank you in advance, Roberto
[R-meta] Changing layout forest plot
5 messages · P. Roberto Bakker, Wolfgang Viechtbauer, Tommy van Steen
Without a reproducible example, it's difficult to say what is causing this. My guess is that some plot settings are getting adjusted somewhere and then you are plotting again with those adjusted settings. Try: graphics.off() before you create each plot so that the default plot settings are used when a new plotting device is opened. Best, Wolfgang
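Wolfgang's point can be illustrated without metafor: graphical parameters set with par() persist for the lifetime of an open device and only reset when the device is closed. A minimal base-R sketch (plotting to a throwaway PDF so it runs non-interactively; cex is just an example parameter, chosen for illustration):

```r
# par() settings stick to the open device; after graphics.off() closes
# every device, the next plot starts again from the defaults.
pdf(tempfile(fileext = ".pdf"))    # open a throwaway device
default_cex <- par("cex")          # default in a fresh device (1)
par(cex = 2)                       # e.g. a plotting call may change this
changed_cex <- par("cex")
graphics.off()                     # close every open device

pdf(tempfile(fileext = ".pdf"))    # a fresh device starts from defaults
restored_cex <- par("cex")
graphics.off()

stopifnot(changed_cex == 2, restored_cex == default_cex)
```

This is why calling graphics.off() before each plot guarantees a clean slate, regardless of what earlier plotting code adjusted.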
-----Original Message----- From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of P. Roberto Bakker Sent: Tuesday, 03 July, 2018 11:19 To: r-sig-meta-analysis at r-project.org Subject: [R-meta] Changing layout forest plot
3 days later
Hi all, I'm running a meta-analysis using Cohen's d in the metafor package for R. I'm doubting my method/interpretation of the results at various stages. As I want to make sure I'm doing it right, rather than doing what is convenient, I hope you could provide me with some advice regarding the following questions:

1. Heterogeneity is high in my data, and I want to add a list of moderators to test their influence. However, many of these moderators have missing values because not all studies have measured these variables. If I run a model that includes all moderators, the number of comparisons drops from 51 to 27. I'd prefer to include all moderators at once, but is this the right thing to do, or should I test each moderator separately?

2. Following 1: if I can run the model as a whole, is it possible and useful to somehow compare the overall effect size of the studies with no missing moderator data with that of the studies excluded from the model because of these missing data points?

3. Some moderators that are significant when all moderators are included at once are not significant when tested individually on the same subset of 27 studies. Which of the two statistics (as part of the larger model, or the individual moderator) should I report?

And two questions about interpretation:

4. I added publication year as a moderator and the estimate is 0.0360. Am I interpreting this result correctly when I say that every increase of the moderator year by 1 increases the effect size by 0.0360?

5. I also added a dichotomous moderator with options yes/no. In the moderator list, this moderator is listed with the 'yes' option, with an estimate of 0.5739. Does this mean the effect size is 0.5739 higher than when the moderator value is 'no'? Thank you in advance for your thoughts and advice. Best wishes, Tommy
Hi Tommy,

1) This is a tricky (and common) issue. I suspect this is one of the reasons why moderators are still often tested one at a time (to 'maximize' the number of studies included in the analysis when testing each moderator). But this makes it impossible to sort out the unique contributions of correlated moderators, so it isn't ideal. One could consider imputation techniques, although this isn't common practice in the meta-analysis context. So, as a more pragmatic approach, why not do both? If a moderator is found to be relevant when tested individually and also when other moderators are included, then this should give us more confidence in the finding.

2) Possible, sure. Is it useful? Maybe. Consider the following scatterplot of the effect sizes against some moderator (ignore the *'s for now):

|        * .. .
|      *..  . .
|    . *. .
|   . .*.
|  ..  *
| *
+------*--------

Now suppose all studies where the moderator is below * are missing. This shouldn't bias the slope of the coefficient for the moderator, but studies where the moderator is known will on average have a higher effect size than studies where the moderator is unknown. So what would the conclusion be once we find this?

3) Again, how about both? Make a side-by-side table of the results.

4) Yes (on average).

5) Yes. If you see a coefficient for "Yes", then "No" is the reference level. So the coefficient for "Yes" tells you how much lower/higher the effect is on average for "Yes" compared to "No".

Best, Wolfgang
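To make points 1, 4, and 5 concrete, here is a hypothetical base-R sketch using plain lm() with made-up data (not metafor syntax and not Tommy's actual data; the variable names year and flag and all numbers are invented to mirror the thread): listwise deletion drops every study with a missing moderator, the year coefficient is the average change in effect size per one-unit increase in year, and the coefficient of a yes/no factor is the average difference of "yes" relative to the reference level "no".

```r
# Simulate 51 studies whose true effect depends on year and a yes/no flag.
set.seed(1)
k    <- 51
year <- sample(1990:2015, k, replace = TRUE)
flag <- factor(sample(c("no", "yes"), k, replace = TRUE),
               levels = c("no", "yes"))          # "no" = reference level
d    <- 0.2 + 0.036 * (year - 2000) +            # +0.036 per year
        0.574 * (flag == "yes") +                # "yes" is 0.574 higher
        rnorm(k, sd = 0.05)

year[sample(k, 24)] <- NA      # year unobserved for 24 of the 51 studies

# lm() silently drops incomplete rows (listwise deletion), so the joint
# model is fit on only the complete cases.
fit_full <- lm(d ~ year + flag)
nobs(fit_full)                 # 27 comparisons remain, as in the thread
coef(fit_full)["year"]         # average change in d per 1-unit year increase
coef(fit_full)["flagyes"]      # average difference of "yes" vs "no"
```

In metafor the same interpretation carries over to the mods argument, e.g. rma(yi, vi, mods = ~ year + flag, data = dat), where incomplete studies are likewise excluded from the fit.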
-----Original Message----- From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Tommy van Steen Sent: Friday, 06 July, 2018 14:37 To: r-sig-meta-analysis at r-project.org Subject: [R-meta] Moderator analysis with missing values (Methods and interpretations)
3 days later
Hi Wolfgang, Thank you for the helpful and clear responses! The visualisation of the moderators in the scatterplot makes a lot of sense, as does the side-by-side comparison of moderators tested individually/together. Best wishes, Tommy
On 6 Jul 2018, at 14:11, Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote: