Hi all, I am conducting a multivariate meta-analysis using rma.mv() in the metafor package. How does rma.mv() calculate the weight for each effect size? Do studies with more effect sizes get more total weight? I read an article saying "The robust variance estimation methods upweight effect sizes that are estimated with greater precision (due to differences in sample sizes, level of randomization, predictive power of covariates, etc.) and downweight estimates from studies that contribute multiple effect size estimates" (Kraft, Blazar, & Hogan, 2018). Is that right? I am using rma.mv() in metafor to estimate the model and coef_test() in the clubSandwich package to do significance tests. Both give the same pooled effect sizes, though. I understand that weights also impact the pooled effect size estimate. In that case, how will robust variance estimation impact my weighted mean effect size? Thanks, Best wishes, Huang
[R-meta] weight in rmv metafor
15 messages · Huang Wu, James Pustejovsky, Norman DAURELLE +2 more
Dear Huang,

The weighting in rma.mv() models is more complex than in 'simple' models fitted with rma() (same as rma.uni()). Depending on the particular model you are fitting with rma.mv(), the model-implied marginal var-cov matrix of the estimates (which you can see with vcov(<model>, type="obs")) is not just a diagonal matrix (as is the case for rma() models), but also involves covariances. The inverse of this matrix is the weight matrix, which is then also not just a diagonal matrix.

For example, when some studies contribute multiple estimates, we might consider fitting a multilevel/multivariate model with random effects for studies and random effects for estimates within studies. When the estimated between-study variance component is greater than zero, this implies a certain amount of covariance for effects from the same study. This leads to negative off-diagonal elements in the weight matrix for estimates from the same study. As a result, if the ith study contributes k_i estimates, it is not treated as if there were k_i independent studies.

This has been discussed in the past on this mailing list, so you might want to search the archives for some relevant posts. Googling for:

site:https://stat.ethz.ch/pipermail/r-sig-meta-analysis/ rma.mv weights

brings up some relevant posts.

Roughly speaking, the robust variance estimation method works as follows. We start with a 'working model' that is hopefully some decent approximation to the true model and that also captures the dependencies in the estimates. This model provides us with the estimates of the fixed effects. However, because we might not be able to capture all dependencies correctly with this working model, the var-cov matrix of the estimated fixed effects might not be correct. Hence, based on the working model, we can use the robust variance estimation method to obtain a var-cov matrix that is (asymptotically) correct and use this for testing the fixed effects.

Therefore, the robust variance estimation method does not actually lead to changes in the estimated fixed effects. Those are determined by the working model. That is why coef_test() will give you the exact same estimates of the fixed effects as those from the working model you use as input to this function. That is also why it is important to use a working model that is at least some decent approximation: while the fixed-effects estimates might even be unbiased when using a really poor working model, the estimates will not be very efficient.

Best,
Wolfgang
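[Editor's note: the negative off-diagonal weights can be made concrete with a small numeric sketch. This is an illustration of the matrix algebra only (in Python, with invented variance components), not metafor's internals: the marginal var-cov block for a study contributing two estimates has v + tau^2 + sigma^2 on the diagonal and tau^2 off the diagonal, and inverting it yields negative off-diagonal weights whenever tau^2 > 0.]

```python
# Invented variance components: sampling variance v, between-study tau2,
# within-study sigma2.
v, tau2, sigma2 = 0.04, 0.10, 0.05

# Marginal var-cov of two estimates from the same study under the
# multilevel model: diagonal d = v + tau2 + sigma2, off-diagonal c = tau2.
d = v + tau2 + sigma2
c = tau2

# The weight matrix is the inverse of the 2x2 matrix [[d, c], [c, d]].
det = d * d - c * c
W = [[d / det, -c / det],
     [-c / det, d / det]]

print(W[0][1])  # off-diagonal weight is negative (about -3.83 here)
```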
Dear Dr. Viechtbauer,

Thank you very much for your helpful reply. To be clear, I wonder if the multivariate approach will downweight estimates from a study that contains multiple effect sizes? In a previous post (https://stat.ethz.ch/pipermail/r-help/2017-February/444703.html), you said, "if you fit an appropriate model to the data at hand, the 'default weights' used by rma.mv() will be just fine." Does that mean that the weights in an rma.mv model would not impact the estimated fixed effects?

I found that in the forest plot I generate through forest(), studies with multiple effect sizes tend to have bigger weights. I also used weights() to check the weights given to each effect size and found the same thing (see below for my code). I wonder if the weights for each effect size presented in the forest plot are correct?

Thank you very much again for your help.

Best wishes,
Huang

Vt <- impute_covariance_matrix(vi = try$v,       # known sampling variances
                               cluster = try$ID, # study ID
                               r = 0.80)         # assumed correlation
Mt <- rma.mv(yi = d,                  # effect size
             V = Vt,                  # var-cov matrix (this is what changes from the HE model)
             random = ~ 1 | ID/IID,   # nesting structure
             test = "t",              # use t-tests
             data = try,
             method = "REML")
weights(Mt)
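[Editor's note: for readers unfamiliar with impute_covariance_matrix() (from clubSandwich), the idea is to fill in the unknown within-study covariances under a constant-correlation assumption: Cov(y_i, y_j) = r * sqrt(v_i * v_j) for two estimates from the same study, 0 across studies. A Python sketch of that construction, with made-up data; this illustrates the assumption, not the package's actual implementation:]

```python
import math

def impute_vcov(v, cluster, r):
    """Block-diagonal var-cov matrix assuming a common correlation r
    among estimates from the same cluster (study)."""
    n = len(v)
    V = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if cluster[i] == cluster[j]:
                V[i][j] = v[i] if i == j else r * math.sqrt(v[i] * v[j])
    return V

# Three estimates: the first two come from the same study.
V = impute_vcov([0.04, 0.09, 0.05], ["s1", "s1", "s2"], r=0.80)
print(V[0][1])  # 0.8 * sqrt(0.04 * 0.09) = 0.048
print(V[0][2])  # 0.0: different studies are assumed independent
```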
Of course the weights "impact the estimated fixed effects". But whether studies with multiple effect sizes tend to receive more weight depends on various factors, including the variances of the random effects and the sampling error (co)variances. A more detailed discussion of the way weighting works in rma.mv models can be found here:

http://www.metafor-project.org/doku.php/tips:weights_in_rma.mv_models

Note that weights(res, type="rowsum") currently only works in the 'devel' version of metafor, so follow https://wviechtb.github.io/metafor/#installation if you want to reproduce this part as well.

I hope this clarifies things.

Best,
Wolfgang
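[Editor's note: the row-sum weights have a convenient closed form in the simplest case, which can be sketched numerically. With a compound-symmetric var-cov block (diagonal d = v + tau^2 + sigma^2, off-diagonal c = tau^2), each row of a k-estimate study's weight-matrix block sums to 1/(d + (k-1)c), so the study's total weight is k/(d + (k-1)c) = 1/(c + (d-c)/k) — less than k times a comparable single-estimate study's weight whenever c > 0. A Python sketch with invented numbers:]

```python
# Invented variance components (all estimates share the same sampling variance
# v here, to keep the compound-symmetry closed form exact).
v, tau2, sigma2 = 0.04, 0.10, 0.05
d = v + tau2 + sigma2   # marginal variance of each estimate
c = tau2                # covariance between estimates from the same study

def total_study_weight(k):
    # Total (row-sum) weight of a study contributing k estimates.
    return k / (d + (k - 1) * c)

w1 = total_study_weight(1)   # single-estimate study
w2 = total_study_weight(2)   # two-estimate study

# Cross-check the k = 2 case against the explicit 2x2 inverse.
det = d * d - c * c
row_sum = (d - c) / det      # each row of the 2x2 weight block sums to this
assert abs(2 * row_sum - w2) < 1e-9

print(w1, w2, 2 * w1)  # the two-estimate study gets less than twice w1
```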
1 day later
Hi Huang,

I've written up some notes that add a bit of further intuition to the discussion that Wolfgang provided. The main case that I focus on is a model that is just a meta-analysis (i.e., no predictors) and that includes random effects to capture both between-study and within-study heterogeneity. I also say a little bit about meta-regression models with only study-level predictors.

https://www.jepusto.com/weighting-in-multivariate-meta-analysis/

Best,
James
_______________________________________________ R-sig-meta-analysis mailing list R-sig-meta-analysis at r-project.org https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
1 day later
Hi all,

I read this discussion and one question came to my mind: I also had some studies that contributed multiple effect sizes in the meta-analysis that I recently ran thanks to Dr. Viechtbauer's advice. For now I only used the rma() function, but should I have used rma.mv() because of these studies that had multiple effect sizes?

Thank you!

Norman
Dear Norman,

If you only used rma(), then this is not correct. rma.mv() with an appropriately specified model (plus clubSandwich::coef_test() if the working model is only an approximation and doesn't cover all dependencies) would be more appropriate.

Best,
Wolfgang
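[Editor's note: the point made earlier in the thread — that the robust step replaces the standard error but not the point estimate — can be sketched in a few lines. Below is a bare-bones cluster-robust (CR0-type, no small-sample correction) variance for a weighted mean, in Python with invented data; clubSandwich's coef_test() additionally applies small-sample corrections, so this is only the core idea:]

```python
# Invented data: 5 effect size estimates from 3 studies (clusters).
y = [0.2, 0.3, 0.5, 0.1, 0.4]       # effect size estimates
w = [4.0, 4.0, 2.0, 4.0, 4.0]       # weights taken from the working model
study = ["A", "A", "B", "C", "C"]   # cluster membership

# Point estimate: the weighted mean implied by the working model.
# The robust step below never touches this.
W = sum(w)
beta = sum(wi * yi for wi, yi in zip(w, y)) / W

# Cluster-robust (sandwich) variance: sum the weighted residuals within
# each cluster, square the cluster sums, and add them up.
resid = [yi - beta for yi in y]
cluster_sums = {}
for s, wi, ei in zip(study, w, resid):
    cluster_sums[s] = cluster_sums.get(s, 0.0) + wi * ei
robust_var = sum(cs ** 2 for cs in cluster_sums.values()) / W ** 2

print(beta)               # identical to the working-model estimate
print(robust_var ** 0.5)  # robust standard error used for testing
```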
Thank you. I am not sure I understand exactly what you mean by "if the working model is only an approximation and doesn't cover all dependencies". Could you please explain it?

For now I used the rma() function to synthesize the available literature on the blackleg - oilseed rape disease-yield relationship, using slopes as effect sizes. The models that gave me the slopes I used in the meta-analysis are all Y = a + bX, simple linear regressions with Y being the yield and X being the disease severity. So my slopes, b, are all negative, and I have obtained a "summary" effect size through the rma() function. But I indeed have two studies that for now contribute most of the effect sizes that are included in my meta-analysis.

So why exactly is it necessary to use the rma.mv() function? What exactly does the "multivariate" qualifier refer to?

Thank you,
Norman

________________________________________
From: "Wolfgang Viechtbauer" <wolfgang.viechtbauer at maastrichtuniversity.nl>
To: "Norman DAURELLE" <norman.daurelle at agroparistech.fr>, "r-sig-meta-analysis" <r-sig-meta-analysis at r-project.org>
Sent: Thursday 11 June 2020 22:34:55
Subject: RE: [R-meta] weight in rmv metafor

Dear Norman,

If you only used rma(), then this is not correct. rma.mv() with an appropriately specified model (plus clubSandwich::coef_test() if the working model is only an approximation and doesn't cover all dependencies) would be more appropriate.

Best,
Wolfgang
Dear Norman,

To give a simple example: When (some of the) studies contribute multiple estimates, the dataset has a multilevel structure (with estimates nested within studies). A common way to deal with this is to fit a multilevel model with random effects for studies and estimates within studies. Like this:

http://www.metafor-project.org/doku.php/analyses:konstantopoulos2011

However, multiple estimates from the same study are actually often computed based on the same sample of subjects. In that case, the sampling errors are also correlated. The multilevel model does not capture this. For this, one would ideally want to fit a model that also allows for correlated sampling errors. Like this:

http://www.metafor-project.org/doku.php/analyses:berkey1998

However, computing the covariances between the sampling errors within a study is difficult and requires information that is often not available. We can ignore those correlations and use the multilevel model as a working model that is an approximation to the model that also accounts for correlated sampling errors. After fitting the multilevel model with rma.mv(), one can then use cluster robust inference methods to 'fix things up'.

Quite a bit of this has been discussed at length in previous posts on this mailing list. You might want to search the archives for some of these posts.

Best,
Wolfgang
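The gap between the working model and the full model can be sketched numerically. In the sketch below (all variance-component values are illustrative assumptions, not quantities estimated anywhere in this thread), two estimates from one study have covariance tau^2 under the multilevel working model, while a model that also allows correlated sampling errors adds rho * sqrt(v1 * v2):

```python
from math import sqrt

# Two estimates from the same study; all values below are illustrative
# assumptions, not quantities estimated anywhere in this thread.
v1, v2 = 0.01, 0.01   # known sampling variances of the two estimates
tau2 = 0.05           # between-study variance (random effect shared within a study)
sigma2 = 0.02         # within-study (estimate-level) variance
rho = 0.6             # correlation of the sampling errors (usually unknown)

# Marginal var-cov block implied by the multilevel *working* model,
# which treats the sampling errors as independent:
working = [[v1 + tau2 + sigma2, tau2],
           [tau2, v2 + tau2 + sigma2]]

# Block implied by a model that also allows correlated sampling errors:
cov_se = rho * sqrt(v1 * v2)
full = [[v1 + tau2 + sigma2, tau2 + cov_se],
        [tau2 + cov_se, v2 + tau2 + sigma2]]

# Same diagonals, but the working model understates the covariance.
print(working[0][1], round(full[0][1], 3))  # 0.05 0.056
```

Because rho is rarely known, the fixed-effect estimates from the working model are kept, and the cluster-robust (sandwich) variance estimator absorbs the covariance the working model missed.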
Dear Norman,

You may want to check reproducible examples of my previous work on this exact application context as a starting point.

https://emdelponte.github.io/paper-white-mold-meta-analysis/
https://emdelponte.github.io/paper-FHB-yield-loss/code_meta_analysis.html

Emerson
_______________________________________________ R-sig-meta-analysis mailing list R-sig-meta-analysis at r-project.org https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
*Emerson M. Del Ponte*
Universidade Federal de Viçosa, Brazil
Chair of the Graduate Studies <http://www.dfp.ufv.br/graduate/> in Plant Pathology
EIC for Tropical Plant Pathology <http://sbfitopatologia.org.br/tpp/>
Co-Founder of Open Plant Pathology <https://www.openplantpathology.org/>
My websites: Twitter <https://twitter.com/edelponte> | GitHub <https://github.com/emdelponte> | Google Scholar <https://scholar.google.com.br/citations?user=a1rPnI0AAAAJ> | ResearchGate <https://www.researchgate.net/profile/Emerson_Del_Ponte>
Tel +55 31 36124830
Dear all, Dr Viechtbauer, Dr Del Ponte,

Thank you for your answers! I will look into what you advised me to read, and also go and read more of the archive. If I have more questions I will come back and ask them. Once again, thank you for developing the metafor package in R, Dr Viechtbauer, and for creating this mailing list.

Have a nice week-end,
Norman
3 days later
Dear all, Dr Viechtbauer, Dr Del Ponte,

I read what you suggested, and went through the archives of the mailing list, and even though some threads were dealing with similar questions, I didn't really find the answers I was looking for there. I have multiple questions:

1) How exactly is the weight of each estimate calculated in the function rma.mv()?
2) (this question might be answered too when answering the first one but...) Why does that function (rma.mv) attribute more weight to estimates coming from studies that have multiple estimates relative to ones that come from studies that only have one estimate?
3) I tried to produce a reproducible example of my code, would it be possible to tell me if I got things right?

My dataset is this table (I didn't know if I could attach anything):
dat
author year estimate std_error p_value r number_of_obs
1 Ballinger et al 1988 -19.3450 5.8220 4.638e-03 -0.6511331 17
2 Matthieu & Norman 2020 -2.7398 0.8103 7.424e-04 -0.0920000 1352
3 Khangura 2011 -1.9610 1.3470 1.536e-01 -0.2298655 40
4 Kutcher 1990 -21.1350 3.9330 1.420e-05 -0.7321321 27
5 Steed 2007 -13.9700 3.2500 7.355e-04 -0.7542836 16
6 Steed 2007 -31.9440 4.7650 2.765e-04 -0.9301781 9
7 Steed 2007 -4.2780 3.0940 1.883e-01 -0.3466844 16
8 Sprague et al 2010 -21.9880 5.3010 7.562e-04 -0.7198487 18
9 Upadhaya et al 2019 -43.6170 3.3440 1.133e-06 -0.9772861 10
10 Khangura et al 2005 -5.3500 NA 5.000e-02 -0.4200000 29
11 Khangura et al 2005 -10.5600 NA 1.000e-03 -0.5700000 29
12 Khangura et al 2005 -9.9700 NA 1.300e-02 -0.4600000 29
13 Khangura et al 2005 -5.4500 NA 1.100e-02 -0.4700000 29
14 Khangura et al 2005 -22.7500 NA 2.800e-02 -0.4200000 29
15 Khangura et al 2005 -16.8300 NA 2.200e-02 -0.4300000 29
16 Khangura et al 2005 -9.2100 NA 3.900e-02 -0.3900000 29
V1 <- c("Ballinger et al", "Matthieu & Norman", "Khangura", "Kutcher", "Steed", "Steed", "Steed", "Sprague et al",
"Upadhaya et al", "Khangura et al", "Khangura et al", "Khangura et al", "Khangura et al", "Khangura et al",
"Khangura et al", "Khangura et al" )
V2 <- c(1988, 2020, 2011, 1990, 2007, 2007, 2007, 2010, 2019, 2005, 2005, 2005, 2005, 2005, 2005, 2005)
V3 <- c(-19.3450, -2.7398, -1.9610, -21.1350, -13.9700, -31.9440, -4.2780, -21.9880, -43.6170, -5.3500,
-10.5600, -9.9700, -5.4500, -22.7500, -16.8300, -9.2100)
V4 <- c(5.8220, 0.8103, 1.3470, 3.9330, 3.2500, 4.7650, 3.0940, 5.3010, 3.3440, NA, NA, NA, NA, NA, NA, NA)
V5 <- c(4.638e-03, 7.424e-04, 1.536e-01, 1.420e-05, 7.355e-04, 2.765e-04, 1.883e-01, 7.562e-04, 1.133e-06,
5.000e-02, 1.000e-03, 1.300e-02, 1.100e-02, 2.800e-02, 2.200e-02, 3.900e-02)
V6 <- c(-0.6511331, -0.0920000, -0.2298655, -0.7321321, -0.7542836, -0.9301781, -0.3466844, -0.7198487,
-0.9772861, -0.4200000, -0.5700000, -0.4600000, -0.4700000, -0.4200000, -0.4300000, -0.3900000)
V7 <- c(17, 1352, 40, 27, 16, 9, 16, 18, 10, 29, 29, 29, 29, 29, 29, 29)
dat <- cbind(V1, V2, V3, V4, V5, V6, V7)
dat <- as.data.frame(dat)
dat$V1 <- as.character(dat$V1)
dat$V2 <- as.integer(as.character(dat$V2))
dat$V3 <- as.numeric(as.character(dat$V3))
dat$V4 <- as.numeric(as.character(dat$V4))
dat$V5 <- as.numeric(as.character(dat$V5))
dat$V6 <- as.numeric(as.character(dat$V6))
dat$V7 <- as.numeric(as.character(dat$V7))
str(dat)
dat <- dat %>% rename(author = "V1",
year = "V2",
estimate = "V3",
std_error = "V4",
p_value = "V5",
r = "V6",
number_of_obs = "V7")
for (i in 1:nrow(dat)){
  if (is.na(dat$std_error[i]) == TRUE ){
    # note: df = dat$number_of_obs - 2 passes the whole column (length 16),
    # not just row i, so qt() returns a vector -- this is what triggers the
    # recycling warning reported below
    dat$std_error[i] <- abs(dat$estimate[i]) / qt(dat$p_value[i]/2,
                            df=dat$number_of_obs-2, lower.tail=FALSE)
  }
}
res <- rma.mv(yi = dat$estimate, V = (dat$std_error)**2, random = ~ 1 | author, data=dat)
coef_test(res, vcov="CR2")
forest(res, addcred = TRUE, showweights = TRUE, header = TRUE,
       order = "obs", col = "blue", slab = dat$author)
funnel(res)
ranktest(res)

The for loop that I use (and in which I use a formula that Dr Viechtbauer gave me in a previous answer) to calculate the standard errors for estimates that I didn't calculate myself gives me an error message, but still gives me values.
The error message is :
Warning messages:
1: In data$std_error[i] <- abs(data$estimate[i])/qt(data$p_value[i]/2, :
number of items to replace is not a multiple of replacement length
and it is repeated on 7 lines. I am not entirely sure, but I think this comes from the fact that values are set to NA before the loop is run and are replaced by values, so replacing an item of length 0 with an item of length 1, I believe.

I hope this example can be run simply with a copy/paste, I think it should.

Did I do things correctly? If not, what should I modify?

Thank you!
Norman
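The back-calculation in the loop above inverts a two-sided t test: se = |b| / t_quantile(1 - p/2, df = n - 2). As a rough cross-check against the one row of the table where both the standard error and the p-value are reported, here is a Python sketch using the stdlib normal quantile in place of the t quantile (an approximation that is only adequate here because df = 1352 - 2 is large):

```python
from statistics import NormalDist

# Back out a standard error from a two-sided p-value: se = |b| / q(1 - p/2).
# The stdlib normal quantile stands in for the t quantile used in the thread;
# that approximation is only adequate because df = 1352 - 2 is large.
b, p = -2.7398, 7.424e-04            # row 2 of the table above (Matthieu & Norman)
z = NormalDist().inv_cdf(1 - p / 2)  # roughly 3.4
se = abs(b) / z
print(round(se, 3))                  # close to the reported std_error of 0.8103
```

For the small-df rows (n around 9 to 29), the normal approximation would be too coarse and a proper t quantile, as in the R code, is needed.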
Dear Norman,

Did you read this?

http://www.metafor-project.org/doku.php/tips:weights_in_rma.mv_models

It answers 1) and 2). Let's take an even simpler example:

library(metafor)
dat <- data.frame(study = c(1,1,2,2,3,4), id = 1:6,
                  yi = c(.1,.3,.2,.4,.6,.8), vi = rep(.01,6))
dat # studies 1 and 2 contribute 2 estimates each, studies 3 and 4 a single estimate
res <- rma.mv(yi, vi, random = ~ 1 | study/id, data=dat)
res
weights(res)

The output of this is:

> weights(res)
        1         2         3         4         5         6
20.485124 20.485124 20.485124 20.485124  9.029753  9.029753

So maybe this is what you are observing: that the values along the diagonal of the weight matrix are larger for the studies with 2 estimates. But those weights are just based on the diagonal of the weight matrix. One really needs to take the whole weight matrix into consideration. As described on that page, the actual weights assigned to the estimates when computing the weighted average of the estimates are the row sums of the weight matrix. You can get this with:

W <- weights(res, type="matrix")
rowSums(W) / sum(W) * 100

This yields:

> rowSums(W) / sum(W) * 100
       1        2        3        4        5        6
13.34116 13.34116 13.34116 13.34116 23.31768 23.31768

So actually, the weight assigned to the first and second estimate of the studies with 2 estimates is smaller than the weight assigned to the single estimate of the studies that contribute a single estimate.

Not sure what the problem is with 3), but I don't think your data are needed to clarify the issue.

Best,
Wolfgang
-----Original Message----- From: Norman DAURELLE [mailto:norman.daurelle at agroparistech.fr] Sent: Monday, 15 June, 2020 10:10 To: Viechtbauer, Wolfgang (SP) Cc: r-sig-meta-analysis Subject: RE: [R-meta] weight in rmv metafor Dear all, Dr Viechtbauer, Dr Del Ponte, I read what you suggested, and went through the archives of the mailing list, and even though some threads were dealing with similar questions, I didn't really find the answers I was looking for there. I have multiple questions : 1) How exactly is calculated the weight of each estimate in the function rma.mv() ? 2) (this question might be answered too when answering the first one but...) Why does that function (rma.mv) attribute more weight to estimates coming from studies that have multiple estimates relatively to ones that come from studies that only have one estimate ? 3) I tried to produce a reproducible example of my code, would it be possible to tell me if I got things right ? my dataset is this table (I didn't know if I could attach anything) :
dat
              author year estimate std_error   p_value          r number_of_obs
1    Ballinger et al 1988 -19.3450    5.8220 4.638e-03 -0.6511331            17
2  Matthieu & Norman 2020  -2.7398    0.8103 7.424e-04 -0.0920000          1352
3           Khangura 2011  -1.9610    1.3470 1.536e-01 -0.2298655            40
4            Kutcher 1990 -21.1350    3.9330 1.420e-05 -0.7321321            27
5              Steed 2007 -13.9700    3.2500 7.355e-04 -0.7542836            16
6              Steed 2007 -31.9440    4.7650 2.765e-04 -0.9301781             9
7              Steed 2007  -4.2780    3.0940 1.883e-01 -0.3466844            16
8      Sprague et al 2010 -21.9880    5.3010 7.562e-04 -0.7198487            18
9     Upadhaya et al 2019 -43.6170    3.3440 1.133e-06 -0.9772861            10
10    Khangura et al 2005  -5.3500        NA 5.000e-02 -0.4200000            29
11    Khangura et al 2005 -10.5600        NA 1.000e-03 -0.5700000            29
12    Khangura et al 2005  -9.9700        NA 1.300e-02 -0.4600000            29
13    Khangura et al 2005  -5.4500        NA 1.100e-02 -0.4700000            29
14    Khangura et al 2005 -22.7500        NA 2.800e-02 -0.4200000            29
15    Khangura et al 2005 -16.8300        NA 2.200e-02 -0.4300000            29
16    Khangura et al 2005  -9.2100        NA 3.900e-02 -0.3900000            29
V1 <- c("Ballinger et al", "Matthieu & Norman", "Khangura", "Kutcher",
"Steed", "Steed", "Steed", "Sprague et al",
??????? "Upadhaya et al", "Khangura et al", "Khangura et al", "Khangura et
al", "Khangura et al", "Khangura et al",
??????? "Khangura et al", "Khangura et al" )
V2 <- c(1988, 2020, 2011, 1990, 2007, 2007, 2007, 2010, 2019, 2005, 2005,
2005, 2005, 2005, 2005, 2005)
V3 <- c(-19.3450, -2.7398, -1.9610, -21.1350, -13.9700, -31.9440, -4.2780, -
21.9880, -43.6170, -5.3500,
??????? -10.5600, -9.9700, -5.4500, -22.7500, -16.8300, -9.2100)
V4 <- c(5.8220, 0.8103, 1.3470, 3.9330, 3.2500, 4.7650, 3.0940, 5.3010,
3.3440, NA, NA, NA, NA, NA, NA, NA)
V5 <- c(4.638e-03, 7.424e-04, 1.536e-01, 1.420e-05, 7.355e-04, 2.765e-04,
1.883e-01, 7.562e-04, 1.133e-06,
??????? 5.000e-02, 1.000e-03, 1.300e-02, 1.100e-02, 2.800e-02, 2.200e-02,
3.900e-02)
V6 <- c(-0.6511331, -0.0920000, -0.2298655, -0.7321321, -0.7542836, -
0.9301781, -0.3466844, -0.7198487,
??????? -0.9772861, -0.4200000, -0.5700000, -0.4600000, -0.4700000, -
0.4200000, -0.4300000, -0.3900000)
V7 <- c(17, 1352, 40, 27, 16, 9, 16, 18, 10, 29, 29, 29, 29, 29, 29, 29)
dat <- cbind(V1, V2, V3, V4, V5, V6, V7)
dat <- as.data.frame(dat)
dat$V1 <- as.character(dat$V1)
dat$V2 <- as.integer(as.character(dat$V2))
dat$V3 <- as.numeric(as.character(dat$V3))
dat$V4 <- as.numeric(as.character(dat$V4))
dat$V5 <- as.numeric(as.character(dat$V5))
dat$V6 <- as.numeric(as.character(dat$V6))
dat$V7 <- as.numeric(as.character(dat$V7))
str(dat)
dat <- dat %>% rename(author = "V1",
????????????????????? year = "V2",
????????????????????? estimate = "V3",
????????????????????? std_error = "V4",
????????????????????? p_value = "V5",
????????????????????? r = "V6",
????????????????????? number_of_obs = "V7")
for (i in 1:nrow(dat)){
? if (is.na(dat$std_error[i]) == TRUE ){
??? dat$std_error[i] <- abs(dat$estimate[i]) / qt(dat$p_value[i]/2,
??????????????????????????????????????????????????????????? df=dat$number_of
_obs-2, lower.tail=FALSE)
? }
}
res <- rma.mv(yi = dat$estimate, V = (dat$std_error)**2, random = ~ 1 |
author, data=dat)
coef_test(res, vcov="CR2")
forest(res, addcred = TRUE, showweights = TRUE, header = TRUE,
?????? order = "obs", col = "blue", slab = dat$author)
funnel(res)
ranktest(res)
The for loop above (which uses a formula that Dr Viechtbauer gave me in a previous answer) to calculate the standard errors for the estimates I didn't compute myself still gives me values, but with df = dat$number_of_obs - 2 (i.e., without the [i]) it produces this warning, repeated seven times:

Warning messages:
1: In dat$std_error[i] <- abs(dat$estimate[i])/qt(dat$p_value[i]/2, :
  number of items to replace is not a multiple of replacement length

The cause is that dat$number_of_obs - 2 passes the whole length-16 vector to qt(), so the right-hand side of the assignment has length 16 while dat$std_error[i] on the left has length 1; one warning is emitted for each of the seven rows with a missing standard error. Subscripting with dat$number_of_obs[i] - 2 avoids it.
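As an aside, the same back-calculation can be done without a loop. A vectorized sketch, under the same assumptions as the formula above (two-sided p-values, df = n - 2):

```r
# fill in only the rows with a missing standard error, indexing every
# column by the same logical vector so all lengths match
miss <- is.na(dat$std_error)
dat$std_error[miss] <- abs(dat$estimate[miss]) /
  qt(dat$p_value[miss] / 2, df = dat$number_of_obs[miss] - 2, lower.tail = FALSE)
```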
I hope this example can be run with a simple copy/paste; I think it should. Did I do things correctly? If not, what should I modify?

Thank you!
Norman
1 day later
Dear all, Dr. Viechtbauer,

Thank you very much for your answer! I hadn't come across that article, no. This, combined with your answer itself, was very helpful. If the real weights attributed to the estimates are obtained that way, then what do the weights that appear when using the showweights parameter of the forest() function represent?

Thank you!
Norman

----- Mail d'origine -----
De: Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl>
À: Norman DAURELLE <norman.daurelle at agroparistech.fr>
Cc: r-sig-meta-analysis <r-sig-meta-analysis at r-project.org>
Envoyé: Mon, 15 Jun 2020 13:52:16 +0200 (CEST)
Objet: RE: [R-meta] weight in rmv metafor

Dear Norman,

Did you read this? http://www.metafor-project.org/doku.php/tips:weights_in_rma.mv_models

It answers 1) and 2). Let's take an even simpler example:

library(metafor)
dat <- data.frame(study = c(1,1,2,2,3,4), id = 1:6,
                  yi = c(.1,.3,.2,.4,.6,.8), vi = rep(.01,6))
dat # studies 1 and 2 contribute 2 estimates each, studies 3 and 4 a single estimate
res <- rma.mv(yi, vi, random = ~ 1 | study/id, data=dat)
res
weights(res)

The output of this is:
weights(res)
        1         2         3         4         5         6
20.485124 20.485124 20.485124 20.485124  9.029753  9.029753

So maybe this is what you are observing - that the values along the diagonal of the weight matrix are larger for the studies with 2 estimates. But those weights are just based on the diagonal of the weight matrix. One really needs to take the whole weight matrix into consideration. As described on that page, the actual weights assigned to the estimates when computing the weighted average of the estimates are the row sums of the weight matrix. You can get this with:

W <- weights(res, type="matrix")
rowSums(W) / sum(W) * 100

This yields:
rowSums(W) / sum(W) * 100
       1        2        3        4        5        6
13.34116 13.34116 13.34116 13.34116 23.31768 23.31768

So actually, the weight assigned to the first and second estimates of the studies with 2 estimates is smaller than the weight assigned to the single estimate of the studies that contribute a single estimate.

Not sure what the problem is with 3), but I don't think your data are needed to clarify the issue.

Best,
Wolfgang
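[Editor's note: to see the mechanism Wolfgang describes, it can help to print the full weight matrix of the toy example, where the negative off-diagonal elements linking estimates from the same study become visible. A minimal self-contained sketch, assuming only that the metafor package is installed:

```r
library(metafor)

# Wolfgang's toy example: studies 1 and 2 contribute two estimates each,
# studies 3 and 4 a single estimate each
dat <- data.frame(study = c(1, 1, 2, 2, 3, 4), id = 1:6,
                  yi = c(.1, .3, .2, .4, .6, .8), vi = rep(.01, 6))
res <- rma.mv(yi, vi, random = ~ 1 | study/id, data = dat)

# full weight matrix: the off-diagonal elements linking estimates from the
# same study are negative, which is why a study's estimates end up sharing
# its total weight rather than counting as independent studies
W <- weights(res, type = "matrix")
round(W, 2)

# row sums, rescaled to percentages: the weights actually used in the
# weighted average of the estimates
round(rowSums(W) / sum(W) * 100, 5)
```
]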
2 days later
When showweights=TRUE in forest(x) where 'x' is an 'rma.mv' object, the weights are based on the diagonal of the weight matrix (same as when using weights(x)). Best, Wolfgang
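[Editor's note: putting the two statements side by side, for any fitted rma.mv object the diagonal-based weights and the row-sum weights can be compared directly. A sketch, assuming a fitted rma.mv object named res, and assuming (consistent with the output shown earlier in the thread) that weights(res) reports the rescaled diagonal:

```r
W <- weights(res, type = "matrix")

# what forest(res, showweights=TRUE) and weights(res) are based on:
# the diagonal of the weight matrix, rescaled to percentages
diag(W) / sum(diag(W)) * 100

# the weights actually assigned to the estimates in the weighted average:
# the row sums of the weight matrix, rescaled to percentages
rowSums(W) / sum(W) * 100
```
]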