Hello,

I am trying to run a cluster wild bootstrap, but am getting the following error:

Error in constrain_zero(constraints = constraints, coefs = coefs) :
  Constraint indices must be less than or equal to 1.

Question: What does this mean?

Thank you,
Brendan

Background (if relevant): I am running a purely exploratory series of meta-analyses of the relationships between several predictors and outcomes (i.e., n x m meta-analyses). There is non-independence within each predictor-outcome pair (some studies report multiple effect sizes for the same group of participants) and the effect sizes are nested. I am following the general workflow outlined here (https://wviechtb.github.io/metafor/reference/misc-recs.html#general-workflow-for-meta-analyses-involving-complex-dependency-structures) and want to use cluster wild bootstrapping because some analyses have very few studies (and cluster-robust inference methods led to very wide confidence intervals).

Minimal working example:

```{r}
dat_temp_mwe <- structure(list(
  study = c("A", "B", "C", "D", "E", "E", "F", "F", "F", "F", "G"),
  effect_id = c(11, 28, 73, 93, 115, 232, 236, 242, 252, 266, 284),
  Paper = c("AA", "BB", "CC", "DD", "EE1", "EE2", "FF1", "FF2", "FF3", "FF4", "GG"),
  Mean_age_when_outcome_measured = c(21, 19, 26, 19, 19, 21, 21, 21, 21, 19, 19),
  yi = structure(c(-0.0401817896328312, -0.0700000000000002, -0.151002873536528,
                   -0.113328685307003, -0.139761942375159, -0.0392207131532808,
                   -0.0487901641694324, -0.05, -0.041141943331175,
                   -0.0421011760186351, -0.604315966853329),
                 ni = c(1566, 844, 624, 355, 7449, 2135, 2410, 4853, 6912, 7842, 1202),
                 measure = "GEN"),
  vi = c(0.00014647659162424, 0.000527143487544687, 0.00336452354486442,
         0.00116040765035603, 0.00667694039383453, 9.6295107522168e-05,
         9.44692075770055e-05, 0.000100003675148229, 2.50009187870589e-05,
         2.50009187870581e-05, 0.0292124283937479)),
  row.names = c(NA, 11L), class = c("escalc", "data.frame"),
  yi.names = "yi", vi.names = "vi",
  digits = c(est = 4, se = 4, test = 4, pval = 4, ci = 4,
             var = 4, sevar = 4, fit = 4, het = 4))

V <- vcalc(vi,
           cluster = study,
           obs = effect_id,
           time1 = Mean_age_when_outcome_measured,
           data = dat_temp_mwe,
           rho = 0.8,
           phi = 0.9)

meta_analysis_output <- rma.mv(yi,
                               V = V,
                               random = ~ 1 | study / Paper / effect_id,
                               data = dat_temp_mwe,
                               control = list(rel.tol = 1e-8))

Wald_test_cwb(full_model = meta_analysis_robust,
              constraints = constrain_equal(1:3),
              R = 99,
              seed = 20201229)
```
[R-meta] Constraint error when using Wald_test_cwb
8 messages · Michael Dewey, Pearl, Brendan, Wolfgang Viechtbauer +1 more
Dear Brendan,

When I run your MWE (after inserting library(metafor)) I get

Error in Wald_test_cwb(full_model = meta_analysis_robust, constraints = constrain_equal(1:3), :
  could not find function "Wald_test_cwb"

Michael
On 14/09/2024 10:28, Pearl, Brendan via R-sig-meta-analysis wrote:
_______________________________________________ R-sig-meta-analysis mailing list @ R-sig-meta-analysis at r-project.org To manage your subscription to this mailing list, go to: https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
Michael
Wald_test_cwb() is for testing null hypotheses specified by a constraint or constraints on the model coefficients. In your MWE, you fit a summary meta-analysis with only one beta coefficient, so the constraints can only refer to that first coefficient (hence "Constraint indices must be less than or equal to 1"). As an aside, note that the same error would occur if you used the more basic Wald_test() from clubSandwich. James
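[Editor's note: to make the index arithmetic concrete, here is a sketch that is not part of the original reply. It assumes the MWE objects above have been created and that library(wildmeta) is loaded; the corrected call below is illustrative, not a recommendation from the thread.]

```{r}
# Sketch only: assumes the MWE above has been run and library(wildmeta) loaded.
# The intercept-only model has a single coefficient, so the only valid
# constraint index is 1. Note the object name as well: the MWE saves the fit
# as meta_analysis_output, but passes meta_analysis_robust (undefined).
Wald_test_cwb(full_model = meta_analysis_output,
              constraints = constrain_zero(1),  # H0: intercept = 0
              R = 99,
              seed = 20201229)
```

constrain_equal(1:3) would instead presuppose a model with at least three coefficients, e.g. a meta-regression on a three-level factor.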
Hi Michael, Thank you; I forgot to include the libraries in my minimal working example: Wald_test_cwb() is from the "wildmeta" package. Brendan.
From: Michael Dewey <lists at dewey.myzen.co.uk>
Sent: Saturday, 14 September 2024 10:31 PM
To: R Special Interest Group for Meta-Analysis
Cc: Pearl, Brendan
Subject: Re: [R-meta] Constraint error when using Wald_test_cwb
Hi James, Thank you for this. For anyone else who stumbles on this question, I also found that James answered it here: https://github.com/meghapsimatrix/wildmeta/issues/17#issuecomment-1833956635
From: R-sig-meta-analysis <r-sig-meta-analysis-bounces at r-project.org> on behalf of James Pustejovsky via R-sig-meta-analysis <r-sig-meta-analysis at r-project.org>
Sent: Saturday, 14 September 2024 10:54:55 PM
To: R Special Interest Group for Meta-Analysis
Cc: James Pustejovsky
Subject: Re: [R-meta] Constraint error when using Wald_test_cwb
2 days later
A follow-up question (for James) on my part. I saw this response by James: "If you only have 1 coefficient, then you don't need to go to the trouble of cluster wild bootstrapping--you can just use RVE with small sample corrections" (https://github.com/meghapsimatrix/wildmeta/issues/17#issuecomment-1833956635) with respect to a model that includes an intercept and 1 moderator. So, if I understand this correctly, then cluster wild bootstrapping only becomes relevant (i.e., has some advantages over RVE) when the model includes more than 1 moderator?

But then I also saw this issue: https://github.com/meghapsimatrix/wildmeta/issues/18 about the possibility of doing CWB for a model with just an intercept term. So is CWB in principle useful for this scenario?

And to answer the question in that issue: As was just discussed in another thread, it is possible to fit an rma.mv() model with only an intercept term where the coefficient is constrained to 0, with rma.mv(..., beta=0). This is undocumented, experimental, and implemented right now in a rather crude manner, but I am actually working on rma.mv() right now so that, instead of profiling out the fixed effects (and then optimizing only over the variance/correlation components of the model), one can optimize over the fixed effects as well. Then one can also constrain coefficients in meta-regression models to 0.
This is already possible with rma() when fitting location-scale models:

```{r}
library(metafor)
dat <- dat.bcg
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat)
res1 <- rma(yi, vi, mods = ~ 1,     scale = ~ 1, data=dat, method="ML", optbeta=TRUE)
res0 <- rma(yi, vi, mods = ~ ablat, scale = ~ 1, data=dat, method="ML", optbeta=TRUE, beta=c(NA,0))
res1
res0
fitstats(res0, res1)
```

(With method="REML", leaving out a moderator is not the same as constraining its coefficient to zero, since in REML the model matrix affects the restricted log-likelihood; but under ML, as shown above, the two approaches are identical.) Best, Wolfgang
Hi Wolfgang, Responses inline below. Cheers, James

On Tue, Sep 17, 2024 at 2:34 AM Viechtbauer, Wolfgang (NP) <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
A follow-up question (for James) on my part. I saw this response by James: "If you only have 1 coefficient, then you don't need to go to the trouble of cluster wild bootstrapping--you can just use RVE with small sample corrections" (https://github.com/meghapsimatrix/wildmeta/issues/17#issuecomment-1833956635) with respect to a model that includes an intercept and 1 moderator. So, if I understand this correctly, then cluster wild bootstrapping only becomes relevant (i.e., has some advantages over RVE) when the model includes more than 1 moderator?
A more precise statement is that the difference between CWB and RVE (with small-sample corrections as implemented in clubSandwich) hypothesis tests appears to be quite small for null hypotheses involving a single constraint. Probably the major use-case where this holds is tests of a single coefficient in a meta-regression model (regardless of whether that model has a single predictor or multiple predictors). The difference in calibration between CWB and RVE becomes more apparent for null hypotheses involving _multiple_ constraints, such as omnibus tests of a model or tests that the average effects are equal across levels of a categorical predictor with 3 or more categories. My statements here are supported by simulation findings in Joshi, Pustejovsky, and Beretvas (2022; https://jepusto.com/publications/cluster-wild-bootstrap-for-meta-analysis/index.html). The reason you might expect worse calibration for RVE-based tests of multiple-constraint hypotheses is that it is harder to find a good small-sample approximation to the null distribution of the test statistic when you are dealing with more than one constraint (in the simple case of a single constraint, the Satterthwaite approximation is well understood and works pretty well). The CWB test does better here by substituting brute-force computation for the hard mathy bits.
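[Editor's note: the following sketch is not from the thread. It illustrates the single- vs. multiple-constraint distinction using metafor's built-in dat.bcg data, under the assumption that clubSandwich and wildmeta behave as documented.]

```{r}
library(metafor)
library(clubSandwich)
library(wildmeta)

dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg, data = dat.bcg)

# Meta-regression with a 3-level categorical moderator (allocation method),
# clustering by trial:
res <- rma.mv(yi, vi, mods = ~ 0 + factor(alloc), random = ~ 1 | trial, data = dat)

# RVE test of the multiple-constraint hypothesis that all three average
# effects are equal (2 constraints), using the CR2 correction and HTZ test:
Wald_test(res, constraints = constrain_equal(1:3), vcov = "CR2", test = "HTZ")

# The CWB analogue of the same multiple-constraint test:
Wald_test_cwb(full_model = res, constraints = constrain_equal(1:3), R = 99)
```

For a single-coefficient test (one constraint), the two approaches would typically agree closely, per the discussion above.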
But then I also saw this issue: https://github.com/meghapsimatrix/wildmeta/issues/18 about the possibility of doing CWB for a model with just an intercept term. So is CWB in principle useful for this scenario?
I was interested in this as a hypothetical edge case more so than as a practical issue. It would become relevant if one were trying to use CWB to construct a confidence interval for an average effect size, because you'd need to profile across different values of beta. But the wildmeta package does not yet implement confidence intervals, so this is just a hypothetical at this point.
And to answer the question in that issue: As was just discussed in another thread, it is possible to fit an rma.mv() model with only an intercept term where the coefficient is constrained to 0, with rma.mv(..., beta=0). This is undocumented, experimental, and implemented right now in a rather crude manner, but I am actually working on rma.mv() right now so that instead of profiling out the fixed effects (and then optimizing only over the variance/correlation components of the model), one can optimize over the fixed effects as well.
This is good to know! Thanks for the worked example.
-----Original Message----- From: R-sig-meta-analysis <r-sig-meta-analysis-bounces at r-project.org>
On Behalf
Of Pearl, Brendan via R-sig-meta-analysis Sent: Sunday, September 15, 2024 01:13 To: R Special Interest Group for Meta-Analysis <r-sig-meta-analysis at r- project.org> Cc: Pearl, Brendan <Brendan.Pearl at mh.org.au> Subject: Re: [R-meta] Constraint rrror when using Wald_test_cwb Hi James, Thankyou for this. For anyone else who stumbles on this question, I also found that James
answered
it here:
1833956635
________________________________
From: R-sig-meta-analysis <r-sig-meta-analysis-bounces at r-project.org> on behalf of James Pustejovsky via R-sig-meta-analysis <r-sig-meta-analysis at r-project.org>
Sent: Saturday, 14 September 2024 10:54:55 PM
To: R Special Interest Group for Meta-Analysis
Cc: James Pustejovsky
Subject: Re: [R-meta] Constraint error when using Wald_test_cwb

Wald_test_cwb() is for testing null hypotheses specified by a constraint or constraints on the model coefficients. In your MWE, you fit a summary meta-analysis with only one beta coefficient, so the constraints can only refer to that first coefficient (hence "Constraint indices must be less than or equal to 1"). As an aside, note that the same error would occur if you use the more basic Wald_test() from clubSandwich.

James
On Sep 14, 2024, at 7:32 AM, Michael Dewey via R-sig-meta-analysis <r-sig-meta-analysis at r-project.org> wrote:

Dear Brendan

When I run your MWE (after inserting library(metafor)) I get

Error in Wald_test_cwb(full_model = meta_analysis_robust, constraints = constrain_equal(1:3), : could not find function "Wald_test_cwb"

Michael
On 14/09/2024 10:28, Pearl, Brendan via R-sig-meta-analysis wrote:

Hello,

I am trying to run a cluster wild bootstrap, but am getting the following error:

Error in constrain_zero(constraints = constraints, coefs = coefs) : Constraint indices must be less than or equal to 1.

Question: What does this mean?

Thank you,
Brendan

Background (if relevant): I am running a purely exploratory series of meta-analyses of the relationships between several predictors and outcomes (i.e. n x m meta-analyses). There is non-independence within each predictor-relationship pair (some studies report multiple effect sizes for the same group of participants) and the effect sizes are nested. I am following the general workflow outlined here: https://wviechtb.github.io/metafor/reference/misc-recs.html#general-workflow-for-meta-analyses-involving-complex-dependency-structures and want to use cluster wild bootstrapping because some analyses have very few studies (and cluster-robust inference methods led to very wide confidence intervals).
Minimal working example:
```{r}
library(metafor)      # escalc objects, vcalc(), rma.mv()
library(clubSandwich) # constrain_equal()
library(wildmeta)     # Wald_test_cwb()

dat_temp_mwe <- structure(list(
  study = c("A", "B", "C", "D", "E", "E", "F", "F", "F", "F", "G"),
  effect_id = c(11, 28, 73, 93, 115, 232, 236, 242, 252, 266, 284),
  Paper = c("AA", "BB", "CC", "DD", "EE1", "EE2", "FF1", "FF2", "FF3", "FF4", "GG"),
  Mean_age_when_outcome_measured = c(21, 19, 26, 19, 19, 21, 21, 21, 21, 19, 19),
  yi = structure(c(-0.0401817896328312, -0.0700000000000002, -0.151002873536528,
                   -0.113328685307003, -0.139761942375159, -0.0392207131532808,
                   -0.0487901641694324, -0.05, -0.041141943331175,
                   -0.0421011760186351, -0.604315966853329),
                 ni = c(1566, 844, 624, 355, 7449, 2135, 2410, 4853, 6912, 7842, 1202),
                 measure = "GEN"),
  vi = c(0.00014647659162424, 0.000527143487544687, 0.00336452354486442,
         0.00116040765035603, 0.00667694039383453, 9.6295107522168e-05,
         9.44692075770055e-05, 0.000100003675148229, 2.50009187870589e-05,
         2.50009187870581e-05, 0.0292124283937479)),
  row.names = c(NA, 11L), class = c("escalc", "data.frame"),
  yi.names = "yi", vi.names = "vi",
  digits = c(est = 4, se = 4, test = 4, pval = 4, ci = 4, var = 4,
             sevar = 4, fit = 4, het = 4))

V <- vcalc(vi,
           cluster = study,
           obs = effect_id,
           time1 = Mean_age_when_outcome_measured,
           data = dat_temp_mwe,
           rho = 0.8,
           phi = 0.9)

meta_analysis_output <- rma.mv(
  yi,
  V = V,
  random = ~ 1 | study / Paper / effect_id,
  data = dat_temp_mwe,
  control = list(rel.tol = 1e-8))

# (The posted MWE passed 'meta_analysis_robust' here, an object never defined
# above; 'meta_analysis_output' is presumably what was intended.)
Wald_test_cwb(full_model = meta_analysis_output,
              constraints = constrain_equal(1:3),
              R = 99,
              seed = 20201229)
```
Thanks for the feedback! Just as a follow-up: I just pushed an update to GitHub that makes an (undocumented) 'optbeta' argument also available for rma.mv(). When optbeta=TRUE, the optimization is carried out not only over the variance/correlation components of the model, but also over the fixed effects (by default, the fixed effects are profiled out). This also allows constraining a fixed effect to a given value in a meta-regression model. With this, these two yield (essentially) identical results:

```{r}
rma.mv(yi, vi, random = ~ 1 | trial, data=dat, method="ML", optbeta=TRUE)
rma.mv(yi, vi, mods = ~ ablat, random = ~ 1 | trial, data=dat, method="ML", optbeta=TRUE, beta=c(NA,0))
```

Best,
Wolfgang
-----Original Message-----
From: James Pustejovsky <jepusto at gmail.com>
Sent: Tuesday, September 17, 2024 16:38
To: Viechtbauer, Wolfgang (NP) <wolfgang.viechtbauer at maastrichtuniversity.nl>
Cc: R Special Interest Group for Meta-Analysis <r-sig-meta-analysis at r-project.org>
Subject: Re: [R-meta] Constraint error when using Wald_test_cwb

Hi Wolfgang,

Responses inline below.

Cheers,
James

On Tue, Sep 17, 2024 at 2:34 AM Viechtbauer, Wolfgang (NP) <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
A follow-up question (for James) on my part. I saw this response by James: "If you only have 1 coefficient, then you don't need to go to the trouble of cluster wild bootstrapping--you can just use RVE with small sample corrections" (https://github.com/meghapsimatrix/wildmeta/issues/17#issuecomment-1833956635) with respect to a model that includes an intercept and 1 moderator. So, if I understand this correctly, then cluster wild bootstrapping only becomes relevant (i.e., has some advantages over RVE) when the model includes more than 1 moderator?

A more precise statement is that the difference between CWB and RVE (with small-sample corrections as implemented in clubSandwich) hypothesis tests appears to be quite small for null hypotheses involving a single constraint. Probably the major use case where this holds is tests of a single coefficient in a meta-regression model (regardless of whether that model has a single predictor or multiple predictors). The difference in calibration between CWB and RVE becomes more apparent for null hypotheses involving _multiple_ constraints, such as for omnibus tests of a model or for tests that the average effects are equal across levels of a categorical predictor with 3 or more categories. My statements here are supported by simulation findings in Joshi, Pustejovsky, and Beretvas (2022; https://jepusto.com/publications/cluster-wild-bootstrap-for-meta-analysis/index.html). The reason you might expect worse calibration for RVE-based tests of multiple-constraint hypotheses is that it is harder to find a good small-sample approximation to the null distribution of the test statistic when you're dealing with more than one constraint (in the simple case of a single constraint, the Satterthwaite approximation is well understood and works pretty well). The CWB test does better here by substituting brute-force computation for the hard mathy bits.
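[Editor's note: the single- vs. multiple-constraint distinction James describes can be sketched with clubSandwich's RVE-based Wald_test(); this illustration is not from the thread and assumes the built-in dat.bcg example data.]

```{r}
library(metafor)
library(clubSandwich)

dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
res <- rma.mv(yi, vi, mods = ~ factor(alloc), random = ~ 1 | trial, data=dat)

# Single constraint: test of one coefficient. The Satterthwaite-type
# small-sample correction with the CR2 variance estimator is generally
# well calibrated here.
Wald_test(res, constraints = constrain_zero(2), vcov = "CR2")

# Multiple constraints: omnibus test that both non-intercept coefficients
# are zero, i.e. equal average effects across the 3 allocation levels.
# This is the setting where CWB tends to improve on RVE-based tests.
Wald_test(res, constraints = constrain_zero(2:3), vcov = "CR2", test = "HTZ")
```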
But then I also saw this issue: https://github.com/meghapsimatrix/wildmeta/issues/18 about the possibility to do CWB for a model with just an intercept term. So is CWB in principle useful for this scenario?

I was interested in this as a hypothetical edge case more so than as a practical issue. It would become relevant if one were trying to use CWB to construct a confidence interval for an average effect size, because you'd need to profile across different values of beta. But the wildmeta package does not yet implement confidence intervals, so this is just a hypothetical at this point.
And to answer the question in that issue: As was just discussed in another thread, it is possible to fit an rma.mv() model with only an intercept term where the coefficient is constrained to 0, with rma.mv(..., beta=0). This is undocumented, experimental, and implemented right now in a rather crude manner, but I am actually working on rma.mv() right now so that instead of profiling out the fixed effects (and then optimizing only over the variance/correlation components of the model), one can optimize over the fixed effects as well.

This is good to know! Thanks for the worked example.

Then one can also constrain coefficients in meta-regression models to 0. This is already possible with rma() when fitting location-scale models:

```{r}
library(metafor)
dat <- dat.bcg
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat)
res1 <- rma(yi, vi, mods = ~ 1,     scale = ~ 1, data=dat, method="ML", optbeta=TRUE)
res0 <- rma(yi, vi, mods = ~ ablat, scale = ~ 1, data=dat, method="ML", optbeta=TRUE, beta=c(NA,0))
res1
res0
fitstats(res0, res1)
```

(With method="REML", leaving out a moderator is not the same as constraining its coefficient to zero, since in REML the model matrix affects the restricted log-likelihood, but under ML as shown above, these two approaches are identical.)

Best,
Wolfgang
-----Original Message-----
From: R-sig-meta-analysis <r-sig-meta-analysis-bounces at r-project.org> On Behalf Of Pearl, Brendan via R-sig-meta-analysis
Sent: Sunday, September 15, 2024 01:13
To: R Special Interest Group for Meta-Analysis <r-sig-meta-analysis at r-project.org>
Cc: Pearl, Brendan <Brendan.Pearl at mh.org.au>
Subject: Re: [R-meta] Constraint error when using Wald_test_cwb

Hi James,

Thank you for this. For anyone else who stumbles on this question, I also found that James answered it here: https://github.com/meghapsimatrix/wildmeta/issues/17#issuecomment-1833956635