
[R-meta] Assessing selection bias / multivariate meta-analysis

3 messages · James Pustejovsky, Pia-Magdalena Schmidt

Hi Pia,

You can ignore the warning messages that you're getting. (We haven't yet
worked out how to suppress them in the bootstrapping code.)

Your code looks good to me except for one subtle but potentially
consequential issue. Based on the multivariate summary meta-analyses, it
looks like you have a strongly negative average effect size. If the effects
are coded so that negative values represent improvement, then this needs to
be taken into account when fitting the selection models. The models
implemented in metaselection are based on one-sided p-values, where the null
is mu <= 0 and the alternative is mu > 0 (i.e., positive effects are
improvements). We have not yet implemented an option to change the
direction of the null and alternative hypotheses (although it's high on my
to-do list). In the meantime, there are a few alternative ways that you
could modify the existing code to fit a more appropriate model. Either:
a) Recode effect sizes so that positive values correspond to improvement
(i.e., take yi = -yi) and interpret the beta estimate accordingly.
or
b) Change the threshold of the step function to steps = .975 and interpret
the lambda (selection parameter) estimate as the relative probability that
a statistically significant (and negative) effect size is reported compared
to the probability that a non-significant or counter-therapeutic effect
size is reported. For instance, if lambda = 4 then this means that a
statistically significant, therapeutic effect is four times more likely to
be reported than a non-significant or non-therapeutic effect.
Again, you only need to do (a) or (b), but not both. It's possible that
making this change will shift the results in a meaningful way, because the
revised model will capture a more plausible form of selective reporting.
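To see why options (a) and (b) are interchangeable, note that negating an effect size flips its one-sided p-value to 1 - p, so a selection step at .025 for the recoded effects plays the same role as a step at .975 for the original ones. A minimal base-R check, using hypothetical yi and sei values (not taken from any actual dataset):

```r
# One-sided p-value for H0: mu <= 0 based on a z statistic
yi  <- -0.8   # hypothetical effect size (negative = therapeutic, original coding)
sei <- 0.3    # hypothetical standard error

p_orig <- 1 - pnorm(yi / sei)    # p-value under the original coding
p_flip <- 1 - pnorm(-yi / sei)   # p-value after recoding yi = -yi (option a)

# The two p-values are complementary, so thresholding p_flip at .025
# is the same as thresholding p_orig at .975 (option b).
p_orig + p_flip
```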

James

On Thu, Nov 28, 2024 at 11:24 AM Pia-Magdalena Schmidt <
pia-magdalena.schmidt at uni-bonn.de> wrote:

2 days later
Dear James,

Thank you so much for your detailed reply.

In fact, the negative effect size does not represent an improvement but a
deterioration. However, I would still need to change the model-fitting
code, since the issue with the one-sided p-values and the direction of the
hypotheses described in your email remains. Unfortunately, I'm not sure
which of your suggestions (a or b) I should go for. I have tried both
(results below). Do you have further recommendations?

Best,
Pia


a)
1.
# 3PSM-bootstrap
set.seed(20240916)

system.time(
  mod_3PSM_boot <- selection_model(
    data = DatMA,
    yi = ES_all_spv$yi * -1,
    sei = sei,
    cluster = id_database,
    selection_type = "step",
    steps = .025,
    CI_type = "percentile",
    bootstrap = "multinomial",
    R = 1999
  )
)
print(mod_3PSM_boot, transf_gamma = TRUE, transf_zeta = TRUE)

  param       Est     SE  percentile_lower  percentile_upper
  beta      0.702  0.159            0.3011             1.059
  tau2      0.325  0.186            0.0893             1.756
  lambda1   0.165  0.138            0.0244             0.772


2.
# 3PSM-bootstrap
set.seed(20240916)

system.time(
  mod_3PSM_boot <- selection_model(
    data = DatMA,
    yi = ES_LOR_spv$yi * -1,
    sei = sei,
    cluster = id_database,
    selection_type = "step",
    steps = .025,
    CI_type = "percentile",
    bootstrap = "multinomial",
    R = 1999
  )
)

print(mod_3PSM_boot, transf_gamma = TRUE, transf_zeta = TRUE)
  param       Est     SE  percentile_lower  percentile_upper
  beta      0.529  0.163         -2.11e-01             0.876
  tau2      0.210  0.119          3.07e-02             0.731
  lambda1   0.157  0.144          6.00e-17             0.697




b)
1.
system.time(
  mod_3PSM_boot <- selection_model(
    data = DatMA,
    yi = ES_all_spv$yi,
    sei = sei,
    cluster = id_database,
    selection_type = "step",
    steps = .975,
    CI_type = "percentile",
    bootstrap = "multinomial",
    R = 1999
  )
)

print(mod_3PSM_boot, transf_gamma = TRUE, transf_zeta = TRUE)
  param       Est     SE  percentile_lower  percentile_upper
  beta     -0.702  0.159           -1.0594            -0.301
  tau2      0.325  0.186            0.0893             1.756
  lambda1   6.073  5.092            1.2949            40.908



2.
# 3PSM-bootstrap
set.seed(20240916)
## Starting point of the random number generator, referred to as the seed.

system.time(
  mod_3PSM_boot <- selection_model(
    data = DatMA,
    yi = ES_LOR_spv$yi,
    sei = sei,
    cluster = id_database,
    selection_type = "step",
    steps = .975,
    CI_type = "percentile",
    bootstrap = "multinomial",
    R = 1999
  )
)

print(mod_3PSM_boot, transf_gamma = TRUE, transf_zeta = TRUE)
  param       Est     SE  percentile_lower  percentile_upper
  beta     -0.529  0.163           -0.8756          2.11e-01
  tau2      0.210  0.119            0.0307          7.31e-01
  lambda1   6.363  5.848            1.4342          3.18e+15



On Sun, 1 Dec 2024 10:18:05 -0600
James Pustejovsky <jepusto at gmail.com> wrote:
Hi Pia,

The two methods give equivalent information when accounting for the
transformations involved:
* The beta estimate using (a) is the negative of the beta estimate using
(b) (and the CIs are also just negated)
* The tau2 estimates are identical using (a) and (b)
* The lambda1 estimate using (a) is the inverse of the lambda1 estimate
using (b) (and the CIs are also just inverted)
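As a spot check, these relationships can be verified directly from the estimates printed earlier in this thread for model 1 (the small discrepancy in lambda is just printed rounding):

```r
# Estimates from option (a), model 1 (copied from the printout above)
beta_a   <- 0.702
tau2_a   <- 0.325
lambda_a <- 0.165

# Estimates from option (b), model 1 (copied from the printout above)
beta_b   <- -0.702
tau2_b   <-  0.325
lambda_b <-  6.073

beta_a == -beta_b   # beta flips sign between the two parameterizations
tau2_a == tau2_b    # tau2 is unchanged
1 / lambda_a        # about 6.06, matching lambda_b up to printed rounding
```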

Because they provide equivalent information, I'd suggest presenting
the model results using whichever way makes more sense to you and aligns
most easily with the other analyses you're doing. Personally, I would
use (b), so that the beta estimate from the 3PSM can be compared directly
to the beta estimate without any selective-reporting adjustment.

James

On Tue, Dec 3, 2024 at 2:19 PM Pia-Magdalena Schmidt <
pia-magdalena.schmidt at uni-bonn.de> wrote: