
[R-meta] Selection models from *reported p-values*

Yashvin,

This is an interesting question, which highlights a potential limitation of
existing meta-analytic selection models (at least those that I'm aware of).

Just to add a thought to Wolfgang's response: the reason that it would be
difficult to modify existing selection models to work with observed
p-values is that current models assume that the p-value is a direct
function of the effect size estimate and its standard error, and the effect
size estimates are the _outcomes_ in the model. So the model implies a
_distribution_ of p-values based on the data-generating process, and we
need to know what that distribution is. In particular, to work with an
observed p-value, we would need to know how the observed p-value is
functionally related to the effect size estimate, and this will depend on
lots of details about the effect size metric, study design, and analytic
methods (your method of calculating the effect size estimate and the
authors' method of calculating p-values).
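To make the assumed functional relationship concrete, here is a small sketch (my own illustration, not from any particular package): selection models typically treat the p-value as a deterministic function of the estimate y and its standard error s, e.g. the two-sided Wald p-value. If the reported p-value was computed some other way, this mapping no longer holds.

```python
import math

def implied_p(y, s):
    """Two-sided p-value implied by a normal Wald test of y / s.

    P(|Z| > |y/s|) = erfc(|y/s| / sqrt(2)) for standard normal Z.
    """
    return math.erfc(abs(y / s) / math.sqrt(2))

# Hypothetical estimate and SE: the model would impute this p-value,
# whether or not it matches the p-value the primary study reported.
print(implied_p(0.4, 0.2))  # ~0.0455
```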

For some types of transformations, I think the discrepancies will be quite
small.
* For example, say that the author reports a p-value for an untransformed
correlation coefficient, but you meta-analyze the results based on Fisher
z-transformation. For r near zero, the SE of the untransformed coefficient
will be quite close to the SE of the z-transformed coefficient, so using
one or the other will not make much difference at all.
* For another example, say that you do a multiplicative reliability
correction to a correlation coefficient. In this case, the SE of the
corrected coefficient should also be multiplied by the reliability
correction (that is, if we're treating the correction as a fixed constant),
and so the ratio of the corrected correlation to the corrected SE will be
the same as the ratio of the uncorrected correlation to the uncorrected SE,
and the p-value should be the same in both cases.
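Both points can be checked with a little arithmetic. A quick numerical sketch (my own illustration; the sample size, correlation, and reliabilities are made-up values):

```python
import math

# 1) Raw correlation vs. Fisher z: for r near zero, the two test
# statistics (and hence the p-values) are nearly identical.
n, r = 100, 0.05                               # hypothetical n and correlation

stat_r = r * math.sqrt((n - 2) / (1 - r**2))   # usual t-statistic for r
stat_z = math.atanh(r) * math.sqrt(n - 3)      # Wald statistic on Fisher z scale
print(stat_r, stat_z)                          # very close for small r

# 2) Multiplicative reliability correction: dividing both the estimate
# and its SE by the same attenuation factor leaves their ratio unchanged,
# so the p-value is identical before and after the correction.
est, se = 0.30, 0.10                           # hypothetical estimate and SE
atten = math.sqrt(0.8 * 0.9)                   # hypothetical reliabilities
est_c, se_c = est / atten, se / atten
assert abs(est_c / se_c - est / se) < 1e-12
```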

Finally, here's a potentially more problematic/controversial
counter-example. Say that you are meta-analyzing standardized mean
differences from randomized experiments with pre-test and post-test data,
and for the sake of uniformity you are using a difference-in-differences
estimate for the numerator. But some of the primary studies use ANCOVA for
their analysis, so your effect size estimate, SE, and p-value will differ
from those based on the analysis reported in the primary study. Your analysis is
less precise than the primary study analysis, so your p-value will tend to
be larger than the primary study p-value. Further, maybe you are making an
assumption about the pre/post correlation rather than using the primary
study data to infer it, and this will introduce a further discrepancy.
Personally, I don't have a sense of how big a discrepancy in p-values you
can get in this situation. I think it's an interesting question that would
be worth looking into (and maybe carrying it through to investigating the
implications for the performance of meta-analytic selection models). But
pragmatically, the discrepancy could be resolved by using the information
from the primary analytic approach (ANCOVA) to calculate the effect size
estimate and its standard error, at least to the extent that this is
possible given the statistics reported in the primary study.
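To give a rough sense of the direction of the discrepancy, here is an analytic sketch under idealized assumptions (my own illustration: outcome variance 1, pre/post correlation rho, equal group sizes n per arm, normal approximation, and made-up numbers). The large-sample variance of the difference-in-differences estimate is 4(1 - rho)/n, while the ANCOVA variance is roughly 2(1 - rho^2)/n; since 1 - rho^2 = (1 - rho)(1 + rho) < 2(1 - rho) for rho < 1, ANCOVA is more precise, so the same true effect yields a larger DiD p-value.

```python
import math

def two_sided_p(stat):
    """Two-sided p-value from a standard-normal test statistic."""
    return math.erfc(abs(stat) / math.sqrt(2))

delta, rho, n = 0.3, 0.7, 50           # hypothetical effect, correlation, group size

se_did = math.sqrt(4 * (1 - rho) / n)          # difference-in-differences SE
se_ancova = math.sqrt(2 * (1 - rho**2) / n)    # large-sample ANCOVA SE

p_did = two_sided_p(delta / se_did)
p_ancova = two_sided_p(delta / se_ancova)
print(p_did, p_ancova)                 # the DiD p-value is the larger one
```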

Best,
James

On Tue, Mar 5, 2024 at 7:17 AM Viechtbauer, Wolfgang (NP) via
R-sig-meta-analysis <r-sig-meta-analysis at r-project.org> wrote:
