
[R-meta] rma.mv metafor models for genome-wide meta-analyses are (too) conservative

Dear Wolfgang,

Thanks so much for getting back about this. 

*1) You have 130 estimates (of something) within each of 23 cohorts, so the
input to yi in the model below is of length 130*23=2990, correct?*
No, we have 130 estimates from 23 cohorts in total, i.e. 130 summary
statistics. Not all cohorts have all combinations of observations.

*2) And Vsampoverlap is therefore a 2990x2990 variance-covariance matrix,
which is block-diagonal with 23 130*130 blocks, correct?*
No, we have a 130 x 130 variance-covariance matrix with 23 blocks (i.e.
cohorts with repeated measures).

*3) Then you use the results from this model to compute 8 million predicted
values (for various combinations of COV1, COV2, and COV3) and compute 'z =
pred / se', correct?*
Correct. 
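For reference, the z-to-p step in (3) amounts to the following (toy numbers, not from the actual analysis; the real pred and se would come from predict() on the fitted rma.mv model):

```r
# Hypothetical predicted values and their standard errors, standing in
# for the output of predict() on the fitted model.
pred <- c(0.20, -0.05)
se   <- c(0.10, 0.10)

z <- pred / se              # z = pred / se
p <- 2 * pnorm(-abs(z))     # two-sided p-value from the standard normal
round(z, 2)                 # 2.00 -0.50
```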

*4) Then you examine the distribution of these 8 million z-values and find
that they are too conservative (i.e., they are not quite standard normal,
but have an SD that is somewhat larger than 1 / more than 5% of the z-values
are >= +-1.96), correct?*

I guess so; we used other statistics to check this genome-wide: QQ plots,
the median chi-square, and another measure of population stratification
(the LDSC h2 intercept).
As a first remedy, we have now corrected for the actual sample overlap
within each block, and this improved the p-value distribution considerably,
judging from the QQ plots and the median chi-square.
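The median chi-square check can be sketched in a few lines of base R (with simulated z-values standing in for the 8 million real ones; whether this matches the exact genome-wide statistic used here is an assumption on my part):

```r
# Under the null, z^2 ~ chi-square(1), so the median of z^2 should be
# about qchisq(0.5, df = 1) ~= 0.455. lambda > 1 suggests inflation;
# lambda < 1 suggests over-conservative (over-corrected) p-values.
set.seed(1)
z <- rnorm(1e5)   # stand-in for the genome-wide z-values

lambda <- median(z^2) / qchisq(0.5, df = 1)
round(lambda, 2)  # close to 1 for standard-normal z
```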

*I hope I got all of this correct. I am not familiar with the specifics
of this application, but here are some thoughts.*
Thank you!

*First of all, it is not entirely clear to me why you would expect the
predicted values to be standard normal in the first place. Maybe this is
similar to GWAS (in a single sample) where one would expect the vast
majority of SNPs not to be associated with some phenotype of interest so the
p-values should form a roughly uniform distribution, but this may not be
true when there is population stratification (or other things) going on and
one can use the distribution of p-values to check for this.*

Yes, this is what we did. We checked QQ plots, the median chi-square, and
another measure of population stratification (the LDSC h2 intercept). We see
no evidence of population stratification. Thus, there is no inflation of the
p-values but rather an over-correction. However, we are now getting closer
to a standard normal distribution.

*You also wrote that the correlation matrix is 'scaled to the SE of each
BETA'. So does the diagonal of Vsampoverlap correspond to the standard
errors of the estimates or their sampling variances (squared standard
errors)? It should be the latter.*

That is correct; the diagonal of Vsampoverlap corresponds to the sampling
variances:
diag(SE) %*% as.matrix(corr) %*% diag(SE)
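A toy example of this construction for a single block (made-up numbers), confirming that the diagonal of the resulting V holds the sampling variances, i.e. the squared standard errors, as rma.mv() requires:

```r
# Hypothetical block for one cohort with three overlapping estimates:
# 'corr' is the assumed sampling correlation matrix, 'SE' the standard
# errors of the three estimates.
SE   <- c(0.10, 0.12, 0.08)
corr <- matrix(c(1.0, 0.5, 0.3,
                 0.5, 1.0, 0.4,
                 0.3, 0.4, 1.0), nrow = 3)

# Scale the correlation matrix by the SEs to get the var-cov block.
V <- diag(SE) %*% corr %*% diag(SE)

diag(V)   # equals SE^2, i.e. the sampling variances
```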


*The random effects structure of the model is quite simple, assuming a
constant correlation for every pair of estimates and a constant amount of
heterogeneity for the various estimates. These may be fairly sensible or
rather strict assumptions, depending on the application. Relaxing these
assumptions is tricky though, because with 130 different estimates, even
allowing the amount of heterogeneity to differ would introduce 129
additional parameters into the model and let's not even think about a fully
unstructured var-cov matrix for the random effects.

An interesting attempt might be to try:

res <- robust(model_cov, cluster=COHORT)

(or possibly also adding clubSandwich=TRUE) and then using 'res' for
obtaining the predicted values, but I am not sure how well cluster-robust
inference methods will handle 130 estimates with 'only' 23 cohorts. The
idea is that even if Vsampoverlap and/or the random effects structure is
misspecified, cluster-robust inference methods should give you
asymptotically correct standard errors / tests. But I somewhat doubt that
this works with so many estimates relative to the number of cohorts. Still
worth a try though.*

Thanks for the idea about the robust model. We will have a look!
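For intuition, the cluster-robust idea behind robust() can be illustrated in base R with a toy intercept-only example (this is the CR0 sandwich estimator for a simple mean, not metafor's actual implementation; the data are simulated):

```r
# 23 "cohorts" with 5 estimates each; a shared cohort effect induces
# within-cohort correlation that a naive SE ignores.
set.seed(42)
cluster <- rep(1:23, each = 5)
u <- rnorm(23)[cluster]                  # shared cohort effect
y <- 0.1 + u + rnorm(115, sd = 0.5)

est <- mean(y)                           # intercept-only "model"
e   <- y - est
n   <- length(y)

# CR0 sandwich: variance built from cluster-level residual sums,
# so it is valid even if the within-cluster correlation is misspecified.
v_cr  <- sum(tapply(e, cluster, sum)^2) / n^2
v_iid <- var(y) / n                      # naive variance ignoring clustering

c(se_cr = sqrt(v_cr), se_iid = sqrt(v_iid))
```

With positive within-cluster correlation, the cluster-robust SE comes out larger than the naive one; with only 23 clusters, small-sample adjustments such as those in clubSandwich become important, which is presumably why clubSandwich=TRUE was suggested.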

Best wishes,
Beate
 






Beate St Pourcain, PhD
Senior Investigator & Group Leader
Room A207
Max Planck Institute for Psycholinguistics | Wundtlaan 1 | 6525 XD Nijmegen
| The Netherlands


@bstpourcain
Tel: +31 24 3521964
Fax: +31 24 3521213
ORCID: https://orcid.org/0000-0002-4680-3517
Web:
https://www.mpi.nl/departments/language-and-genetics/projects/population-variation-and-human-communication/
Further affiliations with:
MRC Integrative Epidemiology Unit | University of Bristol | UK
Donders Institute for Brain, Cognition and Behaviour | Radboud University |
The Netherlands

My working hours may not be your working hours. Please do not feel obligated
to reply outside of your normal working schedule.

-----Original Message-----
From: Viechtbauer, Wolfgang (NP)
<wolfgang.viechtbauer at maastrichtuniversity.nl> 
Sent: Monday, 19 February 2024 12:20
To: R Special Interest Group for Meta-Analysis
<r-sig-meta-analysis at r-project.org>
Cc: St Pourcain, Beate <Beate.StPourcain at mpi.nl>
Subject: RE: rma.mv metafor models for genome-wide meta-analyses are (too)
conservative

Dear Beate,

Just so I understand this correctly:

1) You have 130 estimates (of something) within each of 23 cohorts, so the
input to yi in the model below is of length 130*23=2990, correct?

2) And Vsampoverlap is therefore a 2990x2990 variance-covariance matrix,
which is block-diagonal with 23 130*130 blocks, correct?

3) Then you use the results from this model to compute 8 million predicted
values (for various combinations of COV1, COV2, and COV3) and compute 'z =
pred / se', correct?

4) Then you examine the distribution of these 8 million z-values and find
that they are too conservative (i.e., they are not quite standard normal,
but have an SD that is somewhat larger than 1 / more than 5% of the z-values
are >= +-1.96), correct?

I hope I got all of this correct. I am not familiar with the specifics of
this application, but here are some thoughts.

First of all, it is not entirely clear to me why you would expect the
predicted values to be standard normal in the first place. Maybe this is
similar to GWAS (in a single sample) where one would expect the vast
majority of SNPs not to be associated with some phenotype of interest so the
p-values should form a roughly uniform distribution, but this may not be
true when there is population stratification (or other things) going on and
one can use the distribution of p-values to check for this.

You also wrote that the correlation matrix is 'scaled to the SE of each
BETA'. So does the diagonal of Vsampoverlap correspond to the standard
errors of the estimates or their sampling variances (squared standard
errors)? It should be the latter.

The random effects structure of the model is quite simple, assuming a
constant correlation for every pair of estimates and a constant amount of
heterogeneity for the various estimates. These may be fairly sensible or
rather strict assumptions, depending on the application. Relaxing these
assumptions is tricky though, because with 130 different estimates, even
allowing the amount of heterogeneity to differ would introduce 129
additional parameters into the model and let's not even think about a fully
unstructured var-cov matrix for the random effects.

An interesting attempt might be to try:

res <- robust(model_cov, cluster=COHORT)

(or possibly also adding clubSandwich=TRUE) and then using 'res' for
obtaining the predicted values, but I am not sure how well cluster-robust
inference methods will handle 130 estimates with 'only' 23 cohorts. The
idea is that even if Vsampoverlap and/or the random effects structure is
misspecified, cluster-robust inference methods should give you
asymptotically correct standard errors / tests. But I somewhat doubt that
this works with so many estimates relative to the number of cohorts. Still
worth a try though.

Maybe some of these comments will be helpful or at least spark a bit more
discussion.

Best,
Wolfgang