Dear list,
I am trying to run a random effects meta-analysis in MCMCglmm. I
have a data set of 960 effect size estimates and their standard
errors for eight traits (120 estimates per trait).
What I want to know is whether traits differ both in their mean effect
size and in the variance of their effect sizes.
So I have created a diagonal matrix of the sampling variances (squared
standard errors, SE^2), used a singular value decomposition to take its
matrix square root, and built a model matrix (Z) from that, which I then
fit using idv(Z) whilst fixing the variance at 1. I have also fitted
trait as a random effect, and estimated a separate residual variance for
each trait. My code is below:
# Diagonal matrix of the sampling variances (squared standard errors)
Rmat<-matrix(0,nrow(data),nrow(data))
diag(Rmat)<-data$SE^2
# Matrix square root of Rmat via singular value decomposition
Rsvd<-svd(Rmat)
Rsvd<-Rsvd$v%*%(t(Rsvd$u)*sqrt(Rsvd$d))
# Design matrix linking each estimate to its own sampling-error effect
data$row<-factor(1:nrow(data))
Z<-model.matrix(~row-1, data)%*%Rsvd
data$Z<-Z
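# Optional sanity check: with the idv(Z) variance fixed at 1, the random
# effects implied by Z should reproduce the sampling variance matrix
all.equal(Z%*%t(Z), Rmat, check.attributes=FALSE)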
# Prior: parameter-expanded prior on the trait variance (G1), the idv(Z)
# variance fixed at 1 (G2), and separate residual variances for the 8 traits (R)
prior<-list(R=list(V=diag(8), nu=7),
            G=list(G1=list(V=diag(1), nu=1, alpha.mu=rep(0,1), alpha.V=diag(1)*1000),
                   G2=list(V=1, fix=1)))
m1<-MCMCglmm(estimate ~ 1, random=~Trait + idv(Z), rcov=~idh(Trait):units,
             data=data, prior=prior, family="gaussian",
             pr=TRUE, burnin=40000, thin=100, nitt=140000)
First, I wanted to check whether this approach seems sensible to people.
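For context, this is roughly how I was intending to compare the traits
from the posterior output. The trait names below are just placeholders
for my actual trait levels, and I am assuming the usual MCMCglmm column
naming (i.e. "<trait>.units" in m1$VCV for the idh residual variances,
and "Trait.<trait>" in m1$Sol for the trait effects when pr=TRUE):
# Posterior difference in effect-size variance between two (placeholder) traits
dvar<-mcmc(m1$VCV[,"trait1.units"] - m1$VCV[,"trait2.units"])
posterior.mode(dvar)
HPDinterval(dvar)
# Posterior difference in mean effect size between the same two traits
# (the intercept cancels out of the difference)
dmean<-mcmc(m1$Sol[,"Trait.trait1"] - m1$Sol[,"Trait.trait2"])
posterior.mode(dmean)
HPDinterval(dmean)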
One issue is that all of the estimates are quite small numbers, ranging
from 0 to 0.14. I have tried multiplying all of my data (effect sizes
and their SEs) by 10 and re-running exactly the same model, and this
seems to make the posterior traces look much better. So, secondly, I
wanted to get an opinion on whether this is a sensible thing to do.
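Concretely, the rescaling was just the following; since everything is
multiplied by 10, I believe any variance components from the rescaled
model should be 100 times those on the original scale, so they can be
back-transformed by dividing by 100:
# Multiply effect sizes and their standard errors by 10 before building Rmat and Z
data$estimate<-data$estimate*10
data$SE<-data$SE*10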
Many thanks in advance for any advice anyone can provide.
Best wishes,
Matt
------------------------------------------------------
Dr. Matt Robinson
NERC Research Fellow
Department of Animal and Plant Science
University of Sheffield
Alfred Denny Building, Western Bank
Sheffield, S10 2TN, UK
matthew.r.robinson at sheffield.ac.uk
tel: +44 (0)114 222 4707
fax: +44 (0)114 222 0002
------------------------------------------------------