
Comparing variance components

2 messages · Dean Castillo, Jarrod Hadfield

Hi Jarrod,

I have been trying to test a similar hypothesis for a while with only
limited success, so I wanted to thank you for your answer to Wen's
question. I did have a few additional clarifying questions, some
specific to my own data analysis.

For model.mcmc.a I think it is pretty straightforward to compare the
posterior distributions. For model.mcmc.b, now that you have explicitly
modeled heterogeneous residual variances, is it more informative to
examine the intra-class correlation for G in each Exp rather than the
variances themselves?

For my specific problem I have binomial data. I have been modeling the
data as proportions (bounded by 0 and 1, but they can be logit or
arcsine-square-root transformed) as well as modeling the raw data using
"multinomial2".

The issue I am running into is that, when modeled as a proportion, the
variance of G for one of the experimental blocks is very close to the
boundary condition while the other is larger, and the HPD intervals are
very wide. I have been using inverse-gamma priors and will play around
with the parameter-expanded priors suggested in your course notes.

For the multinomial2 model, should the priors for the variance be the
same? The posterior means are not as close to the boundary condition
(Exp1: G=0.8, Exp2: G=0.3), but the HPD intervals are still very wide
(1e-17, 1.13) for Exp2:G.

Any help is greatly appreciated.

Dean

  
  
1 day later
Hi,

Whether you should focus on differences in the intra-class correlation 
or the raw variances depends on the question. If the difference is 
suspected to be due to differences in scale, then I guess testing the 
IC would make more sense, because a change in scale would increase both 
variance components proportionally and the IC should remain the same. 
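The IC posterior can be computed draw by draw from the variance-component samples. A minimal sketch (not from the thread): the column names are hypothetical, and simulated draws stand in for the columns of model.mcmc.b$VCV you would use in practice.

```r
# Posterior of the intra-class correlation from variance-component draws.
# The draws below are simulated stand-ins; with a real fit, replace them
# with the appropriate columns of model.mcmc.b$VCV (names hypothetical).
set.seed(1)
VCV <- cbind(G.Exp1     = rgamma(1000, 2, 2),  # stand-in G draws for Exp 1
             units.Exp1 = rgamma(1000, 2, 2))  # stand-in residual draws
ic <- VCV[, "G.Exp1"] / (VCV[, "G.Exp1"] + VCV[, "units.Exp1"])
c(mean = mean(ic), quantile(ic, c(0.025, 0.975)))
```

Because the ratio is taken per draw, the resulting vector is itself a posterior sample of the IC, so its mean and HPD/quantile interval come for free.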
There is actually a way of modelling just the scale differences in 
MCMCglmm using path analysis. It sounds odd, but just regress the 
response against itself in one of the experimental blocks. This works 
because y = y*beta + eta, after a little rearrangement, becomes

y = eta/(1-beta)

and so 1/(1-beta), where beta is the path coefficient, is an estimate of 
the scale (and 1/(1-beta)^2 of the change in variance). For example, 
let's say Vg=Ve=1 in the first block but Vg=Ve=4 in the second (i.e. the 
scale is 2).

library(MCMCglmm)

set.seed(1)        # for reproducibility
G <- gl(50, 6)     # 50 groups of 6 observations each
g <- rnorm(50)     # group effects, Vg = 1
Exp <- gl(2, 150)  # two experimental blocks of 150

y <- rnorm(300) + g[G]                        # Vg = Ve = 1

y[which(Exp == 2)] <- 2 * y[which(Exp == 2)]  # scale block 2 by 2

my_data <- data.frame(y = y, Exp = Exp, G = G)

model.mcmc.c <- MCMCglmm(y ~ sir(~units:at.level(Exp, 2), ~units:at.level(Exp, 2)),
                         random = ~G, rcov = ~units, data = my_data)

plot(1/(1 - model.mcmc.c$Lambda))             # posterior of the scale estimate

Note that you can't just add y as a predictor in standard (g)lm(m) 
software because the likelihood is not in standard form.

Regarding the variances, I usually use scaled F(1,1) priors in binomial 
models too. However, the long tail can be a bit of a problem in these 
models if the data set is small. In extreme cases it can lead to 
numerical problems: check that the latent variables don't lie outside 
the range (-25, 25) for logit models. Reducing the scale in the prior 
can help.
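For reference, a parameter-expanded prior of the kind described in the course notes might be specified as below; treat the particular values as assumptions to adapt, not a recommendation from this thread.

```r
# Sketch of a parameter-expanded prior for one G variance, giving a
# scaled F(1,1) prior (as in the MCMCglmm course notes). alpha.V is the
# squared prior scale: reduce it (e.g. 1000 -> 1) if the heavy tail
# causes numerical problems. Fixing the residual variance at 1 is the
# usual choice for binomial/multinomial2 models, where it is not
# identified from the data; adapt as needed.
prior.px <- list(
  R = list(V = 1, fix = 1),
  G = list(G1 = list(V = 1, nu = 1, alpha.mu = 0, alpha.V = 1000))
)
```

This prior.px list would then be passed to MCMCglmm() via its prior argument; with two block-specific G variances you would add a second element (G2) specified in the same way.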

Cheers,

Jarrod
On 17/02/2016 16:15, Dean Castillo wrote: