
Modelling heterogeneous variances for an interaction term in MCMCglmm?

Hi Jackie,

The reason that it does not run is that each observation needs to be  
associated with a single effect. For ease, imagine two factors, fac1  
and fac2, with two levels each (A and B, and C and D respectively).  
MCMCglmm removes the intercept from the formula inside variance  
functions (e.g. idh), so the R-structure model is

fac1*fac2-1

which for the four types of observation gives:

    fac1A fac1B fac2D fac1B:fac2D
AC     1     0     0           0
AD     1     0     1           0
BC     0     1     0           0
BD     0     1     1           1
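
You can see this expansion, and the variance it implies, directly with  
model.matrix; the little data frame below is just the four factor  
combinations, and V is a made-up covariance matrix for illustration:

```r
# The four combinations of two 2-level factors (rows AC, AD, BC, BD)
d <- expand.grid(fac2 = factor(c("C", "D")),
                 fac1 = factor(c("A", "B")))

# The design matrix MCMCglmm forms once the intercept is dropped
X <- model.matrix(~ fac1 * fac2 - 1, data = d)
colnames(X)  # "fac1A" "fac1B" "fac2D" "fac1B:fac2D"

# A made-up symmetric covariance matrix V for those four columns
V <- matrix(c(1.0, 0.0, 0.3, 0.0,
              0.0, 1.0, 0.0, 0.0,
              0.3, 0.0, 2.0, 0.0,
              0.0, 0.0, 0.0, 1.0), 4, 4)

# The implied residual variance for an AD observation is x V x',
# i.e. V[1,1] + V[3,3] + V[1,3] + V[3,1]
x <- X[2, ]            # the AD row: (1, 0, 1, 0)
drop(x %*% V %*% x)    # 1.0 + 2.0 + 0.3 + 0.3 = 3.6
```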

This means that the (residual) variance for AD is actually  
V[1,1]+V[3,3]+V[1,3]+V[3,1], where V is the estimated covariance  
matrix. This makes interpretation difficult, and in fact MCMCglmm  
does not allow you to fit this type of R-structure (although it does  
allow G-structures of this form).


Perhaps a better way of doing it is to fit idh(fac1:fac2) because this gives

    fac1A:fac2C fac1A:fac2D fac1B:fac2C fac1B:fac2D
AC           1           0           0           0
AD           0           1           0           0
BC           0           0           1           0
BD           0           0           0           1
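
The same toy data frame shows this coding; model.matrix may order the  
columns differently from the table above, but the content is identical:

```r
# The four combinations of two 2-level factors (rows AC, AD, BC, BD)
d <- expand.grid(fac2 = factor(c("C", "D")),
                 fac1 = factor(c("A", "B")))

# Cell-means coding: one indicator column per factor combination
X <- model.matrix(~ fac1:fac2 - 1, data = d)

# Every observation maps to exactly one column
rowSums(X)  # all 1
```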

which means that there is a one-to-one mapping between the diagonal  
elements of V and the variance within each factor combination.  
MCMCglmm will allow you to fit this type of R-structure. The next  
version of MCMCglmm (which will probably be released next week) will  
issue more sensible warnings when invalid R-structures are specified.

Note, however, that if you fit idh(fac1:fac2) for all sources of  
variance (residual and random effects) and you have fac1:fac2-1 in the  
fixed formula, then the analysis is exactly equivalent to fitting  
separate models to the data from each factor combination.
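
The fixed-effect side of that equivalence can be illustrated with  
ordinary lm on made-up data: the cell-means model y ~ fac1:fac2 - 1  
recovers exactly the per-combination means you would get by analysing  
each subset on its own (idh then supplies the separate per-combination  
variances that a single pooled residual variance cannot):

```r
set.seed(1)
# Made-up data: 5 observations in each of the 4 factor combinations
d <- expand.grid(fac1 = factor(c("A", "B")),
                 fac2 = factor(c("C", "D")))[rep(1:4, each = 5), ]
d$y <- rnorm(nrow(d))

# One coefficient per factor combination from the joint model
joint <- coef(lm(y ~ fac1:fac2 - 1, data = d))

# The mean of each combination analysed separately
separate <- sapply(split(d$y, list(d$fac1, d$fac2)), mean)

all.equal(unname(joint), unname(separate))  # TRUE
```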

Applying a function to a posterior distribution results in a valid new  
posterior distribution. It therefore makes more sense to evaluate the  
functions for the folded normal at each MCMC iteration, giving a  
posterior distribution for the expected absolute value of selection.  
This is advantageous because mean(f(x)) is not always equal to  
f(mean(x)) when f is non-linear, and it also allows you to use things  
like HPDinterval to quantify the uncertainty in your inferences. Note  
that it is hard to know what influence a prior placed directly on x  
will have on inferences about x, let alone on f(x).
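
As a sketch of that recipe: folded_mean below is the standard mean of a  
folded normal, and mu_post / v_post are made-up posterior draws standing  
in for your actual MCMC output:

```r
# Mean of |X| when X ~ N(mu, v): the folded-normal expectation
folded_mean <- function(mu, v) {
  s <- sqrt(v)
  s * sqrt(2 / pi) * exp(-mu^2 / (2 * v)) + mu * (1 - 2 * pnorm(-mu / s))
}

set.seed(1)
# Made-up posterior draws of a mean and a variance
mu_post <- rnorm(1000, 0.1, 0.05)
v_post  <- 1 / rgamma(1000, shape = 10, rate = 2)

# Evaluate the function at every iteration: a posterior for E|x|
abs_post <- folded_mean(mu_post, v_post)

mean(abs_post)                             # posterior mean of f(x)
folded_mean(mean(mu_post), mean(v_post))   # f(mean(x)): not the same thing
quantile(abs_post, c(0.025, 0.975))        # or coda::HPDinterval(coda::mcmc(abs_post))
```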

Cheers,

Jarrod

Quoting Jackie Wood <jackiewood7 at gmail.com> on Thu, 13 Feb 2014  
12:44:24 -0500: