
prior specification in MCMCglmm

4 messages · Ned Dochtermann, Celine Teplitsky, Pierre de Villemereuil

------------------------------
There's a mistake in my email below.
The generalization of an inverse-gamma prior for multi-response models
should, I think, just be:
list(V=diag(x), nu=1.002) 

Sorry about that,
Ned
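For concreteness, the corrected prior above could be written out as a full prior list in R (a sketch only: the object name `prior`, the choice x = 3, and using the same structure for both the G and R parts are illustrative assumptions, not part of the original suggestion):

```r
# Hedged sketch of the corrected inverse-gamma-style prior for a
# multi-response MCMCglmm model with x response variables.
x <- 3                                           # number of responses (illustrative)
prior <- list(
  G = list(G1 = list(V = diag(x), nu = 1.002)),  # random-effect (G) structure
  R = list(V = diag(x), nu = 1.002)              # residual (R) structure
)
dim(prior$G$G1$V)                                # x-by-x identity matrix
```

Such a list would then be passed to MCMCglmm() via its prior argument.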




-----Original Message-----
From: Ned Dochtermann [mailto:ned.dochtermann at gmail.com] 
Sent: Tuesday, April 26, 2011 1:12 PM
To: 'r-sig-mixed-models at r-project.org'; 'bonamy at horus.ens.fr'
Subject: Re: [R-sig-ME] prior specification in MCMCglmm

Celine and Pierre,

I too am still very unclear on priors; however, a month ago Jarrod replied
to some questions of mine regarding multi-response models with this:

	"From experience I find

	list(V=diag(2), nu=2, alpha.mu=c(0,0), alpha.V=diag(2)*a)

	where a is something large (e.g. 1000, depending on the scale of the
	data) works well for the two standard deviations and the correlation,
	in terms of informativeness.

	You can't use parameter-expanded priors for the residual term yet, so
	you will have to stick with the standard inverse-Wishart (or use
	another program). Generally, data are highly informative for the
	residual part, so often the posterior is not very sensitive to the
	prior specification. Nevertheless, you should check alternatives:

	V=diag(2), nu=1.002 gives the inverse-gamma prior for the variances
	with shape=scale=0.001
	V=diag(2)*1e-6, nu=3 is flat for the correlation from -1 to 1"
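Written out as R-structure prior components (a sketch; wrapping them in a named list this way follows MCMCglmm's usual prior-list convention, which is an assumption on my part), the two alternatives in the quote are:

```r
# Inverse-gamma-like prior on the residual variances (shape=scale=0.001):
prior_ig   <- list(R = list(V = diag(2), nu = 1.002))
# Approximately flat prior on the residual correlation from -1 to 1:
prior_flat <- list(R = list(V = diag(2) * 1e-6, nu = 3))
```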

I think this would generalize to G1=list(V=diag(x), nu=1.00x) for x
response variables for an inverse-gamma prior on G, but I'm not entirely
positive that is the case. I'm also not positive how to generalize the
flat prior (mainly, would nu stay at three?); the first one I assume
generalizes to list(V=diag(x), nu=x, alpha.mu=rep(0, x), alpha.V=diag(x)*a).
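In R, the proposed generalization of the parameter-expanded prior to x responses would be built like this (a sketch of the suggestion above, not a verified recommendation; a is the "something large" from Jarrod's quote, and x = 4 is purely illustrative):

```r
x <- 4                              # number of response variables (illustrative)
a <- 1000                           # large, depending on the scale of the data
G1 <- list(V        = diag(x),
           nu       = x,
           alpha.mu = rep(0, x),    # vector of x zeros
           alpha.V  = diag(x) * a)
length(G1$alpha.mu)                 # x
```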

The whole thread for this discussion starts here:
https://stat.ethz.ch/pipermail/r-sig-mixed-models/2011q1/005694.html

The whole prior issue is still a mystery to me, so I remain uncomfortable
with the approaches that require them; nonetheless, for some data I'm
analyzing they're basically unavoidable (e.g. multi-response generalized
mixed models). By and large, the univariate analyses I've conducted produce
highly concordant results whether fit via REML, ML, Laplace approximation,
or MCMC, and regardless of priors in the latter case. This
gives me a bit of confidence in the quantitative results and quite a bit of
confidence in the inferences. Unfortunately such comparisons aren't as
feasible (for me) with multi-response models.

Ned

--
Ned Dochtermann
Department of Biology
University of Nevada, Reno

ned.dochtermann at gmail.com
http://wolfweb.unr.edu/homepage/mpeacock/Ned.Dochtermann/
http://www.researcherid.com/rid/A-7146-2010
--

Message: 5
Date: Tue, 26 Apr 2011 09:41:58 +0200
From: "Pierre B. de Villemereuil" <bonamy at horus.ens.fr>
To: Celine Teplitsky <teplitsky at mnhn.fr>
Cc: r-sig-mixed-models at r-project.org
Subject: Re: [R-sig-ME] prior specification in MCMCglmm
Message-ID: <4DB67746.4000607 at horus.ens.fr>
Content-Type: text/plain

Dear Celine,

I'm not very comfortable with covariance priors, but my guess is that, in
this case, you really have to specify an inverse-Wishart as a prior.

You should look into Hadfield's article introducing MCMCglmm; they use
something like V=diag(dimV)/4 and nu=dimV. Why? I have no idea. If your
data are not standardized, I don't think you should divide by 4 (but then
you specify your prior as if your guess for the variance components is
that they all equal 1), but for the rest...
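For reference, the prior Pierre recalls from Hadfield's article would be constructed like this (a sketch; whether the division by 4 is appropriate for unstandardized data is exactly the open question above):

```r
dimV      <- 2                                    # dimension of the (co)variance matrix
prior_had <- list(V = diag(dimV) / 4, nu = dimV)  # V = diag(dimV)/4, nu = dimV
prior_had$V                                       # 0.25 on the diagonal
```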

Sorry I cannot help further. Maybe somebody else will be able to help
on this subject?

Pierre.


On 26/04/2011 09:17, Celine Teplitsky wrote:
------------------------------

Message: 6
Date: Tue, 26 Apr 2011 11:53:02 +0200
From: peter dalgaard <PDalgd at gmail.com>
To: Junqian Gordon Xu <xjqian at gmail.com>
Cc: r-sig-mixed-models at r-project.org
Subject: Re: [R-sig-ME] interaction term in null hypothesis
Message-ID: <E1153437-757B-42E9-9A72-89EF9A368101 at gmail.com>
Content-Type: text/plain; charset="us-ascii"
On Apr 26, 2011, at 00:05, Junqian Gordon Xu wrote:

It does make sense, at least sometimes. For one thing, such interactions are
often aliased to aspects of the experimental design; e.g., if you have
randomized different treatments to the left and right sides of test
subjects, then the random interaction is equivalent to the (random)
difference between the two sides within the same subject. Also, you could
conceivably have a kill-or-cure drug with positive effects for some and
negative effects for others, with the question being whether the total
effect is positive or negative.

------------------------------
Ned,

thanks a lot, I'll try that and compare results with the different priors.

What I'm just beginning to understand is that nu is not really a degree of
belief you can play with, but rather a specific parameter of a
distribution, right?

All the best

Celine

------------------------------
That's my understanding as well. I think that the shape and
"informativeness" of the prior are essentially conflated. For example, a
flat prior does not inform the posterior much, so its shape and the
amount of information it imparts are related.

Of course it is equally likely that I'm completely wrong on this. 

Ned

--
Ned Dochtermann
Department of Biology
University of Nevada, Reno

ned.dochtermann at gmail.com
http://wolfweb.unr.edu/homepage/mpeacock/Ned.Dochtermann/
http://www.researcherid.com/rid/A-7146-2010
--


-----Original Message-----
From: Celine Teplitsky [mailto:teplitsky at mnhn.fr] 
Sent: Wednesday, April 27, 2011 7:37 AM
To: Ned Dochtermann
Cc: r-sig-mixed-models at r-project.org; bonamy at horus.ens.fr
Subject: Re: [R-sig-ME] prior specification in MCMCglmm

Ned,

thanks a lot, I'll try that and compare results with the different priors.

What I'm just beginning to understand is that nu is not really a degree of
belief you can play with, but rather a specific parameter of a
distribution, right?

All the best

Celine
------------------------------
The relationship between shape and informativeness is not that easy, I
think. First, because "informativeness" is quite a difficult notion to
define. But also because everything depends on what you are looking for.

For example, a flat uniform prior on a standard deviation can lead to
overestimation, as discussed in Gelman (2006). Besides, the implied prior
on the variance will be the uniform squared, which is not flat anymore.
So which one has to be flat?
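This point is easy to check by simulation in base R: if the standard deviation is uniform on (0, A), the implied prior on the variance concentrates near zero, with half of its mass in the lowest quarter of the variance's range (0, A^2):

```r
set.seed(1)
A     <- 10
sigma <- runif(1e5, 0, A)    # flat prior on the standard deviation
v     <- sigma^2             # implied prior on the variance, on (0, A^2)
# A flat prior on v would put ~0.25 of the mass below A^2/4; here it is ~0.5:
mean(v < A^2 / 4)
```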

Or, another example: a weakly informative inverse-Gamma(0.001, 0.001) on
an additive variance component (in an animal model) will lead to a huge
mass of weight at 0 or 1 on the heritability scale, which could be a
problem (or not, depending on the data).
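This, too, can be checked by simulating from the prior in base R. Writing h2 = VA/(VA + VE) with VA = 1/gA and VE = 1/gE (gA, gE gamma draws) gives h2 = gE/(gA + gE), which avoids dividing by near-zero draws; the few 0/0 cases caused by numerical underflow of such an extreme gamma distribution are dropped:

```r
set.seed(42)
n  <- 1e5
gA <- rgamma(n, shape = 0.001, rate = 0.001)  # VA = 1/gA ~ inverse-Gamma(0.001, 0.001)
gE <- rgamma(n, shape = 0.001, rate = 0.001)  # VE = 1/gE ~ inverse-Gamma(0.001, 0.001)
h2 <- gE / (gA + gE)                          # = VA / (VA + VE), the heritability
h2 <- h2[is.finite(h2)]                       # drop 0/0 cases from underflow
# Almost all of the prior mass sits within 0.01 of 0 or 1:
mean(h2 < 0.01 | h2 > 0.99)
```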

Thus, I would say that one has to look carefully at the shape of the
prior (and compare it with the posterior to check the update), keeping in
mind what one wants to estimate (the standard deviation? the variance?
heritability?), because "the flatter, the better" is not always true on
every scale.

But considering the question of what "nu" is in MCMCglmm, it is hard to
answer without knowing how the distribution is really specified. If I
refer to the notation of the Wikipedia page on the inverse-Wishart, it
looks like nu is the canonical "degrees of freedom" (called m on that
page), but that V is defined like V=m*Psi (still referring to the
Wikipedia notation). This holds for the univariate case (p=1), but what
happens in the multivariate case?
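The univariate case, at least, is easy to check numerically with base R's rWishart: an inverse-Wishart with scale Psi and m degrees of freedom (Wikipedia's notation) reduces to an inverse-gamma with shape m/2 and scale Psi/2, whose mean is Psi/(m - 2). How MCMCglmm's (V, nu) maps onto (Psi, m) is the open question here and is not settled by this sketch:

```r
set.seed(7)
m   <- 5
Psi <- 2
# If W ~ Wishart(df = m, Sigma = Psi^-1), then 1/W ~ inverse-Wishart(Psi, m).
W  <- stats::rWishart(1e5, df = m, Sigma = matrix(1 / Psi))
iw <- 1 / W[1, 1, ]
mean(iw)            # close to Psi / (m - 2) = 2/3
```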

I think you should trust Jarrod on this subject, try the different
solutions Ned brought to us, and see what happens!

Pierre.

On 27/04/2011 18:40, Ned Dochtermann wrote: