
glmmTMB: phi in betabinomial dispersion model

Honestly, the BB parameterization in glmmTMB was chosen because we 
(I) weren't aware of the other options. The one we use is the most 
"natural" way I can think of to construct the BB (compounding a binomial 
with its conjugate prior distribution, analogous to a 
Dirichlet-multinomial or negative binomial).

   Looking at the Prentice paper, it seems useful; two bits would be 
hard. First, rather than a simple closed-form expression, the likelihood 
is expressed as a combination of sums: for parameters p (prob) (with 
q = 1-p, I think) and gamma (dispersion), the log-likelihood of y 
successes out of n is

sum(i=0, y-1) log(p + gamma*i) + sum(i=0, n-y-1) log(q + gamma*i)
   - sum(i=0, n-1) log(1 + gamma*i)

  This is not as bad as log-likelihoods that have to be computed via 
infinite sums, but it will presumably be a lot slower than a likelihood 
that involves only log-factorial and log-beta functions, sums, and 
products.
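For concreteness, here is a short Python check (my own sketch, not glmmTMB code; the function names are made up) verifying that the sum form above agrees with the usual log-gamma form of the beta-binomial under the conjugate-compounding mapping a = p/gamma, b = q/gamma, which requires gamma > 0:

```python
import math

def bb_loglik_ab(y, n, a, b):
    """Standard beta-binomial log-likelihood in shape parameters (a, b),
    computed with log-gamma functions only."""
    return (math.lgamma(n + 1) - math.lgamma(y + 1) - math.lgamma(n - y + 1)
            + math.lgamma(y + a) + math.lgamma(n - y + b) + math.lgamma(a + b)
            - math.lgamma(a) - math.lgamma(b) - math.lgamma(n + a + b))

def prentice_loglik(y, n, p, gamma):
    """Sum form quoted above, plus the binomial coefficient (which the
    quoted expression drops, as it is constant in the parameters)."""
    q = 1.0 - p
    ll = math.lgamma(n + 1) - math.lgamma(y + 1) - math.lgamma(n - y + 1)
    ll += sum(math.log(p + gamma * i) for i in range(y))        # i = 0..y-1
    ll += sum(math.log(q + gamma * i) for i in range(n - y))    # i = 0..n-y-1
    ll -= sum(math.log(1 + gamma * i) for i in range(n))        # i = 0..n-1
    return ll

# The two agree for gamma > 0 under a = p/gamma, b = q/gamma:
y, n, p, gamma = 7, 20, 0.3, 0.5
print(prentice_loglik(y, n, p, gamma))
print(bb_loglik_ab(y, n, p / gamma, (1 - p) / gamma))
```

The interest of the sum form, as I understand it, is that it also remains defined for (small) negative gamma, i.e. underdispersion, where no beta mixing distribution exists.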

  The other thing that looks potentially tricky is that the lower bound 
of the dispersion parameter is data-dependent:

  gamma >= max{ -p/(n - 1), -q/(n - 1) }

   Since p, q, n will vary across a given data set (and p and q will 
depend on the parameters of the model for the conditional mean), it's 
not immediately obvious to me how we would implement this (glmmTMB 
typically works by fitting parameters on an unconstrained space) ... 
(there are some cruder approaches that would involve penalization ...)

   Feel free to open an issue in the glmmTMB github repository ...
On 1/14/21 3:30 AM, John Maindonald wrote: