
understanding log-likelihood/model fit

I'll take a swing at these.
On Wed, Aug 20, 2008 at 02:01:29PM +0100, Daniel Ezra Johnson wrote:
A common way for hierarchical models to be fit using ML is by
profiling out the fixed effects, estimating the random effects, and
then using GLS to estimate the fixed effects conditional on the random
effects.  So, any explanatory capacity that the fixed effects offer is
deployed before the random effects are invoked.  

Likewise, a popular way of applying REML is to fit the fixed effects
using OLS, then estimate the random effects from the residuals.
Again, the net effect is that any explanatory capacity that the fixed
effects offer is deployed before the random effects are invoked.
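The OLS-then-residuals recipe can be sketched numerically. This is a minimal illustration in Python (the data are made up, the layout is a balanced one-way design, and the estimators are the classical ANOVA/method-of-moments ones, not lme4's actual algorithm):

```python
from statistics import mean

# 3 groups x 4 observations (hypothetical data)
groups = [
    [9.8, 10.1, 10.3, 9.9],
    [12.0, 11.7, 12.2, 12.1],
    [8.1, 7.9, 8.0, 8.2],
]
k, n = len(groups), len(groups[0])

# Step 1: fit the fixed effects by OLS.  With an intercept-only
# model this is just the grand mean.
grand = mean(x for g in groups for x in g)

# Step 2: estimate the variance components from the residuals
# (method-of-moments estimators for the balanced one-way layout).
resid = [[x - grand for x in g] for g in groups]
gmeans = [mean(r) for r in resid]

# within-group mean square (residual variance estimate)
msw = sum((x - m) ** 2 for r, m in zip(resid, gmeans) for x in r) / (k * (n - 1))
# between-group mean square; the residual group means are centred on 0
msb = n * sum(m ** 2 for m in gmeans) / (k - 1)

sigma2_e = msw                         # within-group (residual) variance
sigma2_u = max((msb - msw) / n, 0.0)   # between-group (random-effect) variance
print(sigma2_e, sigma2_u)
```

The point is only the ordering: the fixed part is fitted first, and the random-effect variance is whatever structure is left over in the residuals.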

The deviance is computed from the log likelihood of the data,
conditional on the model.  The LL of the null model is maximized by
making the variance components big enough to cover the variation in
the data.  But this means that the likelihood is being spread thinly,
as it were.  Eg ...
[1] 0.3989423
[1] 0.1994711
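The call that produced those two numbers was trimmed from the message, but they match the normal density at its mean for sd = 1 and sd = 2 exactly: doubling the sd halves the height of the curve. A short Python sketch reproduces them (the `dnorm` here is a hand-rolled stand-in for R's `dnorm`):

```python
import math

def dnorm(x, mean=0.0, sd=1.0):
    """Normal density, matching R's dnorm(x, mean, sd)."""
    z = (x - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2 * math.pi))

print(round(dnorm(0, sd=1), 7))  # 0.3989423
print(round(dnorm(0, sd=2), 7))  # 0.1994711
```

Bigger variance components mean a flatter density, so each observation contributes less to the log likelihood.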

On the other hand, fixed1 uses fixed effects to explain a lot of that
variation, so by the time the random effects are estimated they are
smaller, and the LL is higher because the likelihood doesn't have to
stretch so far.
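To make that concrete, here is a hypothetical fixed1-style comparison in Python (invented data, ML variance estimates): absorbing the variation into condition means shrinks the residual sd, which makes the density taller at each observation and so raises the total log likelihood.

```python
import math

def norm_loglik(ys, mus, sd):
    """Total Gaussian log likelihood of ys about fitted means mus."""
    return sum(-0.5 * math.log(2 * math.pi * sd * sd)
               - (y - mu) ** 2 / (2 * sd * sd)
               for y, mu in zip(ys, mus))

# hypothetical data: two conditions with clearly different means
y = [1.0, 1.2, 0.9, 1.1, 3.0, 3.2, 2.9, 3.1]
x = [0, 0, 0, 0, 1, 1, 1, 1]

# "null": one overall mean, variance inflated to cover everything
mu0 = sum(y) / len(y)
sd0 = math.sqrt(sum((v - mu0) ** 2 for v in y) / len(y))
ll_null = norm_loglik(y, [mu0] * len(y), sd0)

# "fixed1": condition means absorb most of the variation,
# leaving a much smaller residual sd
m0 = sum(v for v, g in zip(y, x) if g == 0) / x.count(0)
m1 = sum(v for v, g in zip(y, x) if g == 1) / x.count(1)
fit = [m0 if g == 0 else m1 for g in x]
sd1 = math.sqrt(sum((v - f) ** 2 for v, f in zip(y, fit)) / len(y))
ll_fixed1 = norm_loglik(y, fit, sd1)

print(ll_null, ll_fixed1)  # ll_fixed1 is substantially higher
```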

Cheers,

Andrew