
Teaching Mixed Effects

Wow.
  Two very small points:
Even if we are not p-value obsessed, we would still presumably
like to be able to make some kind of (even informal) inference from
the difference in fits, perhaps at the level of "model 1 fits
(much better|a little better|about the same|a little worse|much worse)
than model 2", or "the range of plausible estimates for this
parameter is (tiny|small|moderate|large|absurdly large)". To
do that we need some kind of metric (if we have not yet fled
to Bayesian or quasi-Bayesian methods) for the range of
the deviance under some kind of null case -- for example,
where should we set cutoff levels on the likelihood profile
to determine confidence regions for parameters? Parametric
bootstrap makes sense, although it is a little scary to think
e.g. of doing a power analysis for such a procedure ...
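The parametric bootstrap idea above can be sketched concretely: simulate data under the fitted null model, refit both models to each simulated data set, and use the resulting distribution of the deviance difference to set a cutoff. A minimal sketch in Python follows (the surrounding discussion presumably concerns R/lme4, but the logic is language-agnostic); the two-group normal comparison, function names, and all parameter values here are my own illustrative choices, not from the post.

```python
import math
import random


def rss(y, mean):
    """Residual sum of squares around a fitted mean."""
    return sum((v - mean) ** 2 for v in y)


def deviance_stat(y1, y2):
    """Deviance difference for one common mean vs. two group means
    (normal errors): n * log(RSS_null / RSS_alt), always >= 0."""
    y = y1 + y2
    n = len(y)
    rss0 = rss(y, sum(y) / n)
    rss1 = rss(y1, sum(y1) / len(y1)) + rss(y2, sum(y2) / len(y2))
    return n * math.log(rss0 / rss1)


def parametric_bootstrap_cutoff(y1, y2, nboot=999, alpha=0.05, seed=1):
    """Simulate under the fitted null (common mean, pooled sd), refit
    both models, and return the (1 - alpha) quantile of the deviance
    difference -- a bootstrap-based cutoff for the likelihood profile."""
    rng = random.Random(seed)
    y = y1 + y2
    n = len(y)
    m = sum(y) / n
    sd = math.sqrt(rss(y, m) / (n - 1))
    stats = []
    for _ in range(nboot):
        sim = [rng.gauss(m, sd) for _ in range(n)]
        stats.append(deviance_stat(sim[: len(y1)], sim[len(y1):]))
    stats.sort()
    return stats[int((1 - alpha) * nboot)]


# Example: data simulated under the null, so the observed statistic
# should usually fall below the bootstrap cutoff.
rng = random.Random(42)
y1 = [rng.gauss(0.0, 1.0) for _ in range(30)]
y2 = [rng.gauss(0.0, 1.0) for _ in range(30)]
observed = deviance_stat(y1, y2)
cutoff = parametric_bootstrap_cutoff(y1, y2, nboot=199)
```

For this nested-normal-means case the cutoff should land near the chi-squared(1) 95% quantile (about 3.84); the appeal of the bootstrap is that the same recipe works where the chi-squared reference is unreliable, e.g. for variance components tested on the boundary.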
I agree that there is a non-zero probability that the _estimate_
will be exactly zero, but my point is that in reality there is
essentially no chance that species, blocks, or other random effects
will not vary at all.
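The distinction between a zero estimate and a truly zero variance can be seen even without fitting a full mixed model: the classical one-way method-of-moments estimator of the among-group variance is (MSB - MSW)/n, truncated at zero, so it lands on exactly zero with substantial probability even when the true variance is positive. This sketch in Python is my own illustration, with arbitrary parameter choices (5 groups of 4, among-group sd 0.1, residual sd 1), not something from the post.

```python
import random


def mom_group_variance(groups):
    """One-way ANOVA method-of-moments estimate of the among-group
    variance, truncated at zero (balanced design assumed)."""
    k = len(groups)
    n = len(groups[0])
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((v - m) ** 2
              for g, m in zip(groups, means)
              for v in g) / (k * (n - 1))
    return max(0.0, (msb - msw) / n)


rng = random.Random(0)
zeros = 0
nsim = 500
for _ in range(nsim):
    # True among-group sd is 0.1 (small but genuinely non-zero)
    # against a residual sd of 1.
    groups = []
    for _ in range(5):
        b = rng.gauss(0.0, 0.1)
        groups.append([b + rng.gauss(0.0, 1.0) for _ in range(4)])
    if mom_group_variance(groups) == 0.0:
        zeros += 1
# A sizeable fraction of the estimates come out exactly zero,
# even though the true among-group variance is positive.
```

REML/ML estimates in mixed models behave the same way at the boundary, which is exactly why "the estimate is zero" should not be read as "the groups do not vary."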

  Ben Bolker