
sanity-checking plans for glmer

Mitchell Maltenfort wrote:
[cross-posting to multiple R lists is discouraged.]

  How difficult this is, practically, depends on how many levels you have
in the random effect with the smallest number of levels.  If this number
is large (>50?  Angrist and Pischke give "42" as the rule of thumb in
their Douglas-Adams-themed econometrics book ...), then you can get away
with Wald-Z or likelihood-ratio inferences without worrying about
adjusting for finite-sample effects (i.e., use anova() or the standard
errors and p-values provided by glmer).  For the random effects, compare
fits with and without the term using anova() [or possibly the RLRsim
package, although I don't know whether it will work in this case].
Confidence intervals on the random effects are harder; see the "standard
errors of variance estimates" section in <http://glmm.wikidot.com/faq>.
(It says there that you can use lme4a, the bleeding-edge R-forge version
of lme4, for profiles, but as of a couple of days ago profiles weren't
available for GLMMs ...)
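  To make the Wald-Z / likelihood-ratio recipe concrete, here is a minimal
sketch; the data set and variable names (dat, resp, x, subject) are invented
for illustration, and the simulated data just stand in for your own:

```r
## Hedged sketch: Wald-Z and likelihood-ratio inference for a glmer fit.
## All names (dat, resp, x, subject) are hypothetical.
library(lme4)

## Simulated binary data with a subject-level random intercept
set.seed(101)
dat <- data.frame(subject = factor(rep(1:60, each = 10)),
                  x = rnorm(600))
dat$resp <- rbinom(600, size = 1,
                   prob = plogis(0.5 * dat$x + rnorm(60)[dat$subject]))

## Full model and reduced model (dropping the fixed effect of x)
m1 <- glmer(resp ~ x + (1 | subject), data = dat, family = binomial)
m0 <- glmer(resp ~ 1 + (1 | subject), data = dat, family = binomial)

summary(m1)    ## Wald-Z standard errors and p-values for the fixed effects
anova(m0, m1)  ## likelihood-ratio test for the fixed effect of x
```

Note that a likelihood-ratio test of a *variance* component computed this
way is conservative, because the null value sits on the boundary of the
parameter space.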

   pvals.fnc from the languageR package hasn't worked for a while, as
far as I know, because it relies on the mcmcsamp() function, which has
been disabled for some months (due to difficulty getting the sampling
algorithm to mix reliably).

  MCMCglmm would be a reasonable alternative.  I don't know how slow it
would be with 10,000 items, but I have used it successfully for a
problem with 1600, so it should work as long as you're willing to be
patient.
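  For reference, a call along these lines is roughly what an MCMCglmm fit
looks like for a binary response; the data and variable names are invented,
and the iteration settings are only rough defaults, not a tuned
recommendation:

```r
## Hedged sketch of an MCMCglmm fit for binary data; all names are
## hypothetical and the chain settings are illustrative only.
library(MCMCglmm)

## Same kind of simulated data as before: binary response, subject effect
set.seed(101)
dat <- data.frame(subject = factor(rep(1:60, each = 10)),
                  x = rnorm(600))
dat$resp <- rbinom(600, size = 1,
                   prob = plogis(0.5 * dat$x + rnorm(60)[dat$subject]))

fit <- MCMCglmm(resp ~ x, random = ~ subject,
                family = "categorical", data = dat,
                nitt = 13000, burnin = 3000, thin = 10,
                verbose = FALSE)
summary(fit)  ## posterior means and 95% credible intervals
```

The posterior summaries give you interval estimates for the variance
components directly, which is one way around the confidence-interval
difficulty mentioned above.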

  good luck,
    Ben Bolker