
models with no fixed effects

8 messages · Daniel Farewell, Douglas Bates, Peter Dixon +3 more

#
I'm running into an error when using lmer to fit models with no fixed effects terms.

For example, generating some data with

df <- data.frame(i = gl(5, 5), b = rep(rnorm(5), each = 5))
df$y <- with(df, b + rnorm(25))

and fitting like this

fit1 <- lmer(y ~ 1 + (1 | i), df)

works fine. But fitting like this

fit0 <- lmer(y ~ 0 + (1 | i), df)

gives the following error:

CHOLMOD error: Pl?
Error in mer_finalize(ans) : 
  Cholmod error `invalid xtype' at file:../Cholesky/cholmod_solve.c, line 971

Am I missing something obvious?

Many thanks,

Daniel

Here's my sessionInfo(), in case it's useful:

R version 2.7.1 (2008-06-23) 
i386-apple-darwin8.10.1 

locale:
en_GB.UTF-8/en_GB.UTF-8/C/C/en_GB.UTF-8/en_GB.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] lme4_0.999375-26   Matrix_0.999375-15 lattice_0.17-8    

loaded via a namespace (and not attached):
[1] grid_2.7.1
#
On Thu, Sep 11, 2008 at 8:03 AM, Daniel Farewell
<farewelld at cardiff.ac.uk> wrote:
Admittedly that is a rather obscure error message.  It stems from an
assumption, apparently never verified in the code, that p, the number
of fixed effects, is greater than zero.

I should definitely add a check on p to the validate method.  (In some
ways I'm surprised that it got as far as mer_finalize before throwing
an error.)  I suppose that p = 0 could be allowed and I could add some
conditional code in the appropriate places but does it really make
sense to have p = 0?  The random effects are defined to have mean
zero.  If you have p = 0 that means that E[Y] = 0.  I would have
difficulty imagining when I would want to make that restriction.

Let me make this offer: if someone can suggest circumstances in
which such a model would make sense, I will add the appropriate
conditional code to allow for p = 0.  For the time being I will just
add a requirement of p > 0 to the validate method.
#
Peter Dixon wrote:
Wouldn't you still need the intercept?  The fixed effect tells you 
whether on average the difference differs from zero.  The random effect 
estimates tell you by how much each individual's difference differs from 
the mean difference.

A
#
On Thu, Sep 11, 2008 at 4:09 PM, Peter Dixon <peter.dixon at ualberta.ca> wrote:

I had the same initial thoughts, but for me, the intercept tells me
how much of the difference remains unexplained by the covariates.
Fixing that at 0 is the same as infinitely strong prior belief that
there are no other possible explanations for the observed differences.

-Aaron
#
On 11 Sep 2008, at 22:06, Peter Dixon wrote:

I /think/ I get this, by analogy with how I use AIC/BIC/LRTs to test
predictors.  But still a bit confused.  The two models are:

   y_ij = a + b_j + e_ij     (1)
   y_ij = c_j + e_ij         (2)

Suppose a != 0 in model 1.  Then in model 2:

    c_j = b_j + a.

(Maybe it's not as simple as this!)  But I'm not sure what effect that  
would have on the e_ij's - and my intuition says that's what's going  
to affect the fit.  Also I would have thought model 2 would give a  
better fit since having one fewer predictor is going to have less of a  
penalising effect in the AIC and BIC.
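To see why the fit itself needn't change, here is a minimal base-R sketch (simulated data, not from the thread) of the fixed-effects analogue of models (1) and (2). With k groups, y ~ g fits an intercept plus k - 1 contrasts, while y ~ 0 + g fits k group means: the same column space and the same number of parameters, so the fitted values and the AIC are identical.

```r
# Fixed-effects analogue of models (1) and (2), on hypothetical
# simulated data with 5 groups of 5 observations each.
set.seed(1)
g <- gl(5, 5)
y <- rep(rnorm(5), each = 5) + rnorm(25)

fit1 <- lm(y ~ g)        # analogue of y_ij = a + b_j + e_ij
fit2 <- lm(y ~ 0 + g)    # analogue of y_ij = c_j + e_ij

all.equal(fitted(fit1), fitted(fit2))   # TRUE: same fitted values
all.equal(AIC(fit1), AIC(fit2))         # TRUE: same number of parameters

# The reparameterization c_j = a + b_j recovers one coefficient
# vector from the other (b_1 is the reference level, i.e. zero):
all.equal(unname(coef(fit2)),
          unname(coef(fit1)[1] + c(0, coef(fit1)[-1])))  # TRUE
```

The mixed-model case is where this equivalence breaks down: there the c_j are random effects constrained to have mean zero, so dropping the intercept really does force E[Y] = 0, which is the point made earlier in the thread.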

Andy
#
Not including an intercept term can indeed induce
spurious correlation in linear regression; this
issue is reviewed in Richard Kronmal's excellent paper

 Spurious Correlation and the Fallacy of the Ratio Standard Revisited
 Richard A. Kronmal
 Journal of the Royal Statistical Society. Series A 
  (Statistics in Society), Vol. 156, No. 3 (1993), pp. 379-392 

which could no doubt be readily extended to cover
mixed effect models by A Real Statistician.

The cost of including an intercept term is small
relative to the havoc that can be wreaked by not
including one.
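A minimal base-R sketch of the kind of hazard Kronmal describes (hypothetical simulated data): two independent variables that both happen to have nonzero means. Forcing the regression line through the origin manufactures a large, apparently significant slope.

```r
# x and y are independent, but both are centred around 10.
set.seed(42)
x <- rnorm(200, mean = 10)
y <- rnorm(200, mean = 10)   # generated independently of x

cor(x, y)                            # near 0
coef(summary(lm(y ~ x)))["x", ]      # slope near 0, as it should be
coef(summary(lm(y ~ 0 + x)))["x", ]  # slope near 1, spuriously "significant"
```

The no-intercept slope is just sum(x*y)/sum(x^2), which is driven almost entirely by the two nonzero means rather than by any relationship between the variables.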

Steven McKinney