-----Original Message-----
From: r-sig-mixed-models-bounces at r-project.org
[mailto:r-sig-mixed-models-bounces at r-project.org] On Behalf
Of Rolf Turner
Sent: Tuesday, July 29, 2008 6:48 PM
To: R-Sig-mixed-models
Subject: [R-sig-ME] Fitting a ``trivial'' model with lmer().
I sent the message, to be found below, to Doug Bates just now.
Since he is flat out preparing a number of workshops, he
suggested that I post to this SIG to see if I could get some
enlightenment.
Ergo, here goes:
I wrote (to the general R-help list; 28 July 2008):
<snip>
How can I (is there any way that I can) tell lmer() to fit
the most general possible covariance structure?
and Doug Bates responded:
It sounds like you want a model formula of
lmer(y ~ tstnum + (0 + tstnum|stdnt), data=schooldat)
but that model will have 21 variance-covariance terms to
estimate (22 if you count the residual variance, but that one
gets profiled out of the optimization). I would not be
surprised if the estimated
variance-covariance matrix for the random effects turns out to be
singular.
I tried the above formulation and (as I posted yesterday) it
gives a false-convergence warning from nlminb. I think there
are indeed singularity problems, or rather a certain
indeterminacy.
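For concreteness, here is a minimal sketch of the sort of
thing I am running. The names schooldat, tstnum and stdnt come
from the formula above; the simulated data (6 tests on each of
50 students, pure noise) are just a stand-in for my real data:

library(lme4)
set.seed(42)
nstdnt <- 50
ntst   <- 6
schooldat <- data.frame(
    stdnt  = factor(rep(1:nstdnt, each = ntst)),
    tstnum = factor(rep(1:ntst, times = nstdnt)),
    y      = rnorm(nstdnt * ntst)   # placeholder response
)
fit <- lmer(y ~ tstnum + (0 + tstnum | stdnt), data = schooldat)
VarCorr(fit)   # the 6 x 6 Gamma: 21 variance-covariance parameters

With data like these one may well reproduce the false-convergence
warning and a (near-)singular estimate of Gamma.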
I think that the foregoing formulation translates into
``mathematical expressions'' as
y_ij = mu + alpha_i + s_j + e_ij
(with treatment ``contrasts'' giving the constraint that
alpha_1 = 0) where alpha_i corresponds to the i-th time, s_j
corresponds to the j-th student, and the e_ij are (Gaussian)
errors with mean 0 and a covariance structure
Cov(e_ij, e_{i',j'}) = 0             if j != j'
                     = gamma_{i,i'}  if j = j'
Translated into my ``trivial model'' thinking, this can be
expressed as
Y_j = MU + s_j + E_j
where the Y_j and E_j are 6-dimensional vectors and the
addition in the foregoing follows the S/R convention whereby
the scalar s_j is added to each component of the vectors.
And of course MU is the 6-dimensional vector of mean values.
The vectors Y_j are independent N(MU, Sigma) where

Sigma = Gamma + sigma^2 J

with Gamma = [gamma_{i,i'}], J the 6 x 6 matrix all of whose
entries are 1, and sigma^2 the variance of the (independent,
mean 0) s_j. I believe that sigma^2 is the residual variance
in the lmer() formulation of the model.
The covariance matrix Sigma is well-determined/estimable, but
Gamma and sigma^2 are not --- we can ``sensibly'' take
Gamma = Sigma - sigma^2 J for any value of sigma^2 such that
Sigma - sigma^2 J is positive definite. In general there will
be a wide range of such values of sigma^2.
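A small numerical illustration of this indeterminacy (the
particular Sigma is made up, of course):

Sigma <- 0.5 * diag(6) + 0.5   # exchangeable: 1 on the diagonal, 0.5 off it
J     <- matrix(1, 6, 6)
for (s2 in c(0, 0.1, 0.2, 0.3)) {
    G <- Sigma - s2 * J
    cat("sigma^2 =", s2, " min eigenvalue of Gamma =",
        min(eigen(G, symmetric = TRUE)$values), "\n")
}

All four candidate Gammas come out positive definite, so the
decomposition Sigma = Gamma + sigma^2 J is far from unique.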
It seems to me that the s_j are redundant in the model; we just need:
y_ij = mu + alpha_i + e_ij
Is there a way to formulate the foregoing model in lmer()?
Effectively, from one p.o.v., one wants to constrain the
sigma^2 variance component to be 0.
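(One workaround I can see, though it sidesteps lmer() entirely,
is gls() from nlme, which fits a completely general
within-student covariance directly --- a general correlation
via corSymm() plus per-time variances via varIdent() --- with
no random-effects/residual split to be indeterminate:

library(nlme)
fit2 <- gls(y ~ tstnum, data = schooldat,
            correlation = corSymm(form = ~ 1 | stdnt),
            weights     = varIdent(form = ~ 1 | tstnum))

But my question about the lmer() syntax stands.)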
Of course one doesn't need lmer() to fit such a model. I
just want to get some understanding of the modelling syntax
in lmer() and perhaps be able to fit more complicated models
which are related to the foregoing ``trivial'' one.
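For completeness, the ``trivial'' fit itself is immediate;
assuming, as in the sketch above, that each student's six
responses occupy consecutive rows of schooldat:

Y         <- matrix(schooldat$y, ncol = 6, byrow = TRUE)  # one row per student
MU.hat    <- colMeans(Y)                          # MLE of MU
Sigma.hat <- cov(Y) * (nrow(Y) - 1) / nrow(Y)     # ML (not unbiased) covariance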
Thanks.
cheers,
Rolf Turner