Simple explanation of the lme Algorithms?
On Sun, Mar 25, 2007 at 12:31:12PM -0500, Douglas Bates wrote:
On 3/25/07, Andrew Robinson <A.Robinson at ms.unimelb.edu.au> wrote:
Hi everyone,
I'm trying to figure out how the loops work together in lme(). I
understand that we start with some EM iterations to get close to the
optimum, and then switch to Newton-Raphson (eg Pinheiro and Bates
2000, p. 80). However, I can't reconcile that understanding with my
reading of the lmeControl switches. There, I see
maxIter: maximum number of iterations for the 'lme' optimization
algorithm. Default is 50.
msMaxIter: maximum number of iterations for the 'nlm' optimization
step inside the 'lme' optimization. Default is 50.
niterEM: number of iterations for the EM algorithm used to refine the
initial estimates of the random effects variance-covariance
coefficients. Default is 25.
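For concreteness, here is a minimal sketch of how these three limits are passed through lmeControl() (the Orthodont data set ships with the nlme package; the numIter component is described on ?lmeObject):

fm <- lme(distance ~ age, data = Orthodont, random = ~ 1 | Subject,
          control = lmeControl(maxIter = 50,    # outer 'lme' loop
                               msMaxIter = 50,  # inner nlm (Newton-Raphson) step
                               niterEM = 25))   # initial EM refinement
fm$numIter  # iterations actually used by the iterative algorithm
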
Clearly, niterEM covers the initial EM portion, and I guess that
msMaxIter refers to the invocation of nlm, which performs the
Newton-Raphson optimization, but then what role does maxIter play, and
what is the 'lme' optimization?
IIRC, the maxIter setting is for cases where there is a variance
function or a correlation function in the model specification.
Conditional on the parameters for the variance function or the
correlation function (or both), the parameters in the mixed-effects
specification are optimized; then the parameters in the variance or
correlation function are updated, then ...
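The alternating scheme described here can be caricatured without any nlme internals at all. A toy sketch (the objective f and all names are purely illustrative, not nlme code), alternately minimizing over one block of parameters while holding the other fixed:

## Block-wise (alternating) optimization: optimize t1 with t2 held
## fixed, then t2 with t1 held fixed, and repeat. The outer loop plays
## the role of maxIter; each optimize() call is an inner step.
f <- function(t1, t2) (t1 - 1)^2 + (t2 + 2)^2 + 0.5 * t1 * t2

t1 <- 0; t2 <- 0
for (outer in 1:20) {
  t1 <- optimize(function(x) f(x, t2), c(-10, 10))$minimum
  t2 <- optimize(function(x) f(t1, x), c(-10, 10))$minimum
}
c(t1, t2)  # converges to the joint minimizer, approximately (1.6, -2.4)

Because each block is updated conditional on the current value of the other, both blocks appear to change on every outer sweep, even though only one is being optimized at a time.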
Thanks for your response, Doug. So, if I may paraphrase: does maxIter
refer to the maximum number of Newton-Raphson steps allowed for the
updating (by which I guess you mean estimation) of the parameters in
the variance or correlation function, having conditioned on the other
fixed and random effects?

If my interpretation is correct, then I'm afraid that I'm still
confused. For example, if I compare the output of
fm1 <- lme(distance ~ age, data = Orthodont, random=~1|Subject,
+            control=list(msVerbose=TRUE))
  0 320.256: -0.390137
  1 320.256: -0.390137

with
fm1 <- lme(distance ~ age, random=~1|Subject, data = Orthodont,
+            weights=varPower(), control=list(msVerbose=TRUE))
  0 320.256: -0.390137 0.00000
  1 320.254: -0.390137 0.00377348
  2 320.251: -0.402940 0.00405120
  3 320.109: -0.793763 0.127338
  4 319.381: -4.39127 1.26259
  5 319.359: -5.08632 1.48580
  6 319.359: -5.08547 1.48633
  7 319.359: -5.08607 1.48651
  0 319.985: -5.08607 1.48651
  1 319.984: -5.08517 1.48796
  2 319.984: -5.07930 1.48451
  3 319.966: -4.93970 1.43987
  4 319.874: -3.63622 1.02756
  5 319.872: -3.44497 0.967633
  6 319.872: -3.44235 0.966866
  7 319.872: -3.44233 0.966860
  0 319.721: -3.44233 0.966860
  1 319.721: -3.44261 0.966623
  ... etc.

then I see that there are indeed an inner and an outer loop with the
addition of the variance function, but both sets of parameters appear
to be updated continuously. So I guess that my interpretation is
incorrect. Can you please help me clarify?

Andrew
If I've missed the obvious page in P&B 2000, or the obvious paper,
then I apologize: please let me know! I tried to find a copy of
Bates, D.M. and Pinheiro, J.C. (1998) "Computational methods for
multilevel models", available in PostScript or PDF formats at
http://franz.stat.wisc.edu/pub/NLME/, but franz appears to be down.

Cheers,

Andrew

--
Andrew Robinson
Department of Mathematics and Statistics         Tel: +61-3-8344-9763
University of Melbourne, VIC 3010 Australia      Fax: +61-3-8344-4599
http://www.ms.unimelb.edu.au/~andrewpr
http://blogs.mbs.edu/fishing-in-the-bay/