I have often referred to the development version of the lme4 package,
called lme4a. At the risk of annoying people who don't want to hear
more about a package that they can't yet use, I provide this update.
The sources for lme4a are available from the SVN archive on R-forge
but binary packages are not. I hope that will change in the near
future.
I have switched to using the marvelous Rcpp package created by Dirk
Eddelbuettel and Romain François, which I heartily recommend to those
writing C++/C code to be loaded into R. Recently Romain has been on a
"code rant" creating "syntactic sugar" that makes it much easier to
write expressions using R vectors in C++ and it has just been too
tempting for me to use these capabilities. That is why binary
packages are not available. Some of the code in the lme4a package
depends on the "Rcpp du jour", more or less, and doesn't build on
systems like win-builder or R-forge because of that dependency. When
Dirk and Romain are ready to release Rcpp_0.8.3 to CRAN we'll be able
to pursue making binary packages available.
Another change from lme4 to lme4a is the use of the bobyqa optimizer
from the minqa package, instead of the nlminb optimizer. Generally I
have been pleased with the results from bobyqa but I am always on the
lookout for good optimizers that will handle nonlinear objective
functions subject to box constraints on the parameters. The lme4a
code is constructed so that the user can create a function to evaluate
the deviance without doing the actual optimization to get the
parameter estimates. This allows for experimentation with other
optimizers. At this summer's useR! conference Stefan Theussl, Kurt
Hornik and David Meyer will talk about their R Optimization
Infrastructure package and I look forward to perhaps writing a generic
interface to several different optimizers through that. (Note to
Stefan et al.: I would also like to write the interface glue for
the optimizers in the minqa package for ROI, once you document what
must be written.)
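To illustrate the idea of evaluating the deviance without running the
optimization, here is a sketch in R. I don't reproduce lme4a's exact
interface here; the sketch uses the analogous devFunOnly mechanism of
the CRAN lme4 package, together with minqa::bobyqa, purely as an
illustration of plugging in one's own box-constrained optimizer:

```r
## Illustrative sketch only: lme4a's exact interface may differ.  The
## analogous mechanism in CRAN lme4 is the devFunOnly argument, which
## returns the profiled deviance as a function of the variance-component
## parameters (theta) without performing the optimization.
library(lme4)
library(minqa)

devfun <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy,
               devFunOnly = TRUE)

## theta has length 3 here (lower-triangular relative covariance factor);
## its diagonal elements are constrained to be non-negative.
devfun(c(1, 0, 1))          # evaluate the deviance at a trial value

## Any box-constrained optimizer can now be substituted, e.g. bobyqa:
opt <- bobyqa(par = c(1, 0, 1), fn = devfun, lower = c(0, -Inf, 0))
opt$par                     # converged variance-component parameters
```

Because the deviance function is exposed directly, swapping bobyqa for
any other optimizer that accepts (par, fn, lower) is a one-line change.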
Generally I am pleased with both the quality of the results and the
speed of the package. For glmer and nlmer there are two
optimizations: the first involves only the variance-component
parameters, and the second involves the variance-component parameters
and the fixed effects. A value of 0 for the optional argument nAGQ
suppresses the second optimization, which can take much longer than
the first. In
many cases the second optimization doesn't improve the result much but
I have seen cases where the result from the second optimization is
considerably better than that from the first. (It should always be at
least as good as the first because the converged values from the first
optimization are used as the starting values for the second.) I
enclose an example where there is a big difference. This is a slight
modification of the R code in Doran, Bates, Bliese and Dowling
(http://www.jstatsoft.org/v20/i02). The good news is that the results
from these model fits are better than the results quoted in that paper
(the bad news is that we should now post a correction).
As I mentioned in a thread started by Dave Atkins, the optional
argument nAGQ to glmer and nlmer can be given the value 0, in which
case a faster algorithm that iterates over the variance-component
parameters only is used.
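To make the nAGQ = 0 shortcut concrete, here is a sketch using the
cbpp data shipped with lme4; the argument names follow the CRAN lme4
conventions and may differ slightly in lme4a:

```r
## Sketch assuming lme4-style arguments; lme4a may differ slightly.
library(lme4)

form <- cbind(incidence, size - incidence) ~ period + (1 | herd)

## nAGQ = 0: fast first stage, iterating over the variance-component
## parameters only
fit0 <- glmer(form, data = cbpp, family = binomial, nAGQ = 0)

## default: also performs the second optimization, over the
## variance-component parameters and the fixed effects jointly
fit1 <- glmer(form, data = cbpp, family = binomial)

## the second fit should attain a deviance at least as low as the first,
## since it starts from the converged values of the first
c(nAGQ0 = deviance(fit0), default = deviance(fit1))
```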
-------------- next part --------------
R version 2.11.1 (2010-05-31)
###################################################
### chunk number 1: preliminaries
###################################################
library("lme4a")
Loading required package: Matrix
Loading required package: lattice
Attaching package: 'Matrix'
The following object(s) are masked from 'package:base':
det
Loading required package: minqa
Loading required package: Rcpp
Attaching package: 'lme4a'
The following object(s) are masked from 'package:stats':
AIC
###################################################
### chunk number 2: conversion
###################################################
data("lq2002", package = "multilevel")
wrk <- lq2002
for (i in 3:16) wrk[[i]] <- ordered(wrk[[i]])
for (i in 17:21) wrk[[i]] <- ordered(5 - wrk[[i]])
lql <- within(reshape(wrk, varying = list(names(lq2002)[3:21]), v.names = "fivelev",
-------------- next part --------------
At the risk of annoying people who don't want to hear about a package they
can't use, but in order to help others be ABLE to use the package: for those
who may be new to R-Forge and have trouble getting hold of lme4a, here is
the method:
svn checkout svn://svn.r-forge.r-project.org/svnroot/lme4
Note that lme4a is part of the R-forge lme4 project, so the standard
means of installing R-forge packages will fail: e.g.,
install.packages("lme4a", repos = "http://r-forge.r-project.org"), the
checkout command above with lme4a in place of lme4, or clicking the
"download lme4a source" link on the lme4 R-forge page.
Hope this helps others suffer less.
--Adam
On Fri, 25 Jun 2010, Douglas Bates wrote: