
Within-group variance of the coefficients in lme

3 messages · Harold Doran, J.R. Lockwood, Douglas Bates

#
lme does not produce standard errors for the variance components the way HLM does. It does produce SEs for the fixed effects, however, along with t-statistics and p-values, just as HLM does. Use the summary() command to see these.

When you do this, you will get the AIC, BIC, and log-likelihood values. Just below this output are the variance components for the random effects. Note, though, that the level-2 variance components are reported as standard deviations, and no SEs accompany these random effects.

In lme, the residual is the within-group error, i.e., the sigma-squared in HLM (although lme reports it as a standard deviation).
 
In terms of lme, plotting the output of intervals() can be used to assess variability, rather than the chi-square test in HLM.
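A quick sketch of that last point, using the Orthodont data that ships with nlme (an illustrative model, not the poster's):

```r
## Illustrative random-intercept fit on nlme's built-in Orthodont data.
library(nlme)
fm <- lme(distance ~ age, data = Orthodont, random = ~ 1 | Subject)
summary(fm)    # fixed-effect SEs, t-values, p-values; AIC/BIC/logLik
intervals(fm)  # approximate CIs for fixed effects, random-effect SDs, and sigma
```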
 
 

	-----Original Message----- 
	From: Andrej Kveder [mailto:andrejk at zrc-sazu.si] 
	Sent: Wed 6/25/2003 5:24 PM 
	To: R-Help 
	Cc: 
	Subject: [R] within group variance of the coefficients in LME
	
	

	Dear listers, 

	I can't find the variance or se of the coefficients in a multilevel model 
	using lme. 

	I want to calculate a chi-square test statistic for the variability of the 
	coefficients across levels. I have a simple 2-level problem, where I want to 
	check whether a certain covariate varies across level-2 units. Pinheiro and 
	Bates suggest just looking at the intervals, or doing a rather conservative 
	ANOVA test, in their book. I have also consulted Raudenbush and Bryk on the 
	subject, and they suggest using a chi-square statistic. It is defined as 
	follows: 

	SUM by j( (beta_hat_qj - y_hat_q0 - sum(y_hat_qs*w_sj))^2/V_hat_qqj) 

	the betas are the within-group level-2 coefficients - I got them with coef() 
	y_hat is a fixed effect, or grand mean 
	the inner sum accounts for the level-2 predictors, which I don't presently 
	have, but will 
	the problem is V_hat_qqj, which are the variances of the coefficients. 

	I can't get to them. Does anybody have an idea how to get to them? I would 
	really appreciate any suggestion. 

	Andrej 

	_________ 
	Andrej Kveder, M.A. 
	researcher 
	Institute of Medical Sciences SRS SASA; Novi trg 2, SI-1000 Ljubljana, 
	Slovenia 
	phone: +386 1 47 06 440   fax: +386 1 42 61 493 


	______________________________________________ 
	R-help at stat.math.ethz.ch mailing list 
	https://www.stat.math.ethz.ch/mailman/listinfo/r-help
#
The component of an lme() object called "apVar" provides the estimated
asymptotic covariance matrix of a particular transformation of the
variance components. Dr. Bates can correct me if I'm wrong, but I
believe it is the matrix logarithm of the Cholesky factor of the
covariance matrix of the random effects.  I believe the details are in
the book by Pinheiro and Bates.  Once you know the transformation, you
can use the "apVar" elements to get estimated asymptotic standard
errors for your variance component estimates using the delta method.
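A sketch of that calculation for the simplest case, a single random intercept, again fitted to nlme's Orthodont data as a stand-in for the poster's model. With a 1x1 random-effects structure, the transformed parameters stored with apVar are just log standard deviations, so the delta method reduces to multiplying by the back-transformed estimate:

```r
library(nlme)
fm   <- lme(distance ~ age, data = Orthodont, random = ~ 1 | Subject)
V    <- fm$apVar         # approx. covariance matrix of the transformed parameters
pars <- attr(V, "Pars")  # here: log SD of the intercept, log of sigma
sds  <- exp(pars)        # back-transform to standard deviations
## Delta method: Var(exp(theta)) ~ exp(theta)^2 * Var(theta)
cbind(estimate = sds, se = sds * sqrt(diag(V)))
```

(Note that apVar can degenerate to an error-message string when the approximate Hessian is not positive definite; check with is.matrix() before using it.)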

J.R. Lockwood
412-683-2300 x4941
lockwood at rand.org
http://www.rand.org/methodology/stat/members/lockwood/
4 days later
#
"J.R. Lockwood" <lockwood at rand.org> writes:
First, thanks to those who answered the question.  I have been away
from my email for about a week and am just now catching up on the
r-help list.

As I understand the original question from Andrej he wants to obtain
the standard errors for coefficients in the fixed effects part of the
model.  Those are calculated in the summary method for lme objects and
returned as the component called 'tTable'.  Try

library(nlme)
example(lme)
summary(fm2)$tTable

to see the raw values.

Other software for fitting mixed-effects models, such as SAS PROC
MIXED and HLM, return standard errors along with the estimates of the
variances and covariances of the random effects.  We don't return
standard errors of estimated variances because we don't think they are
useful.  A standard error for a parameter estimate is most useful when
the distribution of the estimator is approximately symmetric, and
these are not.
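A quick simulation (my own illustration, not from the thread) makes the point concrete: the sampling distribution of a variance estimate is strongly right-skewed in small samples, while its logarithm is much closer to symmetric, so "estimate plus or minus 2 SE" is a poor summary on the variance scale:

```r
set.seed(1)
## 5000 sample variances, each from n = 10 standard-normal observations
v <- replicate(5000, var(rnorm(10)))
skew <- function(x) mean((x - mean(x))^3) / sd(x)^3
c(variance_scale = skew(v), log_scale = skew(log(v)))
## the variance-scale skewness is large and positive; the log scale is near 0
```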

Instead we feel that the variances and covariances should be converted
to an unconstrained scale, and preferably a scale for which the
log-likelihood is approximately quadratic.  The apVar component that
you mention is an approximate variance-covariance matrix of the
variance components on an unbounded parameterization that uses the
logarithm of any standard deviation and Fisher's z transformation of
any correlations.  If all variance-covariance matrices being estimated
are 1x1 or 2x2 then this parameterization is both unbounded and
unconstrained.  If any are 3x3 or larger then this parameterization
must be further constrained to ensure positive definiteness.
Nevertheless, once we have finished the optimization we convert to
this 'natural' parameterization to assess the variability of the
estimates because these parameters are easily interpreted.
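A sketch of what that looks like for a 2x2 case (random intercept and slope), once more using Orthodont as an assumed example; the ordering of the "Pars" entries below (two log SDs, then Fisher's z of the correlation, then the log residual SD) is inferred from their names and should be checked against print(pars):

```r
library(nlme)
fm   <- lme(distance ~ age, data = Orthodont, random = ~ age | Subject)
V    <- fm$apVar
pars <- attr(V, "Pars")  # log SD(intercept), log SD(age), atanh(cor), log(sigma)
exp(pars[1:2])           # random-effect standard deviations
tanh(pars[3])            # intercept-slope correlation (inverse of Fisher's z)
exp(pars[4])             # residual standard deviation
```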

The actual optimization of the profiled log-likelihood is done using
the log-Cholesky parameterization that you mentioned because it is
always unbounded and unconstrained.  Interpreting elements of this
parameter vector is complicated.
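For the curious, the log-Cholesky idea can be sketched in a few lines (a toy illustration, not nlme's internal code): any real vector of length q(q+1)/2 maps to a positive-definite q x q matrix by filling an upper-triangular factor and exponentiating its diagonal, which is why the optimization can run unconstrained:

```r
## Toy version of the log-Cholesky parameterization (not nlme's internals).
logchol_to_pd <- function(theta, q) {
  U <- matrix(0, q, q)
  U[upper.tri(U, diag = TRUE)] <- theta  # fill upper triangle column by column
  diag(U) <- exp(diag(U))                # exponentiating makes the diagonal
                                         # positive, so the factor is unique
  crossprod(U)                           # t(U) %*% U is positive definite
}
## any real 6-vector yields a valid 3x3 covariance matrix
eigen(logchol_to_pd(rnorm(6), 3))$values  # all positive
```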

I hope this isn't too confusing.