
fixed effect testing again (but different)

2 messages · Daniel Ezra Johnson, Douglas Bates

Off list, I was shown a way that supposedly does this: constructing a
pseudo random slope from a dummy variable. It seems promising, but it
doesn't work properly, as the following example shows.

library(boot)  # for inv.logit()

set.seed(1)
s <- rep(rnorm(200, 0, 1), each = 100)    # subject effects, true SD = 1
g <- rep(c(-3, 3), each = 10000)          # gender effect on the logit scale
p <- inv.logit(s + g)
obs <- data.frame(response = rbinom(20000, 1, p),
	gender  = rep(c("M", "F"), each = 10000),
	subject = rep(paste("S", 1:200, sep = ""), each = 100))
obs$M <- ifelse(obs$gender == "M", 1, 0)  # dummy variables for the
obs$F <- ifelse(obs$gender == "F", 1, 0)  # pseudo-random-slope trick

test <- lmer(response ~ (0 + M | subject) + (0 + F | subject), obs, binomial)

Random effects:
 Groups  Name Variance Std.Dev.
 subject M     0.82563 0.90864       # out of whack
 subject F    37.79488 6.14775       # out of whack
Number of obs: 20000, groups: subject, 200

obs.m <- obs[obs$gender == "M", ]
test.m <- lmer(response ~ (1 | subject), obs.m, binomial)

Random effects:
 Groups  Name        Variance Std.Dev.
 subject (Intercept) 0.85413  0.9242
Number of obs: 10000, groups: subject, 100

obs.f <- obs[obs$gender == "F", ]
test.f <- lmer(response ~ (1 | subject), obs.f, binomial)

Random effects:
 Groups  Name        Variance Std.Dev.
 subject (Intercept) 0.60097  0.77522
Number of obs: 10000, groups: subject, 100
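[Editorial aside, not part of the original thread: one hedged reading of these numbers is that the joint model has no fixed effect for gender, so one group's per-subject effects must absorb the full 6-logit gap between the groups; a mean-zero normal fitted to effects that really center at 6 has an apparent variance of roughly 6^2 + 1 = 37, which is in the neighbourhood of the 37.79 fitted above. A back-of-the-envelope check:]

```r
# Hypothesis (not asserted in the thread): if the F subjects' random
# intercepts must absorb the gender gap because gender is absent from the
# fixed part of the model, their apparent variance is approximately the
# squared gap plus the true subject variance.
gap      <- 3 - (-3)          # gender difference on the logit scale
true_var <- 1                 # simulated subject variance
apparent <- gap^2 + true_var
apparent                      # 37, close to the fitted 37.79
```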

Is there, then, any way to implement heteroscedastic-by-group random
effects in lme4, as opposed to nlme?

Thanks,
D
On Fri, Aug 29, 2008 at 1:09 PM, Daniel Ezra Johnson
<danielezrajohnson at gmail.com> wrote:
At present, no - at least none that I know of (and I am usually
reasonably well informed regarding the capabilities of lme4).

I am currently working on abstracting the role of the parameters in
model-fitting within lme4 by redesigning the classes.  The current
design ties the parameters to a particular representation of the
relative covariance matrix of the random effects and it should be
generalized.  The trick is to generalize the approach without
sacrificing too much in the way of performance.
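
[Editorial aside, not part of the original thread: a hedged conjecture is that the inflated F variance in the first message comes from the missing fixed effect for gender. The sketch below, assuming a current lme4 where the binomial model is fitted with glmer(), adds gender to the fixed part of the same dummy-variable model; the simulation is shrunk so the fit runs quickly. This is a sketch under those assumptions, not the thread's own resolution:]

```r
# Sketch: the dummy-variable trick with gender as a fixed effect, so the
# per-gender random intercepts no longer absorb the gender offset.
# Assumes a current lme4 providing glmer().
library(lme4)

set.seed(1)
n_subj <- 40   # subjects per gender (smaller than the original 100)
n_obs  <- 50   # observations per subject
s <- rep(rnorm(2 * n_subj, 0, 1), each = n_obs)   # subject effects, SD = 1
g <- rep(c(-3, 3), each = n_subj * n_obs)         # gender effect (logits)
p <- plogis(s + g)                                # base-R inverse logit
obs <- data.frame(
  response = rbinom(2 * n_subj * n_obs, 1, p),
  gender   = rep(c("M", "F"), each = n_subj * n_obs),
  subject  = rep(paste("S", 1:(2 * n_subj), sep = ""), each = n_obs))
obs$M <- ifelse(obs$gender == "M", 1, 0)
obs$F <- ifelse(obs$gender == "F", 1, 0)

fit <- glmer(response ~ gender + (0 + M | subject) + (0 + F | subject),
             data = obs, family = binomial)
print(VarCorr(fit))   # both variances should now be near the true value 1
```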