Hi everyone,
I'm writing with a question about lmer() in the lme4 package. I've
searched around for answers and done quite a bit of experimentation
with toy data sets to figure out my issue, and I haven't been able to
resolve it.
I'm running linear mixed effects models on a large, sparse dataset in
which I'm regressing reaction time (a continuous variable) on several
categorical factors: Block (Block1/Block2/Block3), Group
(monolingual/bilingual), and Type (target/nontarget).
As a way of examining simple effects, I am dummy-coding specific
factors, setting each level of a given factor as the reference level
in turn. For example, I generate three models with each of the three
levels of Block coded as the reference level, without changing the
codings of the other factors:
## Model with Block 1 as reference level
contrasts(nback.low$Group) <- c(1, 0)  # monoling ref
contrasts(nback.low$Type)  <- c(1, -1)
contrasts(nback.low$Block) <- matrix(c(0, 1, 0, 0, 0, 1), ncol=2)  # B1 ref, B2: 1, B3: 2
glmerNL.RS.SI_RTgxb1 <- glmer(WinRTs ~ (Group*Block*Type) +
    (1+Block+Type|Subject) + (1+Group+Block|Item), data=nback.low)
## Model with Block 2 as reference level
contrasts(nback.low$Group) <- c(1, 0)  # monoling ref
contrasts(nback.low$Type)  <- c(1, -1)
contrasts(nback.low$Block) <- matrix(c(1, 0, 0, 0, 0, 1), ncol=2)  # B2 ref, B1: 1, B3: 2
glmerNL.RS.SI_RTgxb2 <- glmer(WinRTs ~ (Group*Block*Type) +
    (1+Block+Type|Subject) + (1+Group+Block|Item), data=nback.low)
## Model with Block 3 as reference level
contrasts(nback.low$Group) <- c(1, -1)  # monoling 'ref'
contrasts(nback.low$Type)  <- c(1, 0)   # target ref
contrasts(nback.low$Block) <- matrix(c(1, 0, 0, 0, 1, 0), ncol=2)  # B3 ref, B1: 1, B2: 2
glmerNL.RS.SI_RTbxt3 <- glmer(WinRTs ~ (Group*Block*Type) +
    (1+Block+Type|Subject) + (1+Group+Block|Item), data=nback.low)
summary(glmerNL.RS.SI_RTbxt3)
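(As an aside, I believe the hand-built matrices above are equivalent to standard treatment contrasts with the baseline shifted, so the same switch could be written more compactly; for example, for the Block 2-as-reference model:

contrasts(nback.low$Block) <- contr.treatment(3, base = 2)  # same values as the matrix above
## or relevel the factor itself before fitting
nback.low$Block <- relevel(nback.low$Block, ref = "Block2")

I get the same coding either way, but please correct me if I'm misusing these.)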
The issue I'm having is that contrasts that I believe should be
identical are not. Below are summaries of the three models. You can
see that the estimate of the fixed effect of Block1 (the contrast
between Block 1 and Block 2) is -117.98 in the first model and 118.478
in the second model. To my understanding, they should be identical
except for the sign. Similar discrepancies can be seen in the other
Block contrasts.
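For concreteness, this is the comparison I'm making (the coefficient labels are what summary() prints for my unnamed contrast columns, so they may look different in your output):

fixef(glmerNL.RS.SI_RTgxb1)["Block1"]  # B2 - B1, Block 1 as reference: -117.98
fixef(glmerNL.RS.SI_RTgxb2)["Block1"]  # B1 - B2, Block 2 as reference:  118.478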
There are two subjects who have no data at Block 1, so I removed them
and re-ran the models, but the same issue occurred. Separately, I
removed the random effects for Item, without removing those two
subjects, and when I did that the discrepancies disappeared. I have a
feeling this means that my models are too complex for my data, but I'm
not sure what I should look at to (dis)confirm this hunch or how
exactly to proceed if that is the case. (As an example of the
sparseness of the data, items are repeated across subjects, but each
subject has only one data point per item, or zero data points per item
for trials where they didn't respond correctly. However, I didn't get
any warnings about model convergence, or any warnings at
all.)
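In case it helps, these are the diagnostics I was planning to check, though I'm not certain they're the right ones:

## Degenerate random-effects estimates (variances near zero,
## correlations at +/-1) would suggest the structure is too rich:
VarCorr(glmerNL.RS.SI_RTgxb1)
## Near-zero principal components of the random-effects covariance
## matrices would point the same way (rePCA is in newer lme4 versions):
summary(rePCA(glmerNL.RS.SI_RTgxb1))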
Any clues as to why I'm getting these results would be very much appreciated.
Thanks in advance,
Alan Mishler
Research Assistant
University of Maryland
--
## Model 1 output: Block 1 as reference level ##