Maarten,
Regarding whether it makes conceptual sense to have a model with random
slopes but not random intercepts: I believe the context of this
recommendation is an experiment where the goal is to do a confirmatory test
of whether the associated fixed slope = 0. In that case, as long as the
experiment is fairly balanced, the random slope variance appears in (and
expands) the standard error for the fixed effect of interest, while the
random intercept variance has little or no effect on the standard error
(again, assuming the experiment is close to balanced). So we'd like to keep
the random slopes in the model if possible so that the type 1 error rate
won't exceed the nominal alpha level by too much. But keeping the random
intercepts in the model is less important because it should have little or
no impact on the type 1 error rate either way, although it would be
conceptually strange to have random slopes but not random intercepts. So,
anyway, that's the line of thinking as I understand it, and I don't think
it's crazy.
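In lme4-style syntax, that slopes-without-intercepts structure can be written by suppressing the by-group intercept with `0 +`. A minimal sketch of the two formulas (the names y, c1, c2, and group are placeholders, matching the m1 example below):

```r
# Zero-correlation model with by-group random intercept and slopes (as in m1)
m_full <- y ~ 1 + c1 + c2 + (1 + c1 + c2 || group)

# Slopes-only version: "0 +" drops the by-group random intercept, keeping
# only the slope variances that widen the SE of the fixed slopes of interest
m_slopes <- y ~ 1 + c1 + c2 + (0 + c1 + c2 || group)
```

Both would then be fitted with lme4::lmer() (or afex::lmer_alt() for the `||` version when the predictors are categorical).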
Jake
On Wed, Aug 29, 2018 at 7:18 AM Maarten Jung <
Maarten.Jung at mailbox.tu-dresden.de> wrote:
Sorry, hit the send button too fast:
# here c1 and c2 represent the two contrasts/numeric covariates defined
# for the three levels of a categorical predictor
m1 <- y ~ 1 + c1 + c2 + (1 + c1 + c2 || group)
On Wed, Aug 29, 2018 at 2:07 PM Maarten Jung <
Maarten.Jung at mailbox.tu-dresden.de> wrote:
On Wed, Aug 29, 2018 at 12:41 PM Phillip Alday <phillip.alday at mpi.nl>
wrote:
Focusing on just the last part of your question:
And, is there any difference between LMMs with categorical and LMMs
with continuous predictors regarding this?
Absolutely! Consider the trivial case of only one categorical predictor
with dummy coding and no continuous predictors in a fixed-effects model.
Then ~ 0 + cat.pred and ~ 1 + cat.pred produce identical models in one
sense, but in the former each level of the predictor is estimated as an
"absolute" value, while in the latter, one level is coded as the
intercept and estimated as an "absolute" value, while the other levels
are coded as offsets from that value.
For a really interesting example, try this:
data(Oats,package="nlme")
summary(lm(yield ~ 1 + Variety, Oats))
summary(lm(yield ~ 0 + Variety, Oats))
Note that the residual error is identical, but all of the summary
statistics -- R2, F -- are different.
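To see in what sense the two fits are the same model, one can check that they produce identical fitted values (and hence identical residuals), even though the summary statistics differ. A quick check, assuming the nlme package is installed for the Oats data:

```r
data(Oats, package = "nlme")
m_cell <- lm(yield ~ 0 + Variety, Oats)  # cell-means coding
m_ref  <- lm(yield ~ 1 + Variety, Oats)  # intercept + offsets
# Same fitted values despite different R^2 and F statistics:
all.equal(fitted(m_cell), fitted(m_ref))  # TRUE
```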
Sorry, I just realized that I didn't make clear what I was talking about.
I know that ~ 0 + cat.pred and ~ 1 + cat.pred in the fixed effects part
are just reparameterizations of the same model.
As I'm working with afex::lmer_alt(), which converts categorical
predictors to numeric covariates (via model.matrix()) by default, I was
talking about removing random intercepts before removing random slopes in
such a model, especially one without correlation parameters [e.g. m1],
and whether this is conceptually different from removing random
intercepts before removing random slopes in an LMM with continuous
predictors.
I.e., I would like to know whether it makes sense in this case, vs.
doesn't make sense in this case but does for continuous predictors, vs.
doesn't make sense in either case.
# here c1 and c2 represent the two contrasts/numeric covariates defined
# for the three levels of a categorical predictor
m1 <- y ~ 1 + c1 + c2 + (1 + c1 + c2 || cat.pred)
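For what it's worth, the conversion that afex::lmer_alt() performs can be mimicked in base R. A sketch of the idea (the factor levels and the choice of sum-to-zero contrasts here are made up for illustration):

```r
# A three-level categorical predictor, expanded into numeric contrast columns
d <- data.frame(cat.pred = factor(rep(c("a", "b", "c"), each = 4)))
contrasts(d$cat.pred) <- contr.sum(3)  # e.g. sum-to-zero contrasts
X <- model.matrix(~ cat.pred, d)       # intercept + two numeric columns
colnames(X)  # "(Intercept)" "cat.pred1" "cat.pred2"
```

The two non-intercept columns of X play the role of c1 and c2 in m1 above.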
Best,
Maarten