Message-ID: <CAHr4DycgwzdnKXQSp43pJJJrkiDrELF2PVTeZLCZzizOOj1DiQ@mail.gmail.com>
Date: 2018-11-28T20:33:28Z
From: Maarten Jung
Subject: LMM reduction following marginality: taking out "item" before the "subject:item" grouping factor
In-Reply-To: <CAE9_Wg63ZGgR1ZGut5N5qY4qKexgVc8ADTMDoJt1BtAaDodvGA@mail.gmail.com>
Jake,
thanks for this insight.
So, regarding this issue, there is no difference between taking out the
variance components for main effects before those for interactions within
the same grouping factor, e.g. reducing (1 + A*B | subject) to
(1 + A:B | subject), and taking out the whole grouping factor "item"
(i.e. all of its variance components) before "subject:item"?
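For concreteness, the two reductions I mean would look like this (a minimal
sketch in lme4 syntax; `dat` is a hypothetical data frame, and I show only
the grouping factors involved in each case):

library(lme4)

# (i) within one grouping factor: drop the main-effect slopes for
# "subject" while keeping its interaction slope
m_within <- lmer(y ~ A * B + (1 + A:B | subject) + (1 + A * B | subject:item),
                 data = dat)

# (ii) whole grouping factor: drop all "item" variance components at once,
# keeping "subject:item"
m_whole <- lmer(y ~ A * B + (1 + A * B | subject) + (1 + A * B | subject:item),
                data = dat)
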
And I would be glad if you could answer this related question:
Do the variances of all random slopes (for interactions and main effects)
of a single grouping factor contribute to the standard errors of the fixed
main effects and interactions in the same way?
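To make the question concrete, this is the kind of comparison I have in mind
(again a sketch, reusing the hypothetical `dat` from above):

m_full <- lmer(y ~ A * B + (1 + A * B | subject), data = dat)
m_red  <- lmer(y ~ A * B + (1 + A:B | subject), data = dat)
# compare the standard errors of all fixed effects under the two structures:
# do the dropped main-effect slopes affect them all alike?
cbind(se_full    = coef(summary(m_full))[, "Std. Error"],
      se_reduced = coef(summary(m_red))[, "Std. Error"])
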
Regards,
Maarten
On Wed, Nov 28, 2018, 20:03 Jake Westfall <jake.a.westfall at gmail.com> wrote:
> Maarten,
>
> No, I would not agree that the Bates quote is referring to the principle
> of marginality in the sense of e.g.:
> https://en.wikipedia.org/wiki/Principle_of_marginality
>
> Bates can chip in if he wants, but as I see it, the quote doesn't hint at
> anything like this. It simply says that "variance components of
> higher-order interactions should generally be taken out of the model before
> lower-order terms nested under them" -- which I agree with. The reason this
> is _generally_ true is that hierarchical ordering is _generally_ true.
> But it looks like it's not true in your particular case.
>
>> can you think of a reason why they suggest following this principle other
>> than "higher-order interactions tend to explain less variance than
>> lower-order interactions"?
>
>
> No.
>
> Jake
>
> On Wed, Nov 28, 2018 at 12:53 PM Maarten Jung <
> Maarten.Jung at mailbox.tu-dresden.de> wrote:
>
>> Hi Jake,
>>
>> Thanks for your thoughts on this.
>>
>> I thought that Bates et al. (2015; [1]) were referring to this principle
>> when they stated:
>> "[...] we can eliminate variance components from the LMM, following the
>> standard statistical principle with respect to interactions and main
>> effects: variance components of higher-order
>> interactions should generally be taken out of the model before
>> lower-order terms nested under them. Frequently, in the end, this leads
>> also to the elimination of variance
>> components of main effects." (p. 6)
>>
>> Would you agree with me that this is referring to the principle of
>> marginality? And if so, can you think of a reason why they suggest
>> following this principle other than "higher-order interactions tend to
>> explain less variance than lower-order interactions"?
>>
>> Best regards,
>> Maarten
>>
>> [1] https://arxiv.org/pdf/1506.04967v1.pdf
>>
>> On Wed, Nov 28, 2018 at 7:24 PM Jake Westfall <jake.a.westfall at gmail.com>
>> wrote:
>>
>>> Maarten,
>>>
>>> I think it's fine. I can't think of any reason to respect a principle of
>>> marginality for the random variance components. I agree with the feeling
>>> that it's better to remove higher-order interactions before lower-order
>>> interactions and so on, but that's just because of hierarchical ordering
>>> (higher-order interactions tend to explain less variance than lower-order
>>> interactions), not because of any consideration of marginality. If in your
>>> data you find that hierarchical ordering is not quite true and instead the
>>> highest-order interaction is important while a lower-order one is not, then
>>> it makes sense to me to let your model reflect that finding.
>>>
>>> Jake
>>>
>>> On Wed, Nov 28, 2018 at 12:18 PM Maarten Jung <
>>> Maarten.Jung at mailbox.tu-dresden.de> wrote:
>>>
>>>> Dear list,
>>>>
>>>> In a 2 x 2 fully crossed design, in which every participant responds to
>>>> every stimulus multiple times in each cell of the factorial design, the
>>>> maximal linear mixed model justified by the design (in lme4 syntax)
>>>> should be:
>>>> y ~ A * B + (1 + A * B | subject) + (1 + A * B | item) +
>>>>   (1 + A * B | subject:item)
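>>>>
>>>> For reference, fitting this model and checking whether it is
>>>> overparameterized could look like this (a sketch; `dat` is a
>>>> hypothetical data frame with the columns named above, and rePCA() is
>>>> taken from the RePsychLing package accompanying Bates et al., 2015):
>>>>
>>>> library(lme4)
>>>> fit_max <- lmer(y ~ A * B + (1 + A * B | subject) + (1 + A * B | item) +
>>>>                   (1 + A * B | subject:item), data = dat)
>>>> # PCA of the random-effects covariance matrices; components with
>>>> # (near-)zero variance suggest the model is overparameterized
>>>> summary(RePsychLing::rePCA(fit_max))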
>>>>
>>>> During the model reduction process, be it because the estimation
>>>> algorithm doesn't converge, because the model is overparameterized, or
>>>> because one wants to balance Type-1 error rate and power, I follow the
>>>> principle of marginality: I take out higher-order interactions before
>>>> the lower-order terms (i.e. lower-order interactions and main effects)
>>>> nested under them, and random slopes before random intercepts.
>>>> However, it turns out that the variance components of the grouping
>>>> factor "item" are not significant while those of the grouping factor
>>>> "subject:item" are.
>>>>
>>>> Does it make sense to remove the whole grouping factor "item" before
>>>> taking out the variance components of the grouping factor "subject:item"?
>>>>
>>>> A reduced model could, for instance, look like this:
>>>> y ~ A * B + (1 + A | subject) + (1 | subject:item)
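>>>>
>>>> A likelihood-ratio test of this reduction against the maximal model
>>>> could then look like this (a sketch, with fit_max from the code above):
>>>>
>>>> fit_red <- lmer(y ~ A * B + (1 + A | subject) + (1 | subject:item),
>>>>                 data = dat)
>>>> anova(fit_red, fit_max)  # refits both models with ML and compares them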
>>>>
>>>> I'm not sure whether this contradicts the principle of marginality and,
>>>> in general, whether this is a sound approach.
>>>>
>>>> Any help is highly appreciated.
>>>>
>>>> Best regards,
>>>> Maarten
>>>>
>>>