Maarten,
Just to double-check that I'm getting this right: the entries in each cell of the
table are the numbers by which the variance components are divided in the
noncentrality parameter equation. Is that correct?
Almost. They multiply the variance components, not divide them.
Essentially each row gives the weights of a weighted sum of variance
components. Then to translate that to what appears in the denominator of
the noncentrality parameter, the entire thing is divided by the total
sample size *and we remove the variance component for the effect in
question* (I forgot to mention that part in my last email).
For example, consider the simple design with random participants (P)
nested in fixed groups (G). So g is the number of groups, p is the number
of participants per group, and # is the number of replicates. (This is
design 2 in the dropdown menu of examples.) The EMS table shows that, for
the between-group effect, the coefficients for the error, participant, and
group variance components are, respectively, 1, #, and #p. So the expected
mean square is var_error + # * var_participants + # * p * var_groups. The
total sample size is pg#, so after removing the group variance component and
dividing by pg#, this becomes sqrt(var_error / pg# + var_participants / pg).
Note that this only gives most of the denominator of the noncentrality
parameter expression; it ignores the variance of the contrast weights. You can
read more in the PANGEA working paper, linked in the app.
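To make that arithmetic concrete, here is a minimal R sketch of the bookkeeping
(this is not PANGEA's code; the variance component values, g, p, and the name r
for the replicate count are made up for illustration):

# Design 2: random participants (P) nested in fixed groups (G)
g <- 4    # number of groups
p <- 10   # participants per group
r <- 2    # replicates per participant ("#" in the EMS table)

# Made-up variance components
var_error        <- 0.5
var_participants <- 0.3
var_groups       <- 0.2   # the effect in question; removed below

# EMS for the between-group effect: coefficients 1, #, and #p
ems_groups <- var_error + r * var_participants + r * p * var_groups

# Remove the group component and divide by the total sample size pg#
N <- p * g * r
denom_piece <- sqrt((var_error + r * var_participants) / N)

# The same quantity written component by component:
all.equal(denom_piece,
          sqrt(var_error / (p * g * r) + var_participants / (p * g)))  # TRUE

As noted above, this is only most of the denominator; the variance of the
contrast weights is left out of the sketch.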
Jake
On Thu, Nov 29, 2018 at 7:36 AM Maarten Jung <Maarten.Jung at mailbox.tu-dresden.de> wrote:
Hi Jake,
So, regarding this issue, there is no difference between taking out
variance components for main effects before interactions within the same
grouping factor, e.g. reducing (1 + A*B | subject) to (1 + A:B | subject),
and taking out the whole grouping factor "item" (i.e. all of its variance
components) before "subject:item"?
I think that if you have strong evidence that this is the appropriate
random effects structure, then it makes sense to modify your model
accordingly, yes.
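For concreteness, a minimal lme4-style sketch of the two reductions being
discussed (the data set dat and the variables y, A, B, subject, and item are
hypothetical, not from this thread):

library(lme4)

# Maximal model: by-subject and by-item random slopes for A, B, and A:B
m_max <- lmer(y ~ A * B + (1 + A * B | subject) + (1 + A * B | item),
              data = dat)

# Reduction within one grouping factor: drop the by-subject main-effect
# slopes but keep the interaction slope
m_red1 <- lmer(y ~ A * B + (1 + A:B | subject) + (1 + A * B | item),
               data = dat)

# Taking out the whole grouping factor "item" (all of its variance
# components) while keeping a subject:item term
m_red2 <- lmer(y ~ A * B + (1 + A * B | subject) + (1 | subject:item),
               data = dat)

Whether either reduction is appropriate depends, as above, on having strong
evidence that it reflects the true random effects structure.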
This makes sense to me.
Do all variances of the random slopes (for interactions and main effects)
of a single grouping factor contribute to the standard errors of the fixed
main effects and interactions in the same way?
No -- in general, with unbalanced datasets and continuous predictors,
it's hard to say much for sure other than "no." But it can be informative
to think of simpler, approximately balanced ANOVA-like designs where it's
much easier to say which variance components enter which standard errors
and how.
The standard error for a particular fixed effect is proportional to the
square root of the corresponding mean square divided by the total sample
size, that is, by the product of all the factor sample sizes. So examining
the mean square for an effect will tell you which variance components enter
its standard error and which sample sizes they are divided by in that
expression.
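As a rough illustration of that bookkeeping (again a hypothetical R sketch, not
PANGEA's code, with made-up sample sizes), you can read the divisors straight
off an EMS row and see that different variance components are indeed divided by
different sample sizes:

# EMS coefficients for one effect, e.g. the between-group row of design 2:
# 1 * var_error + r * var_participants + r*p * var_groups
g <- 4; p <- 10; r <- 2
coefs <- c(error = 1, participants = r, groups = r * p)
N <- g * p * r   # total sample size = product of all factor sample sizes

# Dropping the effect's own ("groups") component, each remaining variance
# component ends up divided by N / coefficient:
N / coefs[c("error", "participants")]
#        error participants
#           80           40

Here the error component is divided by pg# and the participant component by pg,
so they do not contribute to the standard error in the same way.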
Your app is very useful, too. Just to double-check that I'm getting this right:
the entries in each cell of the table are the numbers by which the variance
components are divided in the noncentrality parameter equation. Is that
correct?
Regards,
Maarten