Hi Ben,
I'm sorry to dredge up this post again - I thought my reply had sent, but I
checked and it was sitting in my drafts folder.
My question was about the sum-to-zero contrasts you suggested. Is there a
recommended way to set these in R, or any references I can read about them
(I am a complete novice)? I was looking into using anova or drop1 on my
models to get these "average" effects for the main effects and the
interaction; however, these two only seem to take the fixed effects into
account in their calculations (do they use Wald statistics?), which seems
pointless to me. What is your view on these two? My design is a balanced
one... I know the glmm wiki recommends using an MCMC approach:
"Tests of effects (i.e. testing that several parameters are simultaneously
zero)
- Wald chi-square tests (e.g. car::Anova)
- Likelihood ratio test (via anova or drop1)
- *For balanced, nested LMMs* where df can be computed: conditional
F-tests
- *For LMMs*: conditional F-tests with df correction (e.g. Kenward-Roger
in pbkrtest package)
- MCMC or parametric, or nonparametric, bootstrap comparisons
(nonparametric bootstrapping must be implemented carefully to account for
grouping factors)"
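For concreteness, is something like the following the right way to set
sum-to-zero contrasts? This is only a sketch on simulated data standing in
for mine - all the variable names (y, Treatment1, Treatment2, Block) are
made up for illustration:

```r
library(lme4)   # lmer
library(car)    # Anova

## Simulated balanced two-treatment design with a blocking factor
## (purely illustrative -- not my real data)
set.seed(1)
d <- expand.grid(Treatment1 = factor(c("A", "B")),
                 Treatment2 = factor(c("X", "Y")),
                 Block = factor(1:10))
d <- d[rep(seq_len(nrow(d)), 3), ]   # three replicates per cell
d$y <- rnorm(nrow(d))

## Sum-to-zero contrasts: each coefficient is a deviation from the
## grand mean, so a main effect is averaged over the other factor
contrasts(d$Treatment1) <- contr.sum(2)
contrasts(d$Treatment2) <- contr.sum(2)

m1 <- lmer(y ~ Treatment1 * Treatment2 + (1 | Block), data = d)

## Type-III Wald chi-square tests -- these are only sensible with
## sum-to-zero (or other orthogonal) contrasts
Anova(m1, type = "III")
```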
I have done this via pvals.fnc(m1) in languageR, but it still gives only a
similar output to my original (with no way of knowing the average effects).
My concern is that the average effects seem to be what supervisors/reviewers
want reported, i.e. the effect of Treatment1, rather than the effect of
Treatment1 level A compared to Treatment1 level B, etc.
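Similarly, would the likelihood ratio and Kenward-Roger routes from the
wiki list give the kind of whole-factor test reviewers want? A sketch of
what I mean, again on made-up data (KRmodcomp from pbkrtest compares a
full model against a reduced one):

```r
library(lme4)
library(pbkrtest)   # KRmodcomp

## Illustrative balanced design, as before -- not my real data
set.seed(1)
d <- expand.grid(Treatment1 = factor(c("A", "B")),
                 Treatment2 = factor(c("X", "Y")),
                 Block = factor(1:10))
d <- d[rep(seq_len(nrow(d)), 3), ]
d$y <- rnorm(nrow(d))

full    <- lmer(y ~ Treatment1 * Treatment2 + (1 | Block), data = d)
reduced <- lmer(y ~ Treatment1 + Treatment2 + (1 | Block), data = d)

## Likelihood ratio test of the interaction (anova refits with ML)
anova(full, reduced)

## Kenward-Roger-corrected conditional F test of the same term
KRmodcomp(full, reduced)
```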
Any thoughts would be much appreciated! I'm finding it hard to find a
consensus anywhere. It is difficult to track down examples of reporting
these things - most focus seems to be on interpretation. Thank you once
again for your advice.
Sarah
On Mon, Sep 9, 2013 at 10:54 PM, Ben Bolker <bbolker at gmail.com> wrote:
Sarah Dryhurst <s.dryhurst at ...> writes:
Hi Ben,
Thank you for your reply! I don't really want to calculate the main
effects as it doesn't make much biological sense (to me!). I just
wasn't sure whether this was "required" in terms of statistical
reporting. That interaction effect is what I am interested in
largely, as it's the combined effect of the different treatments that
is my focus.
I think it would still be worth reporting the main effects, as
their size puts the size of the interaction in perspective (e.g. I
would generally like to be able to judge the size of the interaction
*relative to* the main effects, not just its magnitude/t statistic/
p value ...)
With regards to the lack of variance at the Block level, would you
recommend dropping this level here? It doesn't seem to make too much
sense to keep it there...
Doesn't matter too much, since the results will be almost identical.
May be worth checking without, to double-check that the zero-variance
result hasn't thrown off the optimization, but I would personally
probably err on the side of reporting it (or say that it was in
the original model but estimated as being effectively zero).