
Mixed Models in a very basic replication design

Tim,

There may be many reasons that different pots would respond to treatment uniquely. 

If you plant a number of seeds, will you have 100% germination in each pot? Could differences in percent germination (and thus, say, root density) affect soil responses, both in the initial measurements (independent intercepts) and in the response over time (independent slopes)?

If you have one plant per pot, can you be sure all plants are genetically or physiologically identical, and respond the same to treatments over time? Or do you need to account for the fact that you're drawing seed from a random population? I had a problem like this years ago, doing a field trial with a soybean variety that was still a segregating population.

How you code 'Pot' depends on how you execute the experiment. If you have 2 levels of A and 2 levels of B, replicated 5 times, then you should have 2*2*5 = 20 pots, assuming the 2 time measurements are taken from the same pot. Then you could write an ANOVA model of the form

aov(y ~ A * B * Time + Replicate + Error(A:B:Replicate))

since the combination of A, B, and Replicate uniquely identifies a pot. Otherwise, you could assign an id to each pot.
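For concreteness, here's a sketch of that layout; the data frame and factor names (A, B, Replicate, Time, Pot) are placeholders for whatever you actually use:

```r
# 2 x 2 treatments, 5 replicates, 2 time points = 40 rows, 20 pots
dat <- expand.grid(A = factor(1:2), B = factor(1:2),
                   Replicate = factor(1:5), Time = factor(1:2))
# The A:B:Replicate combination uniquely identifies each pot,
# or you can carry an explicit pot id:
dat$Pot <- with(dat, interaction(A, B, Replicate))
nlevels(dat$Pot)  # 20 pots, each measured at 2 times
```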

Myself, I would run something like

aov(y ~ A * B * Time + Replicate + Replicate:A:B)

If the (A : B : Replicate) term is not significant relative to the EMS, then I would recalculate using

aov(y ~ A * B * Time)

It's not unreasonable for Replicate effects to be 0, and this analysis would give you a bit more error df; a non-significant test suggests that the A:B:Replicate MS equals the EMS, so only one error term is needed for treatment comparisons.
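As a sketch of that two-step screening (simulated noise for the response y; factor names are placeholders):

```r
set.seed(42)
dat <- expand.grid(A = factor(1:2), B = factor(1:2),
                   Replicate = factor(1:5), Time = factor(1:2))
dat$y <- rnorm(nrow(dat))  # placeholder response
# Screening model: compare the Replicate:A:B mean square to the residual EMS
full <- aov(y ~ A * B * Time + Replicate + Replicate:A:B, data = dat)
summary(full)
# If Replicate:A:B is not significant, pool it into the error term:
pooled <- aov(y ~ A * B * Time, data = dat)
summary(pooled)
```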

If (A : B : Replicate) is significant, you can't compare pairs of means taken from different pots (say, A1B1 at time 2 vs A1B2 at time 2) with the same error term that you use for pairs of means taken from the same pot (A1B1 at time 1 vs A1B1 at time 2). That error structure is easier to get correct using lme or lmer with (1 | Replicate/Pot). I would run

aov(y ~ A * B * Time + Replicate + Error(A:B:Replicate))

just to get a convenient set of F-tests for the A and B effects.
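With simulated placeholder data, that split-plot call looks like this; the pot stratum (A:B:Replicate) carries the whole-plot error, so A, B, and A:B are tested against it while the Time terms are tested against the within-pot stratum:

```r
set.seed(42)
dat <- expand.grid(A = factor(1:2), B = factor(1:2),
                   Replicate = factor(1:5), Time = factor(1:2))
dat$y <- rnorm(nrow(dat))  # placeholder response
# Error(A:B:Replicate) puts pots in their own error stratum
spl <- aov(y ~ A * B * Time + Replicate + Error(A:B:Replicate), data = dat)
summary(spl)  # one ANOVA table per error stratum
```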

If the (A : B : Time) interaction is significant, then you might consider comparing (1 | Replicate/Pot) vs (Time | Replicate/Pot). You could just code (Time | Pot) from the start, but I tend to be conservative when moving beyond a simple AOV. On the face of it, this appears to be a standard repeated-measures-as-split-plot design, so I would plan for an ANOVA, but allow for a mixed-model approach if the data warrant it.

Working on a similar problem, I'm using something like

lme(assessment1 ~ A * B * Time, random = ~ 1 | Replicate/Pot)

as the default mixed-model repeated-measures-as-split-plot analysis, then comparing it to models with correlated errors (e.g. correlation = corAR1()). With only 2 time points you won't need to worry about correlated errors, but it might be useful if you end up taking more measurements.
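A hedged sketch of that comparison with nlme, using simulated data with four time points so the AR(1) parameter has something to estimate (all object and factor names are placeholders):

```r
library(nlme)
set.seed(42)
dat <- expand.grid(A = factor(1:2), B = factor(1:2),
                   Replicate = factor(1:5), Time = factor(1:4))
dat$Pot <- with(dat, interaction(A, B, Replicate))
dat$y <- rnorm(nrow(dat))  # placeholder response
# Independent within-pot errors vs. AR(1) within-pot errors
m0 <- lme(y ~ A * B * Time, random = ~ 1 | Replicate/Pot, data = dat)
m1 <- update(m0, correlation = corAR1(form = ~ 1 | Replicate/Pot))
anova(m0, m1)  # AIC / likelihood-ratio comparison of the error structures
```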

One caveat: if you end up with missing pots, then you should most certainly skip the AOV calculations and start with a mixed model.

Cheers,

Peter