repeated measure in partially crossed design

5 messages · Paul Miller, Ben Bolker, matteo dossena

#
matteo dossena <m.dossena at ...> writes:
If you really want to "... assess[] the effect of treatment, season
and their interaction on the relationship between the two variables",
you may want treatment*season*V2 as fixed effect (so you can tell whether 
the V1~V2 relationship changes with treatment and season)

  Having any *factor* included as both a fixed effect and a random
effect will cause trouble, e.g. in your model (2).  (On the other
hand, it does sometimes make sense to include a _continuous_ predictor
as both fixed (which will estimate a linear trend) and random (which
will allow for variation around the linear trend) -- this only makes
sense if you have multiple measurements per value of the predictor,
though.)  Another apparent exception to this is subject in the
(1|treatment/subject) term, which is only included as subject nested
within treatment.
Here both treatment and season are included as both fixed and random --
probably not a good idea.
Still probably don't want season and (1|season)
This is not unreasonable.  You could consider (season|subject),
or (1|subject)+(0+season|subject) [which fits the intercept and slope
independently], since you have both seasons assessed for each individual.
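As a minimal sketch of the two alternatives suggested above -- assuming a data frame `dat` with columns V1, V2, treatment, season, and subject (names taken from this thread; this is illustrative, not tested code):

```r
library(lme4)

## Correlated random intercept and season effect for each subject:
m1 <- lmer(V1 ~ treatment * season * V2 + (season | subject), data = dat)

## Intercept and season effect fitted independently (no correlation
## between the two random terms):
m2 <- lmer(V1 ~ treatment * season * V2 + (1 | subject) + (0 + season | subject),
           data = dat)
```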

   This gets raised a lot on this list, but: I would generally only
drop a random effect from the model if it actually appears overfitted
(i.e. variance estimated as zero, or a perfect +1/-1 correlation between
random effects), and not if it is merely non-significant (Hurlbert
calls this "sacrificial pseudoreplication").  I've been very impressed
by the results from the blme package, which incorporates a weak
Bayesian prior to push underdetermined variance components away from
zero ...
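A hedged sketch of the blme approach just mentioned: blmer() uses the same formula interface as lmer(), adding (by default) a weak prior on the random-effects covariance. Again, `dat` and the variable names are placeholders from this thread:

```r
library(blme)

## Same model structure as with lmer(), but blme's default weak
## covariance prior keeps variance estimates off the zero boundary:
m_b <- blmer(V1 ~ treatment * season * V2 + (season | subject), data = dat)
```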
#
Really appreciate it, Ben,

this really makes things clearer now; it seems like (season|subject) could be the appropriate structure.

However, one last doubt still troubles me.

With (season|subject) fitted as a random effect, does the model take into account the pseudoreplication (repeated measures on subjects)?
If I were doing this analysis with lme(), I would fit one model with the argument correlation = corCompSymm(form = ~1|subject)
and one model without the correlation, then compare the two to assess whether or not there is a violation of independence.
Is this a sensible thing to do?

Since I'm working with lmer(), how can I check whether a correlation structure has to be included in the model?

Cheers
m.

On 1 Feb 2012, at 02:15, Ben Bolker wrote:
#
Hello Everyone,

I'm familiar with the use of Two-Stage Least Squares Analysis to obtain results like one gets with SEM. I was wondering if anyone knows how to extend this approach to nested data. My data contain multiple observations from cancer patients. The number of observations varies by patient and the intervals between observations are not equally spaced.

Is it possible to apply a 2SLS approach to my data? If so, are there measures of model fit from the standard 2SLS approach that will work if I'm using a mixed model to account for non-independence of observations in my data? Or are there maybe some other measures I could use?

Thanks,

Paul
6 days later
#
matteo dossena <m.dossena at ...> writes:
Yes.
I have to admit I don't quite understand why people fit
CorCompSymm models so much since they are *almost* equivalent to
just including a random effect of the form ~1|subject (with the
difference, I guess, that negative within-cluster correlations
are possible, while random=~1|subject enforces positive correlations).
This is partly a philosophical question.  I would say that if
subject blocking is part of your experimental/sampling design then
you should include it in the model in any case, unless it causes
severe technical difficulties with the fitting.
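The near-equivalence described above can be sketched in nlme (again assuming the hypothetical data frame `dat` used elsewhere in this thread):

```r
library(nlme)

## Compound-symmetry correlation structure, no random effects:
m_cs <- gls(V1 ~ V2, correlation = corCompSymm(form = ~ 1 | subject),
            data = dat)

## Random intercept per subject -- (almost) the same model, except
## that the implied within-subject correlation must be non-negative:
m_ri <- lme(V1 ~ V2, random = ~ 1 | subject, data = dat)
```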

   CorCompSymm is not a possibility in lmer.  In principle 
you can do a likelihood ratio test, but lmer won't fit models
without any random effects.  You could try the RLRsim package.
See also advice in <http://glmm.wikidot.com/faq> about how
(and whether) to test random effects.
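The two testing routes mentioned above can be sketched as follows; the model and data names are placeholders, and whether anova() will accept the lm() fit directly depends on the lme4 version:

```r
library(lme4)
library(RLRsim)

## Fit with and without the random effect (ML for the LRT comparison);
## lmer() itself cannot drop all random effects, hence the lm() fit:
m_full  <- lmer(V1 ~ V2 + (1 | subject), data = dat, REML = FALSE)
m_fixed <- lm(V1 ~ V2, data = dat)

## Likelihood ratio test; conservative, because the null value
## (variance = 0) lies on the boundary of the parameter space:
anova(m_full, m_fixed)

## Simulation-based exact test of the single random effect
## (RLRsim expects a REML fit):
exactRLRT(update(m_full, REML = TRUE))
```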
[snip snip]
#
Thanks a lot Ben,

most probably, my doubt arises from a still superficial understanding of the topic.
I guess the correlation matrix matters more when it is not simply compound-symmetric and
when the analysis is actually investigating the temporal dynamics.
In my case, I'm interested in fitting a model that properly accounts for the experimental design.

Cheers
m.

On 7 Feb 2012, at 16:56, Ben Bolker wrote: