
https://stats.stackexchange.com/questions/301763/using-random-effect-predictions-in-other-models?noredirect=1#comment573574_301763

4 messages · Houslay, Tom, Joshua Rosenberg, Jarrod Hadfield

---
Hi Josh,


It sounds like you would be better off using a bivariate model (I'm assuming this is where you were headed with 'combining' the models?), where your response variables are something like 'ESM' and 'Outcome'. You could have these grouped by individual ID and, having controlled for any fixed effects on either response, calculate the among-individual variation in each response and the covariance between them (which can be scaled to a correlation). As you're effectively modelling the relationship between two response variables, I think this makes sense, and it avoids discarding the error associated with predictions from a previous model.
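As a rough sketch of the scaling step, the among-individual covariance from the bivariate model's posterior can be converted to a correlation like so (the (co)variance numbers below are purely illustrative placeholders, not output from a real fit; in MCMCglmm the among-ID block would come from a model along the lines of the commented call):

```r
# Hypothetical bivariate model (not run; ESM, Outcome, ID are placeholders):
#   m_biv <- MCMCglmm(cbind(ESM, Outcome) ~ trait - 1,
#                     random = ~ us(trait):ID,     # among-ID (co)variances
#                     rcov   = ~ us(trait):units,  # within-ID (co)variances
#                     family = c("gaussian", "gaussian"),
#                     data   = dat)

# Illustrative among-individual (co)variance matrix:
V_id <- matrix(c(1.0, 0.4,
                 0.4, 0.8), 2, 2,
               dimnames = list(c("ESM", "Outcome"), c("ESM", "Outcome")))

# Scale the among-individual covariance to a correlation:
r_id <- V_id["ESM", "Outcome"] /
        sqrt(V_id["ESM", "ESM"] * V_id["Outcome", "Outcome"])
r_id
```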


If you have repeated measures for both ESM and Outcome, but these were not measured at the same time, you might have data something like the following:


ID   Repeat   ESM   Outcome
A    1        12    NA
A    2        19    NA
A    3        14    NA
A    1        NA    15
A    2        NA    9
B    ...

etc.
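A data frame in this interleaved format can be built in base R, for instance (column names and values follow the illustration above):

```r
# Each row carries either an ESM value or an Outcome value, with NA
# for the trait that was not measured on that occasion.
esm <- data.frame(ID = "A", Repeat = 1:3,
                  ESM = c(12, 19, 14), Outcome = NA)
out <- data.frame(ID = "A", Repeat = 1:2,
                  ESM = NA, Outcome = c(15, 9))
dat <- rbind(esm, out)
dat
```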


In which case you could calculate both among-individual and residual variation for both traits, but only the among-individual covariance (as you don't have observations of both responses at the same time to estimate the residual/'within-individual' covariance).


Note that if you only had a single observation for 'Outcome', you would simply constrain the residual variation for this trait to be 0 (such that all variation is modelled as 'among-individual').


In case they're useful, we have a brief paper related to this topic in Behavioural Ecology here:

https://doi.org/10.1093/beheco/arx023


...and some tutorials for these kinds of models in MCMCglmm / ASreml-R here:

https://tomhouslay.com/tutorials/


This paper on comparing behaviours measured repeatedly in both short- and long-term sampling regimes might also be of interest:

https://link.springer.com/article/10.1007/s00265-014-1692-0


Good luck!


Tom



Date: Wed, 4 Oct 2017 11:37:44 -0400
From: Joshua Rosenberg <jmichaelrosenberg at gmail.com>
To: r-sig-mixed-models <r-sig-mixed-models at r-project.org>
Subject: [R-sig-ME]
        https://stats.stackexchange.com/questions/301763/using-random-effect-predictions-in-other-models?noredirect=1#comment573574_301763

Message-ID:
        <CANYHYTTdKauDXjgzA-Ghy=dnEUfgJG+UPWMMM4UqtLXxR13k7g at mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"

Hi R-Sig-mixed-models,

My question is about the use of predictions of effects for specific units
(such as an individual in the case of repeated measures data) in other
models. I'm especially interested in whether the group thinks this is a
good / useful approach for using repeated measures data to predict a
longer-term outcome. I am also interested in whether the group has any
suggestions for better ways to do this (or to combine what now requires two
models).

For example, were individual-level predicted effects to be obtained from a
mixed effects model (through a null model, i.e. there is a random intercept
for individuals and no fixed effect), could they be used to predict an
individual-level outcome?

I am thinking about this specifically in the context of repeated measures
data (collected using Experience Sampling Method, or ESM, whereby students
are asked every so often to respond to questions about their interest and
engagement) and pre- and post-survey measures, representing a longer-term
outcome, students' self-reported interest in a STEM career.

Here is how I am thinking about this, using lmer() and lm() to specify the
models:

m1 <- lmer(repeated_measures_outcome ~ 1 + (1 | participant), data)


Process the data to obtain the predicted intercept for each participant.

m2 <- lm(longer_term_outcome ~ prior_level_of_longer_term_outcome +
predicted_intercept_for_participant)
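A minimal base-R sketch of the two-step idea, on simulated data: in a real analysis step 1 would extract the shrunken participant intercepts with ranef() or coef() from the lmer fit, but here raw participant means stand in so the sketch runs without lme4, and all data and names are hypothetical.

```r
set.seed(1)
n_id  <- 30   # participants
n_obs <- 5    # repeated measures per participant

# Simulate repeated-measures data with participant-level intercepts.
dat <- data.frame(participant = rep(1:n_id, each = n_obs))
true_int <- rnorm(n_id)
dat$repeated_measures_outcome <- true_int[dat$participant] + rnorm(nrow(dat))

# Step 1 (stand-in): per-participant mean of the repeated measure.
# With lme4 this would be ranef(m1)$participant[, "(Intercept)"].
step1 <- aggregate(repeated_measures_outcome ~ participant, dat, mean)

# Step 2: regress a (simulated) longer-term outcome on the step-1 summary.
person <- data.frame(
  participant = 1:n_id,
  prior       = rnorm(n_id),
  longer_term_outcome = 2 + 0.8 * true_int + rnorm(n_id, sd = 0.5)
)
person$predicted_intercept <- step1$repeated_measures_outcome
m2 <- lm(longer_term_outcome ~ prior + predicted_intercept, data = person)
coef(m2)
```

Note that this two-step route treats the step-1 summaries as known quantities, which is exactly the discarded-uncertainty issue the bivariate model avoids.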


When I have shared this idea with others (i.e., in this Stack Exchange
question) I have received feedback that a) yes, you can do this and b) you
could / should combine the two models (m1 and m2 in this example) into one
model. This would (obviously?) require using a different approach - but I
do not have a clear idea of what this would require (MCMCglmm? brms?).

Any general or specific advice is welcomed. Thank you for your
consideration of this. If I can or need to provide more detail or
background, then please do not hesitate to tell me so!

Josh

--
Joshua Rosenberg, Ph.D. Candidate
Educational Psychology
&
 Educational Technology
Michigan State University
http://jmichaelrosenberg.com
---
Tom, thank you so much, this is exactly the direction (I didn't know enough
in this case to call it a bivariate model) I was searching for. The
tutorials linked in your response are excellent and I've already started to
use one.

As an aside, I'm sorry for the subject of this post - I meant to insert the
link in the content of my message and have the subject of this message be
"Using random effect predictions in other models."

Josh
On Wed, Oct 4, 2017 at 1:45 PM Houslay, Tom <T.Houslay at exeter.ac.uk> wrote:

---
Hi Josh,

It sounds like one of your responses is a repeat measure and the other is
not. With a bivariate model, this means you want to model the covariance
between the ID random effect for the repeat-measure trait and the
residual for the single-measure trait (an ID term and a residual would
be non-identifiable for the latter). In MCMCglmm you can do this using
the covu argument in the prior. An example is below.

Cheers,

Jarrod

library(MCMCglmm)

N <- 500                                # number of individuals

V <- matrix(c(1, 0.5,
              0.5, 1), 2, 2)            # 2x2 covariance matrix: random effect for the
                                        # repeat-measure trait and residual for the
                                        # single-measure trait

Vr <- matrix(1, 1, 1)                   # residual variance for the repeat-measure trait

u <- MASS::mvrnorm(N, rep(0, 2), V)     # column 1: random effects for the repeat-measure
                                        # trait; column 2: residuals for the
                                        # single-measure trait

e <- rnorm(2 * N, 0, sqrt(Vr))          # residuals for the repeat-measure trait

ysingle <- 1 + u[, 2]                   # single-measure trait has an intercept of 1

individual <- as.factor(rep(1:N, 3))    # individuals are ordered within each
                                        # trait/time combination

type <- as.factor(c(rep("s", N), rep("r", 2 * N)))
                                        # designate which observations are single
                                        # measures (s) or repeat measures (r)

yrep <- -1 + u[rep(1:N, 2), 1] + e      # repeat-measure trait has an intercept of -1

dat1 <- data.frame(y = c(ysingle, yrep), type = type, individual = individual)

prior1 <- list(R = list(R1 = list(V = V, nu = 0, covu = TRUE),
                        R2 = list(V = Vr, nu = 0)))
# Flat priors; covu = TRUE models the covariances between the R1 effects
# and the random effects (no G element is needed because the random-effect
# prior is specified in R1).

m.test1 <- MCMCglmm(y ~ type - 1,
                    random = ~ us(at.level(type, "r")):individual,
                    rcov = ~ us(at.level(type, "s")):individual +
                           us(at.level(type, "r")):units,
                    data = dat1,
                    prior = prior1,
                    verbose = FALSE)
# The repeat-measure residuals have to be fitted in the second residual term.
On 04/10/2017 23:12, Joshua Rosenberg wrote:
---
Thanks for this Jarrod, I didn't know about this method (which seems more elegant than fixing variances in the priors) - I'll investigate and hopefully get to updating those tutorials in due course...