
Resume terminated lmer fit if verbose=TRUE?

The data come from a dozen or so published experiments using the
Attention Network Test (ANT), a video-game-like test of human
attention. The data contain about 1000 participants, each observed in
a series of 288 "trials" that last only a couple of seconds and terminate
when the participant responds to a target. We record response time and
accuracy of these responses and analyze these variables as a function
of the characteristics of the trial. Trials are characterized by the
kind of cue that preceded the target (4 kinds), the location of the
target (2 locations), the identity of the target (2 identities), and
the kind of distractor items presented concurrently with the target (3
kinds). My colleagues and I are furthermore interested in how the
features of trial N affect performance on trial N+1. For the model of
accuracy, we therefore have:

acc ~ (1|participant) +
cue*previous_cue*distractor*previous_distractor*target_location_repeat*target_identity_repeat
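This is not the author's actual code, but a down-scaled sketch of how such an accuracy model is fit. The real fit would use lme4's glmer() with the (1|participant) random intercept; here a toy dataset with just two of the design factors is simulated and fit with a plain logistic glm so the example runs with base R alone.

```r
# Toy stand-in for the ANT data: only two of the six design factors,
# no participant structure, simulated accuracy.
set.seed(1)
n <- 2000
dat <- data.frame(
  cue        = factor(sample(paste0("cue", 1:4), n, replace = TRUE)),
  distractor = factor(sample(c("congruent", "incongruent", "neutral"),
                             n, replace = TRUE))
)
# Simulate a distractor effect on accuracy.
p <- ifelse(dat$distractor == "incongruent", 0.85, 0.95)
dat$acc <- rbinom(n, size = 1, prob = p)

# Logistic regression with the full factorial interaction, as in the
# formula above (minus the random effect).
fit <- glm(acc ~ cue * distractor, family = binomial, data = dat)

# With lme4 installed, the full mixed model would look like:
# fit <- glmer(acc ~ (1 | participant) +
#                cue * previous_cue * distractor * previous_distractor *
#                target_location_repeat * target_identity_repeat,
#              family = binomial, data = dat, verbose = TRUE)
```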

In theory, I could drop the "cue" and "previous_cue" variables because
my colleagues haven't generated predictions for the influence of those
variables. (There is also a seventh variable that distinguishes a slight
methodological difference between some experiments, hence my earlier
mention of seven fixed-effect variables.)

When the model fitting completes, I aim to generate a posteriori
samples from the model (as implemented in the dev version of
ezPredict: https://github.com/mike-lawrence/ez/blob/master/R/ezPredict.R)
with which my colleagues will investigate a series of comparisons they
have highlighted that distinguish competing theories that aim to
account for performance in this task. (To facilitate such comparisons,
the output of ezPredict is formatted to match the input format
required by ezBootPlot:
https://github.com/mike-lawrence/ez/blob/master/R/ezBootPlot.R).
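For readers unfamiliar with the approach, the basic idea behind such "a posteriori" samples can be sketched in a few lines of base R (this is not the ezPredict source): draw parameter vectors from a multivariate normal centered on the fitted coefficients, with the model's estimated variance-covariance matrix. A stand-in glm on mtcars is used here so the example is self-contained; for a merMod fit one would use fixef() in place of coef().

```r
set.seed(2)
# Stand-in model; in the real application this would be the glmer fit.
fit <- glm(vs ~ mpg, family = binomial, data = mtcars)

n_samp <- 1000
mu  <- coef(fit)
Sig <- vcov(fit)

# Multivariate-normal draws via the Cholesky factor: Sig = t(L) %*% L,
# so z %*% L has covariance Sig for standard-normal z.
L <- chol(Sig)
z <- matrix(rnorm(n_samp * length(mu)), n_samp)
samples <- sweep(z %*% L, 2, mu, "+")   # n_samp x n_params draws
colnames(samples) <- names(mu)

colMeans(samples)  # close to coef(fit)
```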

Indeed, this has already been achieved for the response-time data, and
using the a posteriori samples, one of my colleagues has rather elegantly
proposed a handful of "rules" that appear to account for a large
number of phenomena in the data. After checking that the accuracy data
don't diverge from this account, my next step is to compute likelihood
ratios for each rule (as well as their combination) to quantify the
degree to which they account for the data.
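The likelihood-ratio step can be illustrated with a minimal base-R sketch (again on a stand-in mtcars model, not the actual ANT fits): compare a model embodying a "rule" against a null model via their log-likelihoods. The same logLik() and anova() calls work on glmer fits.

```r
# Stand-in nested models; "rule" here is simply the mpg term.
fit_null <- glm(vs ~ 1,   family = binomial, data = mtcars)
fit_rule <- glm(vs ~ mpg, family = binomial, data = mtcars)

# Log likelihood ratio and the usual chi-square test of the nested models.
llr <- as.numeric(logLik(fit_rule) - logLik(fit_null))
lr_test <- anova(fit_null, fit_rule, test = "Chisq")

exp(llr)  # likelihood ratio favoring the "rule" model
```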

Mike
On Wed, Jul 27, 2011 at 9:05 PM, Dennis Murphy <djmuser at gmail.com> wrote: