Second,
> why are you treating the observed data as a parameter estimate? Why don't
> you actually estimate the model parameters (i.e., the item parameters), which
> are asymptotically unbiased under certain estimation conditions? You can do
> this many ways in R; lme4 can do this using lmer as described here:
> http://www.jstatsoft.org/v20/i02
> Or you can use JML methods for Rasch in the MiscPsycho package, or you can
> use MML methods in the LTM package. What you seem to be doing is treating the
> data as some kind of parameter for the item; but this is not reasonable, I
> don't think.
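[Editor's note: as a rough base-R sketch of the JML idea mentioned above — person and item parameters estimated jointly as fixed effects in a logistic model — with simulated data; all variable names here are hypothetical, not from the thread:]

```r
# Simulate Rasch-type item responses, then estimate item parameters
# jointly with person parameters (the JML approach) via plain glm().
set.seed(1)
n_person <- 200; n_item <- 5
pid <- rep(seq_len(n_person), each = n_item)    # person index per response
iid <- rep(seq_len(n_item), times = n_person)   # item index per response
theta <- rnorm(n_person)                        # true person abilities
beta  <- seq(-1, 1, length.out = n_item)        # true item easiness
resp  <- rbinom(n_person * n_item, 1, plogis(theta[pid] + beta[iid]))

# JML-style estimation: items and persons both enter as fixed effects
fit <- glm(resp ~ 0 + factor(iid) + factor(pid), family = binomial)
item_est <- coef(fit)[seq_len(n_item)]          # first n_item coefficients
```

(The lme4 and LTM routes mentioned above would instead treat persons as random effects or integrate them out; this is only the simplest self-contained analogue.)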
fitting is typically done within each individual and condition of
interest separately, and the resulting parameters are then submitted to
two ANOVAs: one for bias, one for variability. I wonder whether this
analysis might be achieved more efficiently with a single mixed-effects
model, but I'm having trouble figuring out how to approach coding it.
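[Editor's note: the two-stage workflow described above might be sketched as follows in base R — simulated data, hypothetical column names (`id`, `cue`, `soa`, `response`), fit a probit model per individual and condition, then derive the bias and variability parameters:]

```r
# Stage 1: fit a probit psychometric function per id x cue cell,
# extracting bias (-intercept/slope) and variability (1/slope).
set.seed(1)
a <- expand.grid(id = factor(1:4), cue = factor(c("valid", "invalid")),
                 soa = seq(-40, 40, by = 20), rep = 1:10)
a$response <- rbinom(nrow(a), 1, pnorm(0.03 * a$soa))

pars <- do.call(rbind, by(a, list(a$id, a$cue), function(d) {
  fit <- glm(response ~ soa, family = binomial(link = "probit"), data = d)
  b <- coef(fit)
  data.frame(id = d$id[1], cue = d$cue[1],
             bias = -b[1] / b[2], variability = 1 / b[2])
}))

# Stage 2 would then be, e.g.:
# summary(aov(bias ~ cue + Error(id/cue), data = pars))
```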
> I'm not sure I can help you here, as I am unclear on what you are doing
> exactly. Maybe if we elaborate a bit on what you are trying to do above, we
> can do this part next.
Here is an example of data similar to that collected in this sort of
research, where individuals fall into two groups (variable "group"),
and are tested under two conditions (variable "cue") across a set of
values from a continuous variable (variable "soa"), with each cue*soa
combination tested repeatedly within each individual. A model like
fit = glmer(  # in current lme4, binomial models use glmer() rather than lmer()
    formula = response ~ (1|id) + group*cue*soa
    , family = binomial( link='probit' )
    , data = a
)
employs the probit link, but of course yields estimates for the slope
and intercept of a linear model on the probit scale, and I'm not sure
how (if it's even possible) to convert the conclusions drawn on this
scale to conclusions about the bias and variability parameters of
interest.
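[Editor's note: the conversion asked about here is direct algebra. If the fitted model on the probit scale is qnorm(p) = b0 + b1*soa, that is the same curve as p = pnorm((soa - mu)/sigma) with bias mu = -b0/b1 (the SOA where p = 0.5) and variability sigma = 1/b1. The numbers below are made up for illustration:]

```r
# Convert probit-scale intercept/slope to bias and variability parameters.
b0 <- -1.5    # hypothetical fitted intercept on the probit scale
b1 <-  0.05   # hypothetical fitted slope on the probit scale

mu    <- -b0 / b1   # bias (point of subjective equality), = 30 here
sigma <-  1  / b1   # variability (spread of the function), = 20 here

# Check: both parameterizations give the same response probability.
soa <- 20
stopifnot(isTRUE(all.equal(pnorm(b0 + b1 * soa),
                           pnorm((soa - mu) / sigma))))
```

(Extending this to the group*cue*soa fixed effects means the slope and intercept, and hence mu and sigma, differ by cell, so contrasts on mu and sigma would need to be built from the cell-wise coefficient combinations, e.g. via the delta method.)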
Thoughts?