
Should I use full models when using Powersim?

Dear Chi,

Does your maximal model converge for your observed data? Are you able to
detect all assumed "true" effects (i.e. are all of your effects of
interest significant)?

If the answer is no, then your data are too noisy to give you a reliable
estimate of the effect, and that noise can show up as weird results in
simulation-based power analyses. Basically, if your estimates are noisy,
then your simulations will be noisy and your power estimates won't be
reliable. The same holds for power analysis without simulation, just
less obviously: if you don't have a good estimate of your effect size,
then your power analysis will be misleading.

As far as computational time goes: use the model in your power analysis
that you want to use in your final analysis. After all, it's the power
of your final analysis that you want to estimate! There is a large
literature debating the tradeoffs in Type I and Type II error under
different random-effects structures, and since you haven't revealed
anything else about your data and experimental design, there's little
specific advice I can give. However, in my experience, removing
interaction terms from the random effects generally speeds up the
computation a lot without really changing model fit. Since the
interaction is the effect you care about, rather than dropping it I
would reparameterize the model so that the interaction enters as a main
effect of a single factor coding the design cells, e.g.

data$ab <- interaction(data$a, data$b)

fit <- glmer(B ~ a * b + (1 + a + ab | Subject), ...)
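If "Powersim" here means simr's powerSim(), the reparameterized model
drops straight in. A minimal sketch with toy data (everything here --
the data frame d, the response y, the 0.8 effect size -- is a
hypothetical stand-in for your actual design):

```r
library(lme4)
library(simr)

## Toy 2x2 within-subject design standing in for the real data.
set.seed(1)
d <- expand.grid(a = c(-0.5, 0.5), b = c(-0.5, 0.5),
                 Subject = factor(1:30), rep = 1:10)
d$y  <- rbinom(nrow(d), 1, plogis(0.8 * d$a * d$b))
d$ab <- interaction(d$a, d$b)  # interaction recoded as one factor

fit <- glmer(y ~ a * b + (1 + ab | Subject),
             family = binomial, data = d)

## Set the assumed "true" interaction effect, then simulate power for it.
fixef(fit)["a:b"] <- 0.8
powerSim(fit, test = fixed("a:b", "z"), nsim = 100)  # raise nsim for real use
```

Note that the fixed-effects formula keeps a * b so the interaction
coefficient "a:b" is still there to test; only the random part uses the
recoded ab factor.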

If your original model takes an hour or more to fit, then it's no
surprise that 5000 simulations * 4 points on the power curve = 20 000
model fits take weeks!
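As a back-of-the-envelope check (assuming, purely hypothetically, one
minute per fit of a simplified model):

```r
fits <- 5000 * 4   # simulations per power-curve point * number of points
mins <- fits * 1   # hypothetical: one minute per model fit
days <- mins / (60 * 24)
days               # about 13.9 days of computation
```

So even a model that fits in a minute keeps you waiting about two
weeks; at an hour per fit the full grid is hopeless without simplifying
the model or cutting the number of simulations.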

Best,

Phillip


PS: Don't name your data frame "data"! There is a built-in function with
that name in R, and if you're not careful, you'll get all sorts of weird
errors.
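A minimal illustration of the kind of confusion this causes: in a fresh
session, "data" already refers to base R's dataset-loading utility, so a
typo that skips creating your own data frame fails cryptically.

```r
## In a fresh R session, before you assign anything to "data":
class(data)   # "function" -- base R's data() utility, not a data frame

## So treating it like a data frame gives the classic cryptic error:
## data$a     # Error: object of type 'closure' is not subsettable
```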
On 16/2/19 2:52 pm, Chi Zhang wrote: