Minimum detectable effect size in linear mixed model
Power analysis is prospective, never retrospective. You already know the results.

Steve Denham
Senior Biostatistical Scientist, Charles River Laboratories
On Friday, July 3, 2020, 07:04:00 PM EDT, Patrick (Malone Quantitative) <malone at malonequantitative.com> wrote:
No, because I don't think it can be. That's not how power analysis works. It's bad practice.
On Fri, Jul 3, 2020, 6:42 PM Han Zhang <hanzh at umich.edu> wrote:
Hi Pat,

Thanks for your quick reply. Yes, I already have the data and the actual effects, and the analysis was suggested by a reviewer. Can you elaborate on when you think such an analysis might be justified?

Thanks!
Han

On Fri, Jul 3, 2020 at 6:34 PM Patrick (Malone Quantitative) <malone at malonequantitative.com> wrote:
Han,

(1) Usually, yes, but . . .

(2) If you have an actual effect, does that mean you're doing post hoc power analysis? If so, that's a whole can of worms, for which the best advice I have is "don't do it." Use the size of the confidence interval of your estimate as an assessment of sample adequacy.

Pat
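A minimal sketch of this interval check, using the sleepstudy model from the post below (method = "Wald" is one choice; profile or bootstrap intervals via confint() would also work):

library(lme4)

# fit the same model as in the original post
model <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

# the width of this interval shows how precisely the sample pins down
# the Days effect; a wide interval signals an imprecise estimate
confint(model, parm = "Days", method = "Wald")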
On Fri, Jul 3, 2020 at 6:27 PM Han Zhang <hanzh at umich.edu> wrote:

Hello,

I'm trying to find the minimum detectable effect size (MDES) given my sample, alpha (.05), and desired power (90%) in a linear mixed model setting. I'm using the simr package for a simulation-based approach. What I did was change the original effect size to a series of hypothetical effect sizes and find the minimum effect size that has a 90% chance of producing a significant result. Below is some toy code:
library(lmerTest)
library(simr)
# fit the model
model <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
summary(model)
Fixed effects:
            Estimate Std. Error     df t value Pr(>|t|)
(Intercept)  251.405      6.825 17.000  36.838  < 2e-16 ***
Days          10.467      1.546 17.000   6.771 3.26e-06 ***
Here is the code for minimum detectable effect size:
# define a set of reasonable effect sizes to scan
es <- seq(0, 10, 2)
pwr <- numeric(length(es))

# loop through the effect sizes
for (i in seq_along(es)) {
  # replace the fitted effect size with the hypothetical one
  fixef(model)['Days'] <- es[i]
  # run a simulation to obtain a power estimate
  pwr.summary <- summary(powerSim(
    model,
    test = fixed('Days', "t"),
    nsim = 100,
    progress = TRUE
  ))
  # store the estimated power (the 'mean' column of the summary)
  pwr[i] <- pwr.summary$mean
}

# display results
cbind(Coefficient = es, Power = pwr)
Output:
     Coefficient Power
[1,]           0  0.09
[2,]           2  0.24
[3,]           4  0.60
[4,]           6  0.99
[5,]           8  1.00
[6,]          10  1.00
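The simulated power crosses the 90% target somewhere between effect sizes 4 and 6, so a finer grid over that bracket would localize the MDES. A rough sketch along the same lines (nsim = 100 is kept small for illustration; a real run would use far more simulations):

# refine the search between the bracketing effect sizes
es.fine <- seq(4, 6, 0.5)
pwr.fine <- numeric(length(es.fine))
for (i in seq_along(es.fine)) {
  fixef(model)['Days'] <- es.fine[i]
  pwr.fine[i] <- summary(powerSim(model,
                                  test = fixed('Days', "t"),
                                  nsim = 100,
                                  progress = FALSE))$mean
}
# smallest scanned effect whose simulated power reaches 0.90 (NA if none)
es.fine[which(pwr.fine >= 0.90)[1]]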
My questions:
(1) Is this the right way to find the MDES?
(2) I have some trouble making sense of the output. Can I say the following: because the estimated power when the effect = 6 is 99%, and because the actual model's estimate is 10.47, the study is sufficiently powered? Conversely, if the actual estimate were 3.0, could I say the study is insufficiently powered?

Thank you,
Han

--
Han Zhang, Ph.D.
Department of Psychology
University of Michigan, Ann Arbor
https://sites.lsa.umich.edu/hanzh/
--
Patrick S. Malone, Ph.D., Malone Quantitative
NEW Service Models: http://malonequantitative.com
He/Him/His
_______________________________________________
R-sig-mixed-models at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models