Hi Sacha,
Correct me if I'm wrong, but I tend to think this is more like a
sensitivity analysis (given alpha, power, and N, solve for the required
effect size). If the minimum detectable effect size at 80% power ends up so
large that it exceeds the typical range in the field (say, a .6
correlation is the minimum whereas a .2 is typically expected), then we may
say the study is underpowered. So I think I made a mistake with question
(2): the MDES should be compared to an effect size of practical
importance, not to the observed effect size.
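For a simple correlation, this kind of sensitivity analysis can even be
done analytically. A minimal sketch with the pwr package, assuming
n = 100 purely for illustration (not a value from this thread):

library(pwr)

# Fix n, alpha, and power; leave r unspecified so pwr.r.test solves for it.
pwr.r.test(n = 100, sig.level = 0.05, power = 0.80)
# Returns r of roughly .28: the smallest correlation detectable at 80%
# power. If the field typically expects r = .2, the study is underpowered.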
Han
On Sat, Jul 4, 2020 at 12:07 PM varin sacha <varinsacha at yahoo.fr> wrote:
Hi,
Is the question about post hoc power analysis?
Post hoc power analyses are usually not recommended (see, for example,
"The Abuse of Power" by Hoenig & Heisey, 2001).
You should do an a priori power analysis. If you then do the small
study and obtain a negative result, you have no idea why; you are stuck.
That is why I always tell people not to do a study where everything rides
on a significant result. It is an unnecessary gamble.
It is always better to carry out an a priori power analysis, so that you
know the Type II error rate and the power in case the test is not
significant.
Also, it is very easy to estimate, a priori, the power for, say, a medium
effect size, so there is little reason not to do that at the outset.
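For example, a minimal sketch with the pwr package, assuming a two-sample
t-test and a "medium" Cohen's d = 0.5 (both illustrative choices):

library(pwr)

# Leave n unspecified so pwr.t.test solves for the required group size.
pwr.t.test(d = 0.5, sig.level = 0.05, power = 0.80, type = "two.sample")
# Returns n of about 64 per group for 80% power at alpha = .05.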
Best,
Sacha
Sent from my iPhone
On Jul 4, 2020, at 01:04, Patrick (Malone Quantitative) <
malone at malonequantitative.com> wrote:
No, because I don't think it can be. That's not how power analysis works.
On Fri, Jul 3, 2020, 6:42 PM Han Zhang <hanzh at umich.edu> wrote:
Hi Pat,
Thanks for your quick reply. Yes, I already have the data and the
effects, and the analysis was suggested by a reviewer. Can you say
when you think such an analysis might be justified?
Thanks!
Han
On Fri, Jul 3, 2020 at 6:34 PM Patrick (Malone Quantitative) <
malone at malonequantitative.com> wrote:
Han,
(1) Usually, yes, but . . .
(2) If you have an actual effect, does that mean you're doing post hoc
power analysis? If so, that's a whole can of worms, for which the
advice I have is "don't do it." Use the size of the confidence
interval of your estimate as an assessment of sample adequacy.
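For example, a quick sketch of that CI check on the sleepstudy model
quoted below (method = "Wald" is just one convenient option):

library(lmerTest)

model <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
# Confidence interval for the Days fixed effect:
confint(model, parm = "Days", method = "Wald")
# If the interval is wide relative to effects of practical interest,
# the sample is too small to pin the effect down.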
Pat
On Fri, Jul 3, 2020 at 6:27 PM Han Zhang <hanzh at umich.edu> wrote:
Hello,
I'm trying to find the minimum detectable effect size (MDES) given my
sample, alpha (.05), and desired power (90%) in a linear mixed model
setting. I'm using the simr package for a simulation-based approach. What
I did is change the original effect size to a series of hypothetical
effect sizes and find the minimum effect size that has a 90% chance of
producing a significant result. Below is some toy code:
library(lmerTest)
library(simr)
# fit the model
model <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
summary(model)
Fixed effects:
Estimate Std. Error df t value Pr(>|t|)
(Intercept) 251.405 6.825 17.000 36.838 < 2e-16 ***
Days 10.467 1.546 17.000 6.771 3.26e-06 ***
Here is the code for minimum detectable effect size:
# define a set of reasonable effect sizes
es <- seq(0, 10, 2)
# preallocate a vector to hold the power estimates
pwr <- rep(NA_real_, length(es))
# loop through the effect sizes
for (i in seq_along(es)) {
  # replace the original effect size with the hypothetical one
  fixef(model)["Days"] <- es[i]
  # run the simulation to obtain a power estimate
  pwr.summary <- summary(powerSim(
    model,
    test = fixed("Days", "t"),
    nsim = 100,
    progress = TRUE
  ))
  # store the estimated power (the "mean" column of the summary)
  pwr[i] <- pwr.summary$mean
}
# display results
cbind(Coefficient = es, Power = pwr)
Output:
Coefficient Power
[1,] 0 0.09
[2,] 2 0.24
[3,] 4 0.60
[4,] 6 0.99
[5,] 8 1.00
[6,] 10 1.00
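To read one number off this table, here is a minimal sketch, assuming
power rises monotonically over the grid:

# smallest tested effect size whose estimated power reaches 90%
es[min(which(pwr >= 0.90))]
# gives 6 here; a finer grid between 4 and 6 would sharpen the answer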
My questions:
(1) Is this the right way to find the MDES?
(2) I have some trouble making sense of the output. Can I say the
following: because the estimated power when the effect = 6 is 99%, and
because the actual model has an estimate of 10.47, the study is
sufficiently powered? Conversely, if the actual estimate were, say,
3.0, could I say the study is insufficiently powered?
Thank you,
Han
--
Han Zhang, Ph.D.
Department of Psychology
University of Michigan, Ann Arbor
https://sites.lsa.umich.edu/hanzh/