
Minimum detectable effect size in linear mixed model

6 messages · Patrick (Malone Quantitative), varin sacha, Han Zhang +2 more

#
No, because I don't think it can be. That's not how power analysis works.
It's bad practice.
On Fri, Jul 3, 2020, 6:42 PM Han Zhang <hanzh at umich.edu> wrote:
#
Hi,

Is the question about post hoc power analysis?

Post hoc power analyses are usually not recommended (see, for example, "The Abuse of Power" by Hoenig & Heisey).
You should do an a priori power analysis. If you instead run a small-sample study and obtain a negative result, you have no idea why: you are stuck.
 
That is why I always tell people not to do a study where everything rides on a significant result.  It is an unnecessary gamble. 

It is always better to perform an a priori power analysis, so that you know the Type II error rate and the power in case the test is not significant.

Also, it is very easy to estimate, a priori, the power for, say, a medium effect size, so there is little reason not to do that at the outset.

Best,
Sacha 

Sent from my iPhone
#
Hi Sacha,

Correct me if I'm wrong, but I tend to think this is more like a
sensitivity analysis (given alpha, power, and N, solve for the required
effect size). If the minimum detectable effect size at 80% power ends up so
large that it exceeds the typical range in the field (say, a .6
correlation is the minimum whereas a .2 is typically expected), then we may
say the study is underpowered. So I think I made a mistake with question
(2) - the MDES should be compared to an effect size with practical
importance, not the observed effect size.
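In code, that sensitivity calculation for a simple correlation test can be sketched with the Fisher z approximation (the n = 20 below is an assumed sample size for illustration; in R, pwr::pwr.r.test does the same calculation):

```python
from math import sqrt, tanh
from statistics import NormalDist

def mdes_correlation(n, alpha=0.05, power=0.80):
    """Minimum detectable correlation for a two-sided test of rho = 0,
    using the Fisher z approximation: SE(atanh(r)) ~ 1 / sqrt(n - 3)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # critical value of the test
    z_pow = NormalDist().inv_cdf(power)           # quantile for desired power
    return tanh((z_crit + z_pow) / sqrt(n - 3))

# With n = 20, the smallest correlation detectable at 80% power is ~.59,
# far above a typical effect of .2, so such a study would be underpowered.
print(round(mdes_correlation(20), 2))  # -> 0.59
```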

Han
On Sat, Jul 4, 2020 at 12:07 PM varin sacha <varinsacha at yahoo.fr> wrote:
#
Dear Han,

As mentioned earlier, a power analysis is only relevant _before_ you do the
study, to avoid running an underpowered one. Doing a post-hoc power
analysis on an underpowered study is putting the cart before the horse.

Once you have done the analysis, look at the confidence intervals of the
estimates.
- non-significant, and the values in the CI are small compared to the practical range: sufficient power
- non-significant, and the values in the CI are similar to or larger than the practical range: underpowered
- significant, and the values in the CI are similar to or larger than the practical range: sufficient power
- significant, and the values in the CI are small compared to the practical range: overpowered
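Those four rules can be written down as a small helper. Note that the cut-off for "small compared to the practical range" (here, the whole CI staying below half the practically relevant effect) is an arbitrary illustrative choice, not a standard:

```python
def interpret_ci(ci_low, ci_high, significant, practical_effect, small_frac=0.5):
    """Classify a study by the four CI rules above.
    'Small' means the whole CI sits below small_frac * practical_effect
    in absolute value; this cut-off is illustrative, not canonical."""
    small = max(abs(ci_low), abs(ci_high)) < small_frac * practical_effect
    if significant:
        return "overpowered" if small else "sufficient power"
    return "sufficient power" if small else "underpowered"

# Non-significant, CI well inside a practical range of 0.5:
print(interpret_ci(-0.05, 0.10, significant=False, practical_effect=0.5))
# -> sufficient power
# Non-significant, CI about as wide as the practical range:
print(interpret_ci(-0.40, 0.45, significant=False, practical_effect=0.5))
# -> underpowered
```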

Note that you should not only vary the coefficient of interest. At least
take the uncertainty of the random-effect variance into account as well,
and don't underestimate its effect on the power. The uncertainty on these
variances can be substantial, especially when the design has a small (<200)
number of levels for the random effect.
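One way to account for the random-effect structure is simulation-based power analysis, sketched below in Python with statsmodels (all the numbers, 15 groups of 10, a slope of 0.5, unit variances, are assumptions for illustration; in R, the simr package does this for lme4 models):

```python
import warnings
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def simulate_power(beta=0.5, n_groups=15, n_per=10, sd_group=1.0,
                   sd_resid=1.0, n_sims=50, alpha=0.05, seed=42):
    """Monte Carlo power for the fixed slope in a random-intercept model."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        g = np.repeat(np.arange(n_groups), n_per)       # group labels
        x = rng.normal(size=g.size)                     # covariate
        y = (beta * x
             + rng.normal(0, sd_group, n_groups)[g]     # random intercepts
             + rng.normal(0, sd_resid, g.size))         # residual noise
        data = pd.DataFrame({"y": y, "x": x, "g": g})
        with warnings.catch_warnings():                 # silence fit warnings
            warnings.simplefilter("ignore")
            fit = smf.mixedlm("y ~ x", data, groups=data["g"]).fit()
        hits += fit.pvalues["x"] < alpha                # count rejections
    return hits / n_sims

power = simulate_power()
print(f"simulated power: {power:.2f}")
```

To reflect Thierry's point, one would also rerun this over a range of plausible sd_group values rather than fixing it at a single number.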

Best regards,

ir. Thierry Onkelinx
Statisticus / Statistician

Vlaamse Overheid / Government of Flanders
INSTITUUT VOOR NATUUR- EN BOSONDERZOEK / RESEARCH INSTITUTE FOR NATURE AND
FOREST
Team Biometrie & Kwaliteitszorg / Team Biometrics & Quality Assurance
thierry.onkelinx at inbo.be
Havenlaan 88 bus 73, 1000 Brussel
www.inbo.be

///////////////////////////////////////////////////////////////////////////////////////////
To call in the statistician after the experiment is done may be no more
than asking him to perform a post-mortem examination: he may be able to say
what the experiment died of. ~ Sir Ronald Aylmer Fisher
The plural of anecdote is not data. ~ Roger Brinner
The combination of some data and an aching desire for an answer does not
ensure that a reasonable answer can be extracted from a given body of data.
~ John Tukey
///////////////////////////////////////////////////////////////////////////////////////////




On Sat, Jul 4, 2020 at 11:27 PM, Han Zhang <hanzh at umich.edu> wrote:

1 day later
#
Dear Han,

I agree with your interpretation of a sensitivity analysis that shows a correlation of .6 would be needed to have the desired power in a situation where .2 would be typical. To achieve desired sensitivity, we could increase sample size, or increase alpha (i.e., go to .10 instead of .05), or we could reduce our desired power (maybe be satisfied with .80 or less instead of .90 or .95), or we could try to increase the effect size, perhaps by using better measures or a more intense treatment. 
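As a sketch of those trade-offs, here is the minimum detectable Cohen's d for a two-sample t test under different alpha/power settings, using statsmodels (the n = 50 per group is an assumed figure; pwr::pwr.t.test gives the same numbers in R):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Minimum detectable d at n = 50 per group, for three alpha/power choices.
results = {(a, p): analysis.solve_power(effect_size=None, nobs1=50,
                                        alpha=a, power=p)
           for a, p in [(0.05, 0.95), (0.05, 0.80), (0.10, 0.80)]}
for (a, p), d in results.items():
    print(f"alpha={a:.2f}, power={p:.2f} -> minimum detectable d = {d:.2f}")
```

Relaxing alpha or lowering the desired power both shrink the detectable effect, exactly the levers described above.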

If we wish to determine an appropriate sample size, we specify alpha, power, and the effect size. Setting the effect size is tricky because we don't know the actual effect. A logical approach is to set the effect size at the smallest value that is considered to be important. If the effect size is larger, we will have even more power. If the effect size is smaller, we don't care much if the result is not statistically significant. 

I used the acronym BEAN to help people remember the four features that are involved with power analysis. 

B = beta error, where power = 1 - beta error
E = effect size
A = alpha error rate
N = sample size

If you know any three, you can compute the fourth. 
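A minimal BEAN sketch (two-sample t test, with d = 0.5 and the other settings purely illustrative): fix any three of the four quantities and solve_power returns the remaining one.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Know E, A, and desired power (1 - B): solve for N (per group).
n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
# Know E, A, and N: solve for the achieved power.
achieved = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=round(n))
print(round(n))            # ~64 per group
print(round(achieved, 2))  # ~0.80
```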

Best 
Sacha

Sent from my iPhone

  
  
#
Power analysis is prospective, never retrospective. You already know the results.
Steve Denham, Senior Biostatistical Scientist, Charles River Laboratories
On Friday, July 3, 2020, 07:04:00 PM EDT, Patrick (Malone Quantitative) <malone at malonequantitative.com> wrote:
No, because I don't think it can be. That's not how power analysis works.
It's bad practice.
On Fri, Jul 3, 2020, 6:42 PM Han Zhang <hanzh at umich.edu> wrote:

            

_______________________________________________
R-sig-mixed-models at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models