Power calculations for lmer mixed designs
Dear Phillip, thank you very much for your good answer - it helped me a lot. Now that I've been through it a few times, I'm still unclear about a few things:
- The qlogis function: I don't understand exactly what it does, or what to enter instead of the 0.7 in the example |beta <- c(qlogis(0.7), -0.2)|
See the paragraph above that code: "baseline range (fraction of grid cells sampled) is 70% (logit(0.7)=0.8473)". qlogis() is the quantile function of the logistic distribution, i.e. it returns the point at which the cumulative distribution function reaches the given probability. Here it is used to compute logit(0.7):
qlogis(0.7)
[1] 0.8472979
which is the value given in the paragraph for logit(0.7). This is just converting the effect size to the link (logit) scale (as mentioned in the introduction: '"beta" is the fixed-effects parameters, in this case (intercept, treat) -- also all on the logit scale.').
Note that this is the intercept term ("baseline range").
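In other words, qlogis() is base R's logit function and plogis() its inverse; a quick check (nothing here is specific to the example, just base R):

qlogis(0.7)           # logit: log(0.7/0.3) = 0.8472979
log(0.7 / (1 - 0.7))  # the same value computed by hand
plogis(0.8472979)     # inverse logit: back to (approximately) 0.7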
My intercept and effect size are already on the logit scale - the intercept is 7.2. But whenever I enter it directly into the qlogis function, I'm warned that NaNs are produced; this happens whenever I enter anything over 1.0. Does this mean I actually need to enter a proportion rather than the absolute value...? Or is the problem an entirely different one?
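For example, with my intercept:

qlogis(7.2)
[1] NaN
Warning message:
In qlogis(7.2) : NaNs produced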
- What exactly do the resulting graphs mean, both here and in my own example? What does the red bar at -0.2 tell me?
Skimming the text, -0.2 is the coefficient for the treatment (already on the link scale -- note that it is not transformed, so it represents log odds). In the explanatory text, "decrease log-odds of range by 0.4", i.e. -0.4, is used -- maybe Ben Bolker can explain why the number changed / what obvious transformation step I'm missing. So the red line represents the "ground truth", i.e. the real parameter value (which you chose), and the other plots show your simulated estimates -- how close they were (see esp. the CI plot).
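As a quick sanity check on the link scale (plain arithmetic, not specific to the example):

exp(-0.2)  # 0.8187: a -0.2 log-odds effect multiplies the odds by ~0.82
exp(-0.4)  # 0.6703: the -0.4 version corresponds to ~33% lower odds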
So if I'm seeing this correctly, this way of calculating power is not a direct one; rather, you're meant to draw your own conclusions from the graphs of the simulations and get a general feel for how far off (or on) your effect size estimates are?
Is there a way to give power as a number between 0 and 1?
- How do I do this for more than one treatment? (My attempts to do it for 4 all failed; I seem to be missing places in the example that I need to change. For example, I tried adding values to the beta/qlogis line above and elsewhere, but it gave errors.)
You calculate your power based on the ability of the models you fit to the simulated datasets to detect the effect. The general workflow is:
1. define a "ground truth" model based on some theoretical assumptions
2. simulate lots of datasets of a certain size
3. fit models to those datasets
4. power = number of models able to detect the effect / total number of models
5. if power is too low, repeat with bigger simulated datasets
A minimal sketch of this workflow follows below.
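This is not the original example's code, just a self-contained illustration for a binomial GLMM with a random intercept per site; all names, sample sizes, and parameter values here are made up:

library(lme4)

simulate_power <- function(nsim = 200, n_site = 20, n_per = 10,
                           beta = c(qlogis(0.7), -0.2), sd_site = 0.5) {
  treat <- rep(0:1, each = n_site / 2)   # one treatment level per site
  dat <- data.frame(site  = factor(rep(seq_len(n_site), each = n_per)),
                    treat = rep(treat, each = n_per))
  pvals <- replicate(nsim, {
    ## steps 1-2: simulate from the "ground truth" model
    ## (linear predictor on the logit scale, plus a site random effect)
    re  <- rnorm(n_site, 0, sd_site)
    eta <- beta[1] + beta[2] * dat$treat + re[as.integer(dat$site)]
    dat$y <- rbinom(nrow(dat), 1, plogis(eta))
    ## step 3: refit the model to the simulated data
    fit <- glmer(y ~ treat + (1 | site), data = dat, family = binomial)
    coef(summary(fit))["treat", "Pr(>|z|)"]
  })
  ## step 4: power = fraction of fits detecting the effect at alpha = 0.05
  mean(pvals < 0.05)
}

simulate_power(nsim = 100)

The last line answers your first question directly: power comes out as a single number between 0 and 1, the proportion of simulated datasets in which the effect was detected (here at alpha = 0.05).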
So you can't properly use this same approach with 4 effects? The author implies that you can, but I'm getting the impression it'd be quite complicated... I'm not sure running each effect separately would be correct.
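One guess at what went wrong (I can't see your code, so this is an assumption): beta must have exactly one entry per fixed-effect coefficient. With a 4-level treatment factor and R's default treatment contrasts, that means 4 entries, e.g.:

## hypothetical 4-level treatment factor; the effect values are made up
beta <- c(qlogis(0.7), -0.2, -0.4, 0.1)  # intercept + 3 treatment contrasts
## in the sketch above, with dat$treat as a factor, the linear predictor
## would then become
## eta <- model.matrix(~ treat, dat) %*% beta + re[as.integer(dat$site)]

A mismatch between the length of beta and the number of model-matrix columns is a typical source of the kind of errors you describe.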
There are also "analytic" tools that try to compute power without simulation; see for example Jake Westfall's Shiny apps (and read his papers on power issues in mixed-effects models!): https://jakewestfall.shinyapps.io/pangea/ https://jakewestfall.shinyapps.io/crossedpower/
I'll try those, thanks very much!
Best, Diana