power analysis is applicable or not

2 messages · array chip, David Winsemius

On Nov 12, 2013, at 7:42 PM, array chip wrote:

You said that: "Now one of the reviewers for the manuscript did a power analysis for the Mantel-Haenszel test showing that with the sample sizes I have, the power for the Mantel-Haenszel test was only 50%. So he argued that I did not have enough power for the Mantel-Haenszel test."

This is rather interesting in its own right. Generally, if you find that the p-value is exactly 0.05, then you are at the point where the post-hoc power (the power calculated on the basis of the observed differences and variances) will be 0.50. In other words, if you are right at the tipping point, then a small perturbation in the data will tip you either way. And yet you said the p-value was > 0.05 (although you didn't say how much greater), so I would say your power was certainly less, and possibly materially less, than 0.50. (This is dodging the question of power to detect exactly "what?" So far we have neither a discussion of the underlying scientific question nor any specifics.)
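The p = 0.05 / post-hoc-power = 0.50 relation described above can be checked numerically. This sketch uses a two-sided z-test as a stand-in (not the Mantel-Haenszel test itself), and the "post-hoc power" is computed by treating the observed effect as the true effect, which is exactly the circularity that makes post-hoc power uninformative:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal

alpha = 0.05
z_crit = nd.inv_cdf(1 - alpha / 2)  # two-sided critical value, about 1.96

# Suppose the observed test statistic lands exactly at the critical value,
# i.e. the observed two-sided p-value is exactly 0.05.
z_obs = z_crit

# Post-hoc power: probability that a replicate study, whose statistic is
# centered on the observed effect z_obs, again exceeds the critical value
# in either tail.
power = (1 - nd.cdf(z_crit - z_obs)) + nd.cdf(-z_crit - z_obs)

print(round(power, 3))  # -> 0.5
```

The first term is 1 - Φ(0) = 0.5 and the second tail contributes almost nothing, so the post-hoc power sits essentially at 0.50 whenever the observed p-value sits exactly at the 0.05 threshold; a p-value above 0.05 pushes it below 0.50.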
(That was incoherent.)
That is just wrong.
The question is whether you are justified in ignoring (or leaving out of the analysis) covariates that you thought a priori had a good chance of confounding the relationship of the predictors of interest with the outcome of interest. It appears that you have insufficient justification for that. (And some statisticians of excellent repute would say you never have justification to do so, regardless of any testing.)