
Power analysis for MANOVA?

2 messages · Rick Bilonick, Adam D. I. Kramer

On Wed, 2009-01-28 at 21:21 +0100, Stephan Kolassa wrote:
The point of the article was that doing a so-called "retrospective"
power analysis leads to logical contradictions with respect to the
confidence intervals and p-values from the analysis of the data. In
other words, DON'T DO IT! All the information is contained in the
confidence intervals, which are based on the observed data - an
after-the-fact "power analysis" cannot provide any insight - it's not
data analysis.

Rick B.

Hi Rick,

 	I understand the authors' point and also agree that post-hoc power
analysis tells me basically nothing more than the p-value and initial
statistic for the test I am interested in computing power for.

 	Beta is a simple function of alpha, p, and the statistic.
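For concreteness, here is a minimal sketch of that dependence for an F-type test (the thread itself shows no code, and Python rather than R is used here purely for illustration). It treats the observed F as a plug-in estimate of the noncentrality parameter, which is exactly why "observed power" is just a monotone re-expression of the statistic:

```python
from scipy import stats

def observed_power(F_obs, df1, df2, alpha=0.05):
    """Post-hoc ("observed") power for an F test.

    Plug-in noncentrality estimate: lambda = F_obs * df1.  For fixed
    df and alpha this is a monotone function of F_obs alone, so it
    carries no information beyond the p-value."""
    lam = F_obs * df1                              # plug-in noncentrality
    F_crit = stats.f.ppf(1 - alpha, df1, df2)      # critical value
    return stats.ncf.sf(F_crit, df1, df2, lam)     # P(F' > F_crit)
```

Beta is then simply 1 minus this quantity, confirming the point above: alpha, the statistic, and the degrees of freedom fully determine it.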

 	My intention is, as I mentioned in my response to Stephan Kolassa,
to transform my p-value and statistic into a form of effect size: the
sample size necessary to attain significance at alpha=.05. This
communicates no additional information; it is just a mathematical
re-representation of my data in a way I believe my readers will find
more informative and useful. In other words, there is no more
information *encoded*, but there is more information *communicated,*
just like for any effect size measure.
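One way to carry out that re-representation can be sketched as follows (an assumption on my part, since the thread does not say which test is involved): take the observed Cohen's f² and search for the smallest N at which the implied F statistic, F = f² · v/u with v = N − u − 1 (the fixed-effects convention in Cohen, 1988), would just cross the critical value.

```python
from scipy import stats

def n_for_significance(f2, u, alpha=0.05, n_max=100000):
    """Smallest total N at which an observed effect size f2 would just
    reach significance in a fixed-effects F test with u numerator df.

    Implied statistic at sample size N: F = f2 * v / u, v = N - u - 1.
    Hypothetical illustration; the exact mapping depends on the design."""
    for n in range(u + 3, n_max):                  # need v >= 2
        v = n - u - 1
        F_implied = f2 * v / u
        F_crit = stats.f.ppf(1 - alpha, u, v)
        if F_implied >= F_crit:
            return n
    return None                                    # not reached by n_max
```

Because F_crit shrinks toward a constant while the implied F grows linearly in v, any nonzero f² eventually reaches significance, which is also why this number is purely a re-expression of the observed effect size.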

 	If you have any suggestions for a more reliable effect size for
MANOVA that is *also* commonly known in the social psychology community
(e.g., a correlation or Cohen's d analogue), I'm interested--but the
multivariate nature of the beast makes these more or less impossible to
translate.

 	The poster I was asking for is now printed, and we reported the
multivariate R-squared using the techniques in Cohen (1988), though I'm
expecting to spend a lot of time explaining what that means to people in a
multivariate context, rather than describing the results of the study.
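The thread does not quote the formulas, so the following is an assumed reading of Cohen (1988)'s set-correlation chapter: multivariate R² taken as 1 − Λ (Wilks' lambda), with the companion effect size f² = Λ^(−1/s) − 1, where s depends on the number of dependent variables p and hypothesis variables q. A sketch:

```python
import math

def multivariate_r2(wilks_lambda):
    """Multivariate R^2 as 1 - Wilks' Lambda (set-correlation reading
    of Cohen, 1988; verify against your own source)."""
    return 1.0 - wilks_lambda

def cohens_f2_multivariate(wilks_lambda, p, q):
    """Cohen's f^2 for set correlation: f2 = Lambda**(-1/s) - 1, with
    s = sqrt((p^2 q^2 - 4) / (p^2 + q^2 - 5)) when p^2 * q^2 > 4,
    else s = 1.  p = number of DVs, q = hypothesis df."""
    if p * p * q * q > 4:
        s = math.sqrt((p * p * q * q - 4) / (p * p + q * q - 5))
    else:
        s = 1.0
    return wilks_lambda ** (-1.0 / s) - 1.0
```

With a single DV and single predictor this collapses to the familiar univariate f² = R²/(1 − R²), which may be the easiest way to explain the multivariate version to a univariate-minded audience.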

Cordially,
Adam D. I. Kramer
On Sun, 1 Feb 2009, Rick Bilonick wrote: