P value for a large number of degrees of freedom in lmer
On Tue, Nov 23, 2010 at 4:09 PM, Jonathan Baron <baron at psych.upenn.edu> wrote:
For the record, I have to register my disagreement. In the experimental sciences, the name of the game is to design a well-controlled experiment, which means that the null hypothesis will be true if the alternative hypothesis is false. People who say what is below, which includes almost everyone who responded to this post, have something else in mind. What they say is true in most disciplines. But when I hear this sort of thing, it is like someone telling me that my research career as an EXPERIMENTAL psychologist has been some sort of delusion.
I would not take it that way. I agree there is a difference between an arbitrary null of no difference and a well-designed control, but in either case the null is a specific point hypothesis. Given a continuous distribution, the probability of any single constant occurring to infinite decimal places is infinitesimally small. Even with only 100,000 observations:
dt(.49, df = 10^5) - dt(.5, df = 10^5)
[1] 0.001747051

Your career as an experimental psychologist is not a delusion; null hypothesis statistical testing is---even with a perfect control, we set up an unrealistic hypothesis. Now if we could set up the null as an interval....
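One way an interval null could be operationalized is the two one-sided tests (TOST) equivalence procedure. The sketch below is illustrative only: the equivalence bound `delta` and the simulated data are assumptions I am making here, not anything proposed in this thread.

```r
# Sketch of an interval null H0: |mu| >= delta via two one-sided tests (TOST).
# delta (the smallest mean we would consider meaningful) is a made-up choice.
set.seed(1)
x     <- rnorm(10^5, mean = 0.001)  # data with a tiny, practically null, true mean
delta <- 0.05                       # interval null bound: "equivalent" means |mu| < delta

p_lower <- t.test(x, mu = -delta, alternative = "greater")$p.value
p_upper <- t.test(x, mu =  delta, alternative = "less")$p.value
p_tost  <- max(p_lower, p_upper)    # small p_tost -> reject non-equivalence
p_tost
```

With the interval null, a huge sample works for you rather than against you: more data makes it easier to show the effect is inside the interval of practical equivalence, instead of guaranteeing rejection of a point null.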
If you have a very large sample and you are doing a correlational study, yes, everything will be significant. But if you do the kind of experiment we struggle to design, with perfect control conditions, you won't get significant results (except by chance) if your hypothesis is wrong.
I agree that this is typically a bigger problem for correlational studies, but if it became practical to run well-controlled experiments on millions of participants, I suspect p-values would be disregarded awfully quickly:

x <- rnorm(10^6, mean = 0)
y <- rnorm(10^6, mean = .01)
t.test(x, y, var.equal = TRUE)

Even then, the study would not be pointless or a delusion: that kind of precision lets you talk confidently about the actual effect your treatment had compared to your well-designed control, and would give any applied person or practitioner a good guide to what to expect if they implemented it in the field.

Best regards,

Josh (fan of experiments, correlational studies, & psychology... not so much of NHST, but you use what you have)
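The example above can be turned into a quick simulation showing how the p-value for the same fixed, trivially small effect shrinks as the sample grows. This is a sketch; the effect size of .01 is taken from the `t.test` example, and the seed is an arbitrary choice of mine.

```r
# p-values for a fixed, tiny true mean difference (.01) at increasing sample sizes.
set.seed(42)
for (n in c(10^3, 10^4, 10^5, 10^6)) {
  x <- rnorm(n, mean = 0)
  y <- rnorm(n, mean = .01)
  p <- t.test(x, y, var.equal = TRUE)$p.value
  cat(sprintf("n = %7d  p = %.4f\n", n, p))
}
```

At n = 10^6 the difference is all but guaranteed to be "significant", even though a .01 standard-deviation difference is negligible for most practical purposes; the confidence interval from `t.test`, not the p-value, is what carries the useful information.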
Jon

On 11/24/10 07:59, Rolf Turner wrote:
It is well known amongst statisticians that having a large enough data set will result in the rejection of *any* null hypothesis, i.e. will result in a small p-value. There is no "bias" involved.
-- Jonathan Baron, Professor of Psychology, University of Pennsylvania Home page: http://www.sas.upenn.edu/~baron Editor: Judgment and Decision Making (http://journal.sjdm.org)
_______________________________________________ R-sig-mixed-models at r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
Joshua Wiley Ph.D. Student, Health Psychology University of California, Los Angeles http://www.joshuawiley.com/