Message-ID: <23779047.post@talk.nabble.com>
Date: 2009-05-29T12:27:19Z
From: Ben Bolker
Subject: how do I decide the best number of observations in MLE?
In-Reply-To: <b1f16d9d0905281951l26b2404ag86ef6b29aba9d5f1@mail.gmail.com>
losemind wrote:
>
> Of course, in MLE, if we collect more and more observations, the MLE
> will perform better and better.
>
> But is there a way to bound the parameter estimation error in
> order to decide when to stop collecting data/observations? Say, 1000
> observations are great, but 500 observations are good enough...
>
>
This general area is called _power analysis_. There is no
sharp cutoff, but people often pick some threshold, such
as reducing the probability of a type II error below 20%
(i.e. power = 80%), or specifying some cutoff on the standard
error of a parameter. For simple cases there are analytic
solutions (see apropos("power")); for more complex cases this
generally has to be done by simulation.
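A minimal sketch of both approaches (my illustration, not from the
original thread; the true rate, target standard error, and sample
sizes below are arbitrary choices):

```r
## Analytic case: built-in power functions solve for n directly.
power.t.test(delta = 0.5, sd = 1, power = 0.8)

## Simulation case: watch how the simulated standard error of an MLE
## shrinks with n, and stop collecting once it falls below a target.
## Example: estimating the rate of an exponential distribution, where
## the MLE is 1/mean(x).
true.rate <- 2     # assumed "true" parameter for the simulation
target.se <- 0.1   # your chosen precision threshold
nsim <- 500
for (n in c(100, 250, 500, 1000)) {
  est <- replicate(nsim, {
    x <- rexp(n, rate = true.rate)
    1/mean(x)      # MLE of the exponential rate
  })
  cat("n =", n, " simulated SE =", round(sd(est), 3), "\n")
}
```

The same loop works for any model you can simulate from and fit: swap
in your data-generating step and your fitting step, and pick the
smallest n whose simulated SE (or power) meets your threshold.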
--
View this message in context: http://www.nabble.com/how-do-I-decide-the-best-number-of-observations-in-MLE--tp23773503p23779047.html
Sent from the R help mailing list archive at Nabble.com.