Of course, in MLE, if we collect more and more observations, the MLE will perform better and better. But is there a way to bound the parameter-estimate error so as to decide when to stop collecting data/observations, say, 1000 observations would be great, but 500 are already good enough... Thanks!
How do I decide the best number of observations in MLE?
2 messages · Michael, Ben Bolker
losemind wrote:
Of course, in MLE, if we collect more and more observations, the MLE will perform better and better. But is there a way to bound the parameter-estimate error so as to decide when to stop collecting data/observations, say, 1000 observations would be great, but 500 are already good enough...
This general area is called _power analysis_. There is no
sharp cutoff, but people often pick some threshold, such
as reducing the probability of a type II error below 20%
(i.e., power = 80%), or specifying some cutoff on the
standard error of a parameter. For simple cases there are
analytic solutions (see apropos("power") in R); for more
complex cases this generally has to be done by simulation.
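The simulation approach mentioned above can be sketched as follows. This is a minimal illustration (in Python rather than R, purely for portability; the Exponential model, the true rate of 2.0, and the replication counts are all made-up assumptions, not anything from the thread). It estimates the sampling standard error of an MLE by Monte Carlo at n = 500 and n = 1000, showing that the error shrinks like 1/sqrt(n), so doubling the data buys only about a 1.41-fold reduction:

```python
import random
import statistics

def mle_se(n, true_rate=2.0, n_rep=2000, rng=random.Random(0)):
    """Monte Carlo estimate of the standard error of the MLE of an
    Exponential rate, based on samples of size n.

    For i.i.d. Exponential(rate) data, the MLE of the rate is
    1 / (sample mean); asymptotic theory says its standard error
    is roughly rate / sqrt(n)."""
    estimates = []
    for _ in range(n_rep):
        xs = [rng.expovariate(true_rate) for _ in range(n)]
        estimates.append(1.0 / (sum(xs) / n))
    return statistics.stdev(estimates)

se_500 = mle_se(500)
se_1000 = mle_se(1000)
# The ratio se_500 / se_1000 should be near sqrt(2) ~ 1.41,
# reflecting the 1/sqrt(n) rate of improvement.
print(se_500, se_1000, se_500 / se_1000)
```

In practice one would run this over a grid of n, then pick the smallest n whose simulated standard error falls below the chosen cutoff, which is exactly the "specify a cutoff on the standard error" criterion from the reply.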
View this message in context: http://www.nabble.com/how-do-I-decide-the-best-number-of-observations-in-MLE--tp23773503p23779047.html
Sent from the R help mailing list archive at Nabble.com.