how do I decide the best number of observations in MLE?

2 messages · Michael, Ben Bolker

#
Of course, in MLE, as we collect more and more observations, the
estimates will perform better and better.

But is there a way to put a bound on the parameter estimate errors in
order to decide when to stop collecting data/observations? Say 1000
observations are great, but 500 observations are good enough...

Thanks!
#
losemind wrote:
This general area is called _power analysis_.  There is no
sharp cutoff, but people often pick some threshold (such
as reducing the probability of type II errors below 20%,
i.e. power = 80%, or specifying some cutoff on the standard
error of a parameter ...).  For simple cases there are analytic
solutions (in R, see apropos("power")); for more complex cases this
generally has to be done by simulation.
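A minimal sketch of the simulation approach (in Python for illustration; the thread's context is R, where functions like power.t.test cover the analytic cases). It assumes the simplest possible model, the MLE of a normal mean, and picks the smallest candidate sample size whose simulated standard error falls below a chosen cutoff; the function names and candidate sizes are my own for the example.

```python
import math
import random

def se_of_mean(n, sigma=1.0, n_sim=2000, seed=0):
    """Monte Carlo standard error of the MLE of a normal mean
    (the sample mean) at sample size n."""
    rng = random.Random(seed)
    estimates = [
        sum(rng.gauss(0.0, sigma) for _ in range(n)) / n
        for _ in range(n_sim)
    ]
    mean = sum(estimates) / n_sim
    var = sum((e - mean) ** 2 for e in estimates) / (n_sim - 1)
    return math.sqrt(var)

def smallest_n(target_se, candidates=(100, 250, 500, 1000), sigma=1.0):
    """Smallest candidate sample size whose simulated SE
    meets the target; None if none of them do."""
    for n in candidates:
        if se_of_mean(n, sigma) <= target_se:
            return n
    return None
```

For this toy model the answer is known analytically (SE = sigma/sqrt(n), so a cutoff of 0.05 with sigma = 1 needs n >= 400), which makes it a useful check on the simulation; for a model without a closed-form SE, the same loop works with the fitting step replaced by the actual MLE.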