Problems with Optimization
On Wed, 20 Dec 2006, Tobias wrote:
Dear R-helpers, I am having the following problem. Let P be an observed quantity, F(...) a function describing P, and e = P - F(...) the error. F(...) is essentially a truncated mean whose value is obtained by integrating a six-parameter probability density from some value X to infinity. That is what usually causes the problem: for certain parameter values the integral diverges to infinity very quickly, which the optimization algorithm cannot handle. At least nlm() and some of the optim() algorithms can't. The default optim() algorithm appears to cope (though it takes very long to converge), as does nlminb().
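For concreteness, the failure mode can be reproduced with a one-parameter stand-in (hypothetical; the poster's six-parameter density is not shown). Once the tail integral diverges, integrate() stops with an error rather than returning a value, so an objective that passes the result straight through never gets back to the optimizer:

```r
## Tail integral of a hypothetical integrand exp((theta - 1) * x):
## finite for theta < 1, divergent for theta >= 1, where integrate()
## errors instead of returning a number.
bad.tail <- function(theta) {
  integrate(function(x) exp((theta - 1) * x), lower = 1, upper = Inf)$value
}

bad.tail(0.5)              # finite: 2 * exp(-0.5)
try(bad.tail(2))           # integral diverges: integrate() throws an error
```

An optimizer exploring the parameter space freely will sooner or later evaluate the objective at a theta like 2, and the raw error aborts the whole optimization.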
From the help page
Function 'fn' can return 'NA' or 'Inf' if the function cannot be
evaluated at the supplied value, but the initial value must have a
computable finite value of 'fn'. (Except for method '"L-BFGS-B"'
where the values should always be finite.)
so you are not being fair to the R developers (who were kind enough to
both implement and document this).
My question is thus not really about which algorithm to use, but rather whether there is an 'on error ... do ...' catcher in R. I have had a look at try() but am not quite sure it is what I am looking for. In plain English, I am looking for a command that lets me specify: if the integral goes to infinity, skip those parameter values and simply continue optimizing in another direction.
Given that the underlying algorithms are in C, not R, this is exactly what returning NA asks them to do.
Is this possible? How do you guys handle situations like this?
In the documented way, returning NA.
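Concretely, the documented approach amounts to wrapping the integration in tryCatch() and mapping any error or non-finite value to NA. A minimal sketch follows; the two-parameter gamma density and the target value P are illustrative stand-ins, not the poster's six-parameter model:

```r
## Objective that returns NA whenever the tail integral cannot be
## evaluated (error or non-finite result), as documented for optim().
## The gamma density and P below are made-up stand-ins.
P <- 1.5   # observed quantity (illustrative)

obj <- function(par) {
  tail.mean <- tryCatch(
    integrate(function(x) x * dgamma(x, shape = par[1], rate = par[2]),
              lower = 1, upper = Inf)$value,
    error = function(e) NA_real_    # integrate() failed: signal NA
  )
  if (!is.finite(tail.mean)) return(NA)  # optimizer steps elsewhere
  (P - tail.mean)^2
}

obj(c(-1, 1))               # NA: dgamma() is NaN for a negative shape
fit <- optim(c(2, 1), obj)  # default Nelder-Mead copes with the NAs
```

Note that the starting value c(2, 1) must yield a finite objective, as the help page quoted above requires; only subsequent trial points may return NA.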
Brian D. Ripley, ripley at stats.ox.ac.uk
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK, Fax: +44 1865 272595