
optimization problem

4 messages · Hans W Borchers, Mike Prager

tedzzx <zengzhenxing at gmail.com> wrote:
I sometimes use optim() within a loop, with random starting
values for each iteration of the loop. You can save the
objective function value each time and pick the best solution.
Last time I did that, I ran it 100 times.

That procedure does not guarantee finding the global minimum.
However, it does make it *more likely* to find the global minimum
*within the range of your starting values*.

Often, I make a boxplot of the various results. If they don't
show a strong mode, there is a data or model problem that needs
to be addressed. For example, the solution may be poorly defined
by the data, or the model may be specified with confounded
parameters.
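A minimal sketch of this multistart procedure. The objective (a two-parameter Rosenbrock function) and the range of the random starts are illustrative assumptions, not the original problem:

```r
# Multistart sketch: repeated optim() runs from random starting values.
# The objective (2-D Rosenbrock) and the start range [-2, 2] are
# illustrative assumptions only.
fn <- function(p) (1 - p[1])^2 + 100 * (p[2] - p[1]^2)^2

set.seed(1)
n_starts <- 100
results <- replicate(n_starts, {
  start <- runif(2, min = -2, max = 2)   # random start within a chosen range
  fit <- optim(start, fn)                # default Nelder-Mead
  c(fit$par, value = fit$value)
})

best <- results[, which.min(results["value", ])]  # best solution found
boxplot(results["value", ], main = "Objective values over 100 starts")
```

If the boxplot shows most runs piling up at one objective value, that value is a plausible candidate for the minimum within the start range.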
Why not use one of the global optimizers in R, for instance 'DEoptim', and
then apply optim() to find the last six decimals? I am relatively sure that
the Differential Evolution operator has a better chance of coming near a
global optimum than a loop over optim(), though 'DEoptim' may be a bit slow
when the number of parameters is quite large.
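The two-stage idea can be sketched as follows, assuming the 'DEoptim' package is installed; the Rastrigin function is only an illustrative multimodal objective, not the poster's problem:

```r
library(DEoptim)

# Illustrative multimodal objective (Rastrigin; global minimum 0 at the origin)
rastrigin <- function(p) 10 * length(p) + sum(p^2 - 10 * cos(2 * pi * p))

set.seed(42)
# Stage 1: Differential Evolution for a global search over a box
de <- DEoptim(rastrigin, lower = c(-5, -5), upper = c(5, 5),
              control = DEoptim.control(itermax = 200, trace = FALSE))

# Stage 2: polish "the last six decimals" with a local optimizer
fit <- optim(de$optim$bestmem, rastrigin, method = "BFGS")
```

The local polish matters because population-based methods locate the right basin quickly but converge slowly to high precision within it.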

Regards,  Hans Werner
Mike Prager wrote:

"Hans W. Borchers" <hwborchers at googlemail.com> wrote:
Thanks for the reference. I will see if 'DEoptim' might be
useful in future problems.

HWB asked, why not use 'DEoptim' rather than a loop? Perhaps
that's a rhetorical question, but I'll answer it anyway, in the
context of the specific problem I am solving. (1) I did not know
that 'DEoptim' existed. (2) After starting a problem with 'nls',
I changed its structure slightly, which meant a change to
'optim'. Because the two functions have totally different
syntaxes, it was necessary to rewrite the entire script and its
supporting functions. Adding a loop was much simpler than
looking for yet *another* optimizer in R. (3) In the current
problem, perhaps 97 of 100 runs of 'optim' come to the same
solution (the best one found). That suggests that this is not a
terribly difficult problem and that there is little to be gained
by employing a different approach.

SOMEONE once posted about an R function that masked the syntax
differences among (at least some) R optimizers. That surely
would lower the barrier to switching among them. I've lost that
post, and my search has not turned it up. If that poster is
reading this, would you please respond with the information?

ALSO, is anyone aware of any document comparing the various
optimizers available in R (even in core R)?  What are the
different intended applications, and when would each be
preferred? There is some helpful material in MASS 4, but I am
hoping for something more recent and detailed.
In case anyone is still reading this thread, I want to add this:
In a current problem (a data-shy five-parameter nonlinear
optimization), I found "nlminb" markedly more reliable than
"optim" with method "L-BFGS-B". In reviewing the fit I made, I
found that "optim" only came close to its own minimum in about
13 of 120 trials (same data, different starting values). I
previously said 97, but I was clearly looking at the wrong data!
In contrast, "nlminb" came to that best answer in about 92
trials out of 120.

The original poster might consider "nlminb" instead of "optim".
Because nonlinear optimization is sensitive to starting values,
I would still advise solving the problem a number of times to
see if a clear minimum solution emerges.
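For comparison, the two call syntaxes side by side on a hypothetical toy objective with a known minimum (not the five-parameter problem from the thread):

```r
# Toy objective with known minimum at (1, 2); purely illustrative.
fn <- function(p) sum((p - c(1, 2))^2)

# optim() needs method = "L-BFGS-B" to accept box constraints...
fit_optim <- optim(c(0, 0), fn, method = "L-BFGS-B",
                   lower = c(-5, -5), upper = c(5, 5))

# ...while nlminb() takes lower/upper directly (PORT routines)
fit_nlminb <- nlminb(c(0, 0), fn, lower = c(-5, -5), upper = c(5, 5))

# On this easy problem both agree; on harder problems, rerunning each
# from many random starts is the safer check.
```

Note the result lists also differ: optim() reports the objective as `$value`, nlminb() as `$objective`, so switching optimizers means adjusting the downstream code as well.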