
Suggestion for the optimization code

6 messages · Mathieu Ribatet, Duncan Murdoch, Brian Ripley +2 more

#
Dear list,

Here's a suggestion about the various optimization codes. There are 
several optimization procedures in the base package (optim, optimize, 
nlm, nlminb, ...). However, the outputs of these functions are slightly 
different. For instance,

   1. optim returns a list with components par (the estimates), value
      (the minimum/maximum of the objective function), and convergence
      (the convergence code)
   2. optimize returns a list with components minimum (or maximum)
      giving the estimate, and objective (the value of the objective
      function)
   3. nlm returns a list with components minimum (the minimum of the
      objective function), estimate (the estimates), and code (the
      convergence code)
   4. nlminb returns a list with components par (the estimates),
      objective, convergence (the convergence code), and evaluations

Furthermore, optim keeps the names of the parameters while nlm, nlminb 
don't.
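For concreteness, the divergence is easy to see by running each optimizer on the same one-dimensional problem. This is a minimal sketch using only base R; the commented names reflect the documented return values:

```r
# Minimize f(x) = (x - 2)^2 with each base optimizer and compare
# the names of the components in the returned lists.
f <- function(x) (x - 2)^2

res_optim    <- optim(par = 0, fn = f, method = "BFGS")
res_optimize <- optimize(f, interval = c(-10, 10))
res_nlm      <- nlm(f, p = 0)
res_nlminb   <- nlminb(start = 0, objective = f)

names(res_optim)    # "par" "value" "counts" "convergence" "message"
names(res_optimize) # "minimum" "objective"
names(res_nlm)      # "minimum" "estimate" "gradient" "code" "iterations"
names(res_nlminb)   # "par" "objective" "convergence" "iterations"
                    # "evaluations" "message"
```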
I believe it would be nice if all these optimizers had a 
homogenized output. This would make it easier to write functions that 
can call different optimizers. Obviously, we can each write our own 
function that homogenizes the output after calling the optimizer, but I 
still believe a common format would be more user-friendly.

Do you think this is a reasonable feature to implement, even though it 
isn't an important point?
Best,
Mathieu

* BTW, if this is relevant, I could try to do it.
#
On 8/8/2008 8:56 AM, Mathieu Ribatet wrote:
Unfortunately, changing the names within the return value would break a 
lot of existing uses of those functions.  Writing a wrapper to 
homogenize the output is probably the right thing to do.
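Such a wrapper could look roughly like the sketch below. The name homogenize_optim is hypothetical (not part of base R), and the mapping of components follows the documented return values of each optimizer:

```r
# Sketch of a wrapper that calls one of several base-R optimizers and
# maps the result onto a single common shape: par, value, convergence.
homogenize_optim <- function(fn, start,
                             method = c("optim", "nlm", "nlminb")) {
  method <- match.arg(method)
  raw <- switch(method,
    optim  = optim(par = start, fn = fn, method = "BFGS"),
    nlm    = nlm(fn, p = start),
    nlminb = nlminb(start = start, objective = fn))
  # Note: the raw convergence codes are NOT comparable across
  # optimizers (0 signals success for optim and nlminb, while nlm
  # uses codes 1-2); a real wrapper would translate them too.
  switch(method,
    optim  = list(par = raw$par, value = raw$value,
                  convergence = raw$convergence),
    nlm    = list(par = raw$estimate, value = raw$minimum,
                  convergence = raw$code),
    nlminb = list(par = raw$par, value = raw$objective,
                  convergence = raw$convergence))
}
```

With this shape, downstream code can read result$par and result$value regardless of which backend was chosen.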

Duncan Murdoch
#
On Fri, 8 Aug 2008, Mathieu Ribatet wrote:

This would be essentially impossible without breaking most existing code, 
and in the case of optimize() and nlminb() that goes back many years to 
uses in S(-PLUS).

#
Duncan Murdoch wrote:
And potentially to harmonize inputs. The MLInterfaces package 
(Bioconductor) has done this for many machine learning algorithms, 
should you want an example to look at.

   Robert

#
Ok, please consider it as a bad call.
Thanks for your answers.
Best,
Mathieu

Prof Brian Ripley wrote:

1 day later
#
Mathieu Ribatet <mathieu.ribatet <at> epfl.ch> writes:
Well, I don't think it's a _bad_ call; I think the
underlying wish (more flexibility in moving between
existing optimizers without changing the objective
function, calls, etc.) is valid -- but can really
only be achieved at this point by writing a wrapper
(Optimize(), or is that too confusing?), because
of backward compatibility issues.  I would also
like to see a more open framework in optim() [or
elsewhere], where one can more easily plug in
alternative optimization procedures.

  My version of mle (mle2, in bbmle) does something
like this, but in an ad hoc way -- it can use optim,
nlm, nlminb, or constrOptim as an optimization backend.
(I will also take a look at Robert Gentleman's code,
now ...)

  Ben Bolker