
supplying gradient to constrOptim()

2 messages · Roger D. Peng, Thomas Lumley

Hi, I'm very interested in using the constrOptim() function currently in
the R-devel sources.  In particular, I'm trying to fit point process
conditional intensity models via maximum likelihood.  However, I noticed
that the gradient of the objective function must be supplied for all but
the Nelder-Mead method.  I was wondering why this is the case, since
optim() itself does not require a gradient function for the BFGS or
conjugate gradient methods.
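To make the contrast concrete, here is a minimal sketch (not from the original message) using a toy quadratic objective with nonnegativity constraints; the objective, gradient, and constraint matrices below are all hypothetical illustrations:

```r
f  <- function(x) sum((x - c(2, 1))^2)   # toy objective: quadratic bowl at (2, 1)
gr <- function(x) 2 * (x - c(2, 1))      # its analytic gradient

## Linear constraints ui %*% x >= ci, here x1 >= 0 and x2 >= 0
ui <- diag(2)
ci <- c(0, 0)

## optim() approximates the gradient by finite differences if none is given:
fit1 <- optim(c(1, 1), f, method = "BFGS")

## constrOptim() with method = "BFGS" requires an explicit grad argument:
fit2 <- constrOptim(c(1, 1), f, grad = gr, ui = ui, ci = ci, method = "BFGS")
```

Both fits should recover the unconstrained optimum (2, 1), which lies in the interior of the feasible region.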

Thanks,

-roger
_______________________________
UCLA Department of Statistics
rpeng@stat.ucla.edu
http://www.stat.ucla.edu/~rpeng
On Fri, 17 Jan 2003, Roger Peng wrote:

In order not to need explicit gradients you need to be able to compute
them with finite differences. For general constrained optimisation the
points used for the finite differences should be inside the constraint
region (and problems will occur with the log barrier if they aren't).
With the built-in L-BFGS-B this isn't a problem -- you can find points
inside the `box' constraints to compute finite differences.  With
arbitrary linear constraints it is hard to find points in the feasible
region to compute finite difference approximations to the gradient.
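The failure mode is easy to demonstrate. constrOptim() minimises a log-barrier objective of the form f(x) - mu * sum(log(ui %*% x - ci)); a finite-difference step taken from a point close to a constraint boundary can land outside the feasible region, where the log of a negative slack is NaN. A minimal sketch (toy objective and mu value are hypothetical):

```r
ui <- diag(2); ci <- c(0, 0); mu <- 1e-4

## The log-barrier objective used internally by constrOptim():
barrier <- function(x) sum((x - c(2, 1))^2) - mu * sum(log(ui %*% x - ci))

x <- c(1e-8, 1)   # feasible, but very close to the x1 >= 0 boundary
h <- 1e-6         # a typical finite-difference step size

## The backward point of a central difference in x1 is infeasible,
## so the barrier term takes log of a negative number:
lhs <- barrier(x - c(h, 0))
is.nan(lhs)
```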

If you knew that the unconstrained objective function was well-defined
just outside the constraint region, it would be possible to compute a
gradient using finite differences for the objective function and an
analytical gradient for the constraint part. Patches to do this would be
welcome, and it shouldn't be too hard.
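One way the suggested hybrid could be sketched (all function names here are hypothetical, not part of constrOptim()): approximate the gradient of f by central differences, and add the analytic gradient of the barrier term -mu * sum(log(ui %*% x - ci)), which is -mu * t(ui) %*% (1 / slack):

```r
## Central-difference gradient of f; assumes f is defined just outside
## the feasible region, as Thomas notes above.
num_grad <- function(f, x, h = 1e-6) {
  sapply(seq_along(x), function(i) {
    e <- rep(0, length(x)); e[i] <- h
    (f(x + e) - f(x - e)) / (2 * h)
  })
}

## Analytic gradient of the log-barrier term.
barrier_grad <- function(x, ui, ci, mu) {
  slack <- drop(ui %*% x - ci)
  -mu * drop(t(ui) %*% (1 / slack))
}

hybrid_grad <- function(f, x, ui, ci, mu)
  num_grad(f, x) + barrier_grad(x, ui, ci, mu)

## e.g. for a toy quadratic objective:
f <- function(x) sum((x - c(2, 1))^2)
hybrid_grad(f, c(1, 1), ui = diag(2), ci = c(0, 0), mu = 1e-4)
```

For a quadratic objective the central difference is essentially exact, so this should agree closely with the fully analytic gradient.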

	-thomas