hessian fails for box-constrained problems when close to boundary?

2 messages · Matthieu Stigler, Paul Gilbert

2 days later
Paul Gilbert

(This addresses your final question, not the ones before it.)

The short answer is that you seem to have a solution, since hessian in 
numDeriv works. The long answer follows.

Outside the constrained region one cannot assume that a function will 
evaluate properly. Some constraints are imposed precisely because the 
function is not defined outside the region. This would mean that the 
gradient and hessian are not defined in the usual sense at the boundary; 
the "left" and "right" derivatives should exist and be the same in the 
limit. I have had on my to-do list for some time a project to add 
one-sided approximations in numDeriv, but that will probably not happen 
soon. If your solution is actually on the boundary, you might consider a 
substitution which removes the constraint, and compute the hessian on 
the reduced, unconstrained space.
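To make the two ideas above concrete, here is a small sketch in Python (not numDeriv's actual code; the function, step sizes, and names are all illustrative). It shows a one-sided (forward) difference that never evaluates the function outside a lower bound at zero, and the substitution x = exp(t), which maps the unconstrained t-line onto x > 0 so that ordinary central differences apply on the reduced space.

```python
import math

def forward_diff(f, x, h=1e-6):
    """One-sided (forward) difference: usable at a left boundary
    where f(x - h) may be undefined."""
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h=1e-6):
    """Standard two-sided difference; needs f defined on both sides."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Illustrative objective, defined only for x >= 0 (box constraint at 0).
f = lambda x: x * math.log(x) if x > 0 else 0.0

# Near the boundary, the forward difference stays inside the feasible
# region, while a central difference would step to negative x.
# Analytically, d/dx [x*log(x)] = log(x) + 1.
print(forward_diff(f, 0.01))   # approximately log(0.01) + 1

# Substitution removing the constraint: g(t) = f(exp(t)) is defined for
# all t, so central differences are safe on the unconstrained space.
g = lambda t: f(math.exp(t))
print(central_diff(g, math.log(0.01)))  # chain rule: x*(log(x)+1) at x=0.01
```

The same substitution trick (exp for positivity, a logistic map for two-sided box constraints) lets you run numDeriv's hessian on the transformed parameters and then, if needed, map the result back with the delta method.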

In your situation I would also be concerned about the optimum being very 
close to, but not on, the boundary. You might want to check that this is 
not an artifact of the convergence criteria: verify that the objective is 
actually worse at a point on the boundary close to your optimum. Also, 
the gradient should be zero at the optimum, but that may not hold exactly 
because of numerical convergence criteria. If the gradient is pointing 
toward the boundary, that is a sign that you may just have converged too 
soon.
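The check above can be sketched as follows (a toy example in Python, not your model; the objective, bounds, and tolerances are made up). The unconstrained minimum of the first coordinate sits outside the box, so a positive gradient component at a coordinate parked near its lower bound reveals that the objective still decreases toward the bound, i.e. the true constrained optimum is on the boundary.

```python
def num_grad(f, x, h=1e-6):
    """Central-difference gradient of a scalar function of a vector."""
    g = []
    for i in range(len(x)):
        xp = list(x); xm = list(x)
        xp[i] += h; xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

# Hypothetical objective with lower bounds at zero; its unconstrained
# minimum in the first coordinate is at -1, outside the box.
f = lambda x: (x[0] + 1.0) ** 2 + (x[1] - 2.0) ** 2
lower = [0.0, 0.0]

# Suppose the optimizer stopped just inside the boundary:
xopt = [1e-4, 2.0]
g = num_grad(f, xopt)

# Diagnostic 1: is the objective actually worse at the nearby boundary
# point? Here f([0, 2]) < f(xopt), so the boundary point is better.
print(f([0.0, 2.0]), f(xopt))

# Diagnostic 2: a positive gradient component at a coordinate near its
# lower bound means descent points toward the bound.
for xi, gi, lo in zip(xopt, g, lower):
    if abs(xi - lo) < 1e-3 and gi > 0:
        print("gradient points toward the lower bound; "
              "the solution is likely on it")
```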

Also, the hessian coming from the optimization is built up as an 
approximation using information from the steps of the optimization. I am 
not familiar enough with the numerical handling of boundary constraints 
to know how this approximation from the optimization might be affected, 
but you may need to be careful in this regard.
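This is why recomputing the hessian at the solution by direct finite differences is usually more trustworthy than the secant approximation the optimizer accumulates along its path. Below is a bare-bones central-difference hessian in Python; it is a simplified stand-in for what numDeriv's hessian does (numDeriv additionally applies Richardson extrapolation), with an arbitrary test function and step size.

```python
def num_hessian(f, x, h=1e-4):
    """Central-difference Hessian of a scalar function of a vector.
    A simplified analogue of numDeriv's hessian(), without Richardson
    extrapolation."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xpp = list(x); xpm = list(x); xmp = list(x); xmm = list(x)
            xpp[i] += h; xpp[j] += h
            xpm[i] += h; xpm[j] -= h
            xmp[i] -= h; xmp[j] += h
            xmm[i] -= h; xmm[j] -= h
            H[i][j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h * h)
    return H

# Illustrative quadratic with known analytic Hessian [[6, 1], [1, 4]]:
f = lambda x: 3 * x[0] ** 2 + x[0] * x[1] + 2 * x[1] ** 2
H = num_hessian(f, [0.5, -0.3])
print(H)
```

Note that, as discussed above, this only makes sense at an interior point: the symmetric steps evaluate f on both sides of x in every coordinate, which fails if the solution sits on (or numerically at) a bound.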

Paul
On 12-11-15 10:25 AM, Matthieu Stigler wrote: