
Numerical optimisation and "non-feasible" regions

6 messages · Ben Bolker, Mathieu Ribatet, Patrick Burns

#
Dear list,

I'm currently writing C code to compute a (composite) likelihood - 
that part is done, but it is not really robust. The C code is wrapped 
in an R function which calls an optimizer routine - optim or nlm. 
However, the fitting procedure is far from robust because the 
feasible region depends on the parameter values - for example, I have 
a covariance matrix that must be a valid (positive definite) one.

Currently, I set something like #define MINF -1.0e120 in my header 
file, test whether we are in a non-feasible region, and if so set the 
log-composite likelihood to MINF. The problem I see with this 
approach is that, for a fairly large non-feasible region, we get a 
kind of plateau where the log-composite likelihood is constant, which 
may cause trouble for the optimizer. The other issue is that the 
gradient is then badly estimated by finite differences.
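
For concreteness, a toy sketch of what I do (a Gaussian likelihood 
with a single variance parameter, not my actual composite likelihood):

MINF <- -1.0e120                       # mirrors the C '#define MINF -1.0e120'
x <- rnorm(100, mean = 1, sd = 2)      # toy data

loglik <- function(par) {
  mu <- par[1]
  sigma2 <- par[2]                     # feasible only when sigma2 > 0
  if (sigma2 <= 0)
    return(MINF)                       # constant plateau: flat, and useless
                                       # for finite-difference gradients
  sum(dnorm(x, mean = mu, sd = sqrt(sigma2), log = TRUE))
}

## fnscale = -1 tells optim to maximize:
optim(c(0, 1), loglik, method = "BFGS", control = list(fnscale = -1))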

Consequently, I'm not sure this is the most sensible approach, as the 
optimization seems really sensitive to this "strategy" and fails 
quite often - especially with the BFGS method, probably because of 
the gradient estimation.

As I'm (really) not an expert in optimization problems, do you know 
of good ways to deal with non-feasible regions? Or do I need to 
reparametrize my model so that all parameters live in $\mathbb{R}$ - 
which would not be so easy...
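
(For a single variance parameter such a reparametrization is of 
course easy - e.g., on the toy example above:)

x <- rnorm(100, mean = 1, sd = 2)      # toy data
## optimize over theta = log(sigma2), so every theta in R is feasible
loglik.repar <- function(par) {
  mu <- par[1]
  sigma2 <- exp(par[2])                # exp() maps R onto (0, Inf)
  sum(dnorm(x, mean = mu, sd = sqrt(sigma2), log = TRUE))
}
optim(c(0, 0), loglik.repar, method = "BFGS", control = list(fnscale = -1))

But for a full covariance structure (e.g., via a Cholesky factor) 
with cross-constraints between parameters, this is indeed not so easy.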

Thanks for your expertise!
Best,
Mathieu
#
Mathieu Ribatet <mathieu.ribatet <at> epfl.ch> writes:
One reasonably straightforward hack to deal with this is
to add a penalty that is (e.g.) a quadratic function of the
distance from the feasible region, if that distance is easy
to compute -- that way the optimizer will get gently pushed
back toward the feasible region.
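
Schematically (a toy sketch on a single variance parameter, where the 
distance to the feasible set is trivial to compute; 'lambda' controls 
the penalty strength):

x <- rnorm(100, mean = 1, sd = 2)      # toy data
loglik.pen <- function(par, lambda = 1e4, eps = 1e-6) {
  mu <- par[1]
  sigma2 <- par[2]
  dist <- max(0, eps - sigma2)         # distance to {sigma2 >= eps}
  sigma2 <- max(sigma2, eps)           # project onto the feasible set
  ll <- sum(dnorm(x, mean = mu, sd = sqrt(sigma2), log = TRUE))
  ll - lambda * dist^2                 # slopes back toward feasibility
}
optim(c(0, 1), loglik.pen, method = "BFGS", control = list(fnscale = -1))

Evaluating at the projection keeps the surface continuous at the
boundary instead of jumping to a huge constant.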

  Ben Bolker
#
Thanks Ben for your tips.
I'm not sure it'll be so easy to do (as the non-feasible regions depend 
on the model parameters), but I'm sure it's worth a try.
Thanks!!!
Best,

Mathieu

Ben Bolker wrote:
#
If positive definiteness of the covariance matrix
is the only issue, then you could base a penalty on:

eps - smallest.eigenvalue

if the smallest eigenvalue is smaller than eps.
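
In R, something along these lines (a sketch; 'eps' and the weight on 
the penalty would need tuning for your problem):

pd.penalty <- function(Sigma, eps = 1e-6) {
  ## smallest eigenvalue of the candidate covariance matrix
  lam.min <- min(eigen(Sigma, symmetric = TRUE, only.values = TRUE)$values)
  max(0, eps - lam.min)                # zero when safely positive definite
}

Sigma.bad <- matrix(c(1, 2, 2, 1), 2, 2)   # eigenvalues 3 and -1
pd.penalty(Sigma.bad)                      # about 1: penalized

Combined with Ben's suggestion, you could subtract, say,
lambda * pd.penalty(Sigma)^2 from the log-composite likelihood.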

Patrick Burns
patrick at burns-stat.com
+44 (0)20 8525 0696
http://www.burns-stat.com
(home of S Poetry and "A Guide for the Unwilling S User")
Mathieu Ribatet wrote:
#
Dear Patrick (and others),

Well, I used Sylvester's criterion (which is equivalent) to test for 
this - a rough sketch of that check is below. But unfortunately, this 
is not the only issue!
To sum up quickly, it's more or less like geostatistics: I have 
several unfeasible regions (covariance, margins and others).
The problem seems to be that the unfeasible regions may be large and 
sometimes lead to optimization issues - even when the starting values 
are well inside the feasible region.
This is why I wonder whether setting the composite log-likelihood to 
$-\infty$ myself is appropriate here.
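
Roughly, the check looks like this (all leading principal minors of 
the candidate covariance matrix must be positive):

is.pos.def <- function(Sigma) {
  minors <- sapply(seq_len(nrow(Sigma)),
                   function(k) det(Sigma[1:k, 1:k, drop = FALSE]))
  all(minors > 0)                      # Sylvester's criterion
}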

However, you might be right that a tolerance value 'eps' is safer 
than the theoretical bound (eigenvalues > 0).
Thanks for your tips,
Best,
Mathieu


Patrick Burns wrote:
#
If I understand your proposal correctly, then it
probably isn't a good idea.

A derivative-based optimization algorithm is going
to get upset whenever it sees negative infinity.
Genetic algorithms, simulated annealing (and, I think,
Nelder-Mead) will be okay when they see infinity,
but if all infeasible solutions have the value negative infinity,
then you are not giving the algorithm a clue about
which direction to go.
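
A toy illustration (hypothetical two-parameter objective with its 
maximum at (1, 1); with optim's default finite-difference step, 
ndeps = 1e-3, the gradient evaluation at this start already crosses 
the boundary):

f <- function(p) {
  if (p[2] <= 0) return(-Inf)          # flat -Inf: no directional clue
  -(p[1] - 1)^2 + log(p[2]) - p[2]
}
## BFGS should fail here ("non-finite finite-difference value"):
try(optim(c(0, 5e-4), f, method = "BFGS", control = list(fnscale = -1)))
## Nelder-Mead tolerates the -Inf (infeasible vertices are simply
## treated as very bad), but gets no guidance from the plateau:
optim(c(0, 5e-4), f, method = "Nelder-Mead", control = list(fnscale = -1))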

Pat
Mathieu Ribatet wrote: