
Convergence problem in gam(mgcv)

Actually the answers to your questions may well be linked....
On Thursday 04 October 2007 22:11, Ariyo Kanno wrote:
--- This is not necessarily a problem. What does mgcv.conv$rms.grad tell 
you? If it's near zero then convergence is probably fine. `fully.converged' 
is only set to TRUE if the GCV optimization terminates with a Newton step 
(and a positive definite Hessian). In that circumstance you can be sure that 
the score is uphill in all directions from the reported optimum. However, 
there are cases where the GCV score is flat (horizontal) in some direction, 
or nearly so. In such cases it may be necessary to use steepest descent, and 
the routine may terminate by failing to find a better set of smoothing 
parameters in the steepest descent direction. The GCV score will be flat 
w.r.t. changes in a smoothing parameter whose optimum value is effectively 
at infinity.
The gam.control arguments mgcv.tol, mgcv.half and rank.tol actually get 
passed through to magic().
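
For concreteness, here is a minimal sketch of how you might inspect these diagnostics and tighten the tolerance. The data are simulated with gamSim() purely for illustration; the component names (mgcv.conv$rms.grad, mgcv.conv$fully.converged) and the gam.control arguments are as described above.

```r
## Illustrative sketch only -- simulated data, not the poster's model.
library(mgcv)
set.seed(1)
dat <- gamSim(1, n = 200)  # standard mgcv example data (y, x0..x3)
b <- gam(y ~ s(x0) + s(x1) + s(x2) + s(x3), data = dat)

## Convergence diagnostics discussed above:
b$mgcv.conv$rms.grad        # RMS gradient at termination; near zero is good
b$mgcv.conv$fully.converged # TRUE only if terminated on a Newton step

## Tightening the optimizer; these control arguments are passed to magic():
b2 <- gam(y ~ s(x0) + s(x1) + s(x2) + s(x3), data = dat,
          control = gam.control(mgcv.tol = 1e-8))
```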
--- Large smoothing parameters are fairly normal and simply indicate heavy 
penalization, so I would interpret this as indicating linearity w.r.t. x2 or 
x3 rather than a problem (unless there is other reason to suspect a problem, 
of course).
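
To see this in a self-contained example: in gamSim(1) data the x3 term has no true effect, so its smoothing parameter is typically driven very large and its effective degrees of freedom shrink towards 1, i.e. an effectively linear fit. (Simulated data for illustration only; "typically" because the exact values depend on the random draw.)

```r
## Sketch: a heavily penalized smooth looks linear, not broken.
library(mgcv)
set.seed(2)
dat <- gamSim(1, n = 200)  # in this setup, x3 has no real effect on y
b <- gam(y ~ s(x0) + s(x1) + s(x2) + s(x3), data = dat)

b$sp        # the smoothing parameter for s(x3) is typically very large
summary(b)  # edf for s(x3) typically close to 1: effectively linear
```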

best,
Simon