
Complicated nls formula giving singular gradient message

3 messages · Jared Blashka, Phil Spector, dave fournier

#
Jared -
    Actually I didn't realize that nls would accept a formula
until I tried my simple example in reaction to your post :-)
    I don't recall nls() ever reporting the cause of the singular
gradient as being bad starting values -- I know I spend a lot
of time in my lectures on non-linear regression emphasizing that
bad starting values are the usual culprit when the dreaded 
"singular gradient" message rears its ugly head.
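    To make that concrete, here is a minimal sketch (using a toy
exponential-decay model, not Jared's formula) of how bad starting
values alone can produce that error:

```r
# Simulated data from y = 5 * exp(-0.3 * x) plus a little noise
set.seed(1)
x <- 1:20
y <- 5 * exp(-0.3 * x) + rnorm(20, sd = 0.05)

# A reasonable start converges without complaint
fit <- nls(y ~ a * exp(-b * x), start = list(a = 4, b = 0.2))

# A wildly wrong start: exp(-50 * x) is vanishingly small for every x,
# so the two gradient columns are numerically collinear and nls()
# typically stops with the "singular gradient" error
bad <- try(nls(y ~ a * exp(-b * x), start = list(a = 1e6, b = 50)),
           silent = TRUE)
inherits(bad, "try-error")
```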
    I think your function has a fairly large "flat" region, wherein
changes in some of the parameters don't really affect the residual
sums of squares that much.  I think you can visualize it like this:
[only a fragment of the plotting call survives here; the grid it evaluated was:
+                   LogKi = seq(-9.5, -8.5, by = .05),
+                   BMax  = seq(1.8e05, 2.8e05, by = .1e05) ]
If you look at the panels where LogKi is around -8.9 (the reported
maximum), the residual-sums-of-squares surface is pretty flat.
I think you can also see regions in that plot where the residual
sums of squares barely change at all.
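A brute-force way to see such a flat region is to evaluate the residual
sums of squares over a grid of LogKi and BMax values.  The model f()
below is a hypothetical one-site binding curve standing in for Jared's
formula, and the data are simulated; only the grid ranges come from the
post above:

```r
# Hypothetical one-site binding model with a nonspecific term (a stand-in,
# not the formula from the original post)
f <- function(x, NS, LogKi, BMax) NS * x + BMax * x / (x + 10^LogKi)

# Simulated concentration/response data near the reported estimates
set.seed(1)
x <- 10^seq(-11, -7, length.out = 25)
y <- f(x, NS = 0.007, LogKi = -9, BMax = 2.4e5) + rnorm(25, sd = 500)

# Evaluate the RSS at every point of the grid
grid <- expand.grid(LogKi = seq(-9.5, -8.5, by = .05),
                    BMax  = seq(1.8e05, 2.8e05, by = .1e05))
grid$RSS <- mapply(function(k, b) sum((y - f(x, 0.007, k, b))^2),
                   grid$LogKi, grid$BMax)

# A flat region shows up as many grid points with nearly identical RSS;
# lattice::wireframe() draws the surface
lattice::wireframe(RSS ~ LogKi * BMax, data = grid)
```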
I also ran your data through PROC NLP in SAS (I know there are a lot of
SAS-bashers on this list, but I worked there many years ago and I know
the quality of their software), and got the following results:

                               Optimization Results
                                Parameter Estimates
                                                        Gradient
                                                       Objective
                    N Parameter         Estimate        Function

                    1 NS                0.006766       -0.121333
                    2 LogKi            -8.966402       -0.000509
                    3 BMax                237013    1.109368E-11

The message that nlp reported was

NOTE: At least one element of the (projected) gradient is greater than 1e-3.

Finally, I ran the same model and data using nlinfit in MATLAB, with all
values set to their defaults.  It reported the following without warning:

ans =

    1.0e+05 *

    0.000000086522054  -0.000089870065555   2.371354822440646

which agrees almost exactly with R.

Hope this helps.
                                                       - Phil
On Mon, 13 Dec 2010, Jared Blashka wrote:

            
#
I always enjoy these direct comparisons between different software packages.
I coded this up in AD Model Builder which is freely available at
http://admb-project.org   ADMB calculates exact derivatives via automatic
differentiation so it tends to be more stable for these difficult problems.
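For what it's worth, you can get some of the same benefit in base R:
deriv() builds an analytic gradient that nls() will use in place of its
finite-difference approximation.  The exponential-decay model below is a
toy stand-in, not Jared's model:

```r
# deriv() returns a function whose result carries a "gradient" attribute;
# nls() detects that attribute and uses the exact derivatives
set.seed(1)
x <- 1:20
y <- 5 * exp(-0.3 * x) + rnorm(20, sd = 0.05)

g <- deriv(~ a * exp(-b * x), c("a", "b"), function(a, b, x) NULL)
fit <- nls(y ~ g(a, b, x), start = list(a = 4, b = 0.2))
coef(fit)
```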

The parameter estimates are
# Number of parameters = 3
Objective function value = 307873.  Maximum gradient component = 1.45914e-06
# NS:
0.00865232633386
# LogKi:
-8.98700621813
# BMax:
237135.365156
The objective function is just least squares.
So it looks like SAS did pretty well before dying.