
mysterious syntax error in call to nlmer

On Sun, Mar 27, 2011 at 4:36 PM, Ben Bolker <bbolker at gmail.com> wrote:
It might, in fact, be the other way around.  Compare the
log-likelihoods from the two model fits. If I haven't botched the
calculations (which is entirely possible), then the nlmer fit is better
than the nlme fit.  It may be that the values of the log-likelihood
are not comparable because they are calculated in different ways, but
I think they should be.
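[A self-contained sketch of that kind of comparison, using the standard Orange/SSlogis example from the lme4 and nlme documentation rather than the poster's data (which isn't shown in full here):]

```r
library(nlme)   # for nlme()
library(lme4)   # for nlmer()

# Classical nlme fit: logistic growth with a random asymptote per tree.
# nlme() uses maximum likelihood ("ML") by default.
fm_nlme <- nlme(circumference ~ SSlogis(age, Asym, xmid, scal),
                data = Orange,
                fixed = Asym + xmid + scal ~ 1,
                random = Asym ~ 1 | Tree,
                start = c(Asym = 200, xmid = 725, scal = 350))

# The corresponding nlmer fit (this is the example from ?nlmer)
fm_nlmer <- nlmer(circumference ~ SSlogis(age, Asym, xmid, scal) ~ Asym | Tree,
                  data = Orange,
                  start = c(Asym = 200, xmid = 725, scal = 350))

# Compare the log-likelihoods directly; both should be on the ML scale
logLik(fm_nlme)
logLik(fm_nlmer)
```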

Also, if you look at the verbose output from the nlmer fit
+             Y ~ dd(xintercept,amplitude,frequency,xshift,X)~xshift|ID,
+             data=mdoot, verbose=1L,
+             start=list(fixef=c(xintercept=1, amplitude=1.5,
+                     frequency=1, xshift=pi/2))
+             )
  0:     3674.0646: 0.163299  1.00000  1.50000  1.00000  1.57080
  1:     2272.5273: 0.973647 0.530645  1.29359 0.721209  1.62290
  2:     2205.2381: 0.983334 0.268247  1.04419  1.65332  1.61921
  3:     1455.5351:  1.00046 0.0718815 0.587221  1.70113  1.62596
  4:     1142.6573:  1.03086 0.0733082 0.103787  1.57847  1.64384
  5:     1124.2213:  1.02956 0.0703039 0.0140006  1.58682  1.64493
  6:     1123.1203:  1.02950 0.0756640 -0.00544582  1.58649  1.64516
  7:     1123.0674:  1.02949 0.0767893 -0.00860760  1.58666  1.64509
  8:     1123.0623:  1.02952 0.0808718 -0.0108216  1.58862  1.64064
  9:     1123.0551:  1.02935 0.0753004 -0.0137590  1.58970  1.63856
 10:     1123.0272:  1.02922 0.0774450 -0.0109283  1.59142  1.63312
 11:     1123.0199:  1.02904 0.0789350 -0.0160090  1.59543  1.63213
 12:     1122.8941:  1.03216 0.0702361 -0.0223557  1.61759  1.59772
 13:     1122.5006:  1.03967 0.0634442 -0.0291930  1.64563  1.51852
 14:     1121.9584:  1.03941 0.0795832 -0.0274689  1.65187  1.51782
 15:     1121.5795:  1.03778 0.0660772 -0.0362778  1.68783  1.51358
 16:     1121.0556:  1.02974 0.0854996 -0.0365876  1.71205  1.44101
 17:     1120.7508:  1.01744 0.0746180 -0.0410671  1.73067  1.36579
 18:     1120.5819: 0.976987 0.0787048 -0.0405667  1.73702  1.29796
 19:     1120.5238: 0.940846 0.0741361 -0.0351745  1.72659  1.28897
 20:     1119.8662: 0.704002 0.0724824 -0.0444264  1.72337 0.936691
 21:     1119.8438: 0.339802 0.0914764 -0.0442596  1.70652 0.719874
 22:     1118.9071: 0.130980 0.0775044 -0.0582427  1.71383 0.751868
 23:     1118.4469: 0.102282 0.0691767 -0.0647637  1.69347 0.960971
 24:     1118.2405:  0.00000 0.0786482 -0.0690935  1.71164  1.11381
 25:     1118.2366:  0.00000 0.0782510 -0.0712534  1.69999 0.971243
 26:     1118.2176:  0.00000 0.0756729 -0.0705572  1.70607  1.04360
 27:     1118.2155: 0.00737987 0.0770602 -0.0705417  1.70661  1.04500
 28:     1118.2152:  0.00000 0.0766721 -0.0709915  1.70552  1.03690
 29:     1118.2150:  0.00000 0.0768363 -0.0707948  1.70614  1.04127
 30:     1118.2150:  0.00000 0.0767926 -0.0707770  1.70609  1.04074
 31:     1118.2150:  0.00000 0.0767976 -0.0707763  1.70608  1.04074
 32:     1118.2150: 5.81067e-06 0.0767978 -0.0707754  1.70608  1.04074

you will see that the first parameter, which is the relative standard
deviation of the random effects, stays close to 1 for a long time
before it drops to zero.  Furthermore, the difference in the deviance
between, say, iteration 20 and iteration 32 is very small.  So I think
the lesson is that the standard deviation of the random effects is
very poorly determined.
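[One way to check that diagnosis is to compare the mixed fit's deviance against a fixed-effects-only nls fit: if the random-effect standard deviation is poorly determined, dropping it should barely change the deviance.  A sketch, again using the built-in Orange data rather than the poster's:]

```r
library(lme4)   # for nlmer()

# Mixed fit with a random Asym per Tree (lme4's standard nlmer example)
fm_nlmer <- nlmer(circumference ~ SSlogis(age, Asym, xmid, scal) ~ Asym | Tree,
                  data = Orange, start = c(Asym = 200, xmid = 725, scal = 350))

# Fixed-effects-only fit; SSlogis is self-starting, so no start values needed
fm_nls <- nls(circumference ~ SSlogis(age, Asym, xmid, scal), data = Orange)

# A poorly determined random-effect sd would make these nearly equal
# (for Orange the tree-to-tree variation is real, so they differ)
c(nlmer = -2 * as.numeric(logLik(fm_nlmer)),
  nls   = -2 * as.numeric(logLik(fm_nls)))
```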