Message-ID: <20040220093423.GA1613@nf034.jinr.ru>
Date: 2004-02-20T09:34:23Z
From: Timur Elzhov
Subject: Obtaining SE from the hessian matrix
In-Reply-To: <Pine.A41.4.58.0402190920130.44886@homer19.u.washington.edu>
On Thu, Feb 19, 2004 at 09:22:09AM -0800, Thomas Lumley wrote:
>> So, what is the _right_ way of obtaining the SE? Why do those two
>> formulas above differ?
>
> If you are maximising a likelihood then the covariance matrix of the
> estimates is (asymptotically) the inverse of the negative of the Hessian.
>
> The standard errors are the square roots of the diagonal elements of the
> covariance.
>
> So if you have the Hessian you need to invert it, if you have the
> covariance matrix, you don't.
Yes, the covariance matrix is the inverse of the Hessian, that's clear.
But my question is: why, in the first example:
> sqrt(diag(2*out$minimum/(length(y) - 2) * solve(out$hessian)))
- do we _not_ simply use 'sqrt(diag(solve(out$hessian)))', as in the
second example, but instead also bring in the "number of parameters" == 2?
What does '2*out$minimum/(length(y) - 2)' multiplier mean?
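To make the question concrete, here is a small numerical check (in Python
rather than R, with a made-up straight-line model, so all the names below are
my own and only stand in for the thread's example). It assumes that
'out$minimum' is the minimized residual sum of squares, and compares the
Hessian-based formula against the textbook least-squares standard errors:

```python
import numpy as np

# Hypothetical toy data: straight-line model y = a + b*x + noise,
# standing in for whatever model was actually fitted with nlm.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=x.size)

# Design matrix and ordinary least-squares fit.
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta
rss = resid @ resid                      # plays the role of out$minimum

# For an objective f(beta) = sum of squared residuals, the Hessian of a
# linear model is exactly 2 * X'X.
H = 2 * X.T @ X

n, p = X.shape                           # p == 2 parameters here
s2 = 2 * rss / (n - p)                   # the '2*out$minimum/(length(y)-2)' factor
se_hessian = np.sqrt(np.diag(s2 * np.linalg.inv(H)))

# Textbook OLS standard errors: sqrt(diag(s^2 * (X'X)^{-1})).
se_ols = np.sqrt(np.diag(rss / (n - p) * np.linalg.inv(X.T @ X)))

print(se_hessian)
print(se_ols)
```

At least in this linear toy case the two vectors agree, which is what makes me
wonder how the multiplier should be understood in general.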
Thanks!
--
WBR,
Timur.