partially linear models

2 messages · Liaw, Andy, Peter Dalgaard

#
From: Liaw, Andy
I believe so.
A quote I heard from Prof. David Ruppert:  "There are lies, damned lies, and
then big O notations."

I presume the need to undersmooth is to reduce the bias of the "smooth".
The problem is: by how much should one undersmooth, so that the bias goes
from O(k*n^-4) to O(k*n^-5)? (I'm just making this up, but you get the idea.)

Cheers,
Andy
#
From: Peter Dalgaard
"Liaw, Andy" <andy_liaw at merck.com> writes:
More like sacrificing the optimal O(n^-(2/5)) (?) convergence on the
smooth part so that the bias is reduced below O(n^-(1/2)) at the
expense of a bigger variance term in the MSE. The whole thing is
controlled by having the bandwidth of the smoother shrink as O(n^-q)
where q is, er, something...

And of course the big lie is that there are some unknown multipliers
that depend on the f that you are trying to estimate.
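As a rough numerical illustration of the tradeoff discussed in the thread, here is a minimal Python sketch (not from the thread) of Robinson's double-residual estimator for a partially linear model y = x*beta + g(z) + eps, comparing two bandwidth rates n^-q. The Nadaraya-Watson smoother, the data-generating process, and the particular rates q = 1/5 (roughly MSE-optimal for the smooth part) and q = 1/3 (undersmoothed) are all illustrative assumptions, not anything the posters specified.

```python
import numpy as np

def nw_smooth(z, y, h):
    # Nadaraya-Watson kernel regression of y on z (Gaussian kernel),
    # evaluated at the sample points z themselves.
    d = (z[:, None] - z[None, :]) / h
    w = np.exp(-0.5 * d**2)
    return (w @ y) / w.sum(axis=1)

def robinson_beta(x, z, y, h):
    # Robinson's double-residual estimator for y = x*beta + g(z) + eps:
    # partial the smooth part out of both x and y, then regress
    # residual on residual.
    rx = x - nw_smooth(z, x, h)
    ry = y - nw_smooth(z, y, h)
    return (rx @ ry) / (rx @ rx)

# Hypothetical data-generating process, chosen only for illustration.
rng = np.random.default_rng(0)
n = 2000
z = rng.uniform(0, 1, n)
x = np.sin(4 * z) + rng.normal(0, 1, n)   # x depends on z
g = np.cos(2 * np.pi * z)                  # smooth nuisance function
y = 1.5 * x + g + rng.normal(0, 1, n)      # true beta = 1.5

for q in (1/5, 1/3):   # 1/5 ~ optimal for g; 1/3 undersmooths
    h = n ** (-q)
    print(f"q = {q:.2f}, h = {h:.3f}, beta_hat = {robinson_beta(x, z, y, h):.4f}")
```

The point is the one Peter makes: shrinking h faster than the rate that is optimal for estimating g itself inflates the variance of the smooth, but drives its bias below O(n^-(1/2)) so that beta can still be estimated at the parametric rate. The unknown constants, of course, depend on the functions being estimated.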