regression with uncertainty in both variables

3 messages · Matthew Wiener, Brian Ripley, Stuart Luppescu

#
Hi, all.

I'm trying to use some linear regression models in which both the
dependent and independent variables are measured with some error.  To
make things worse, while the errors in the dependent variable are uniform,
the errors in the independent (or explanatory, or "x") variables can be
heteroskedastic.  I've been looking at the book _Measurement Error Models_
by Fuller (1987).  I'm wondering whether anybody knows any other
references on the subject, and whether anyone has written S or R code that
handles these kinds of problems.  (As far as I can tell, the usual lm and
glm functions don't; if I'm wrong, that's great.)
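[A hypothetical illustration, not part of the original post: the reason ordinary least squares is inappropriate here is "attenuation" (regression dilution) -- when x is observed with error, the OLS slope is biased toward zero by roughly the factor var(x) / (var(x) + var(x error)). A small simulation, in Python for illustration:]

```python
# Simulated data (made-up parameters): true slope 2.0, with equal
# variance in true x and in its measurement error, so the expected
# OLS slope is attenuated to 2.0 * 1/(1+1) = about 1.0.
import random

random.seed(1)
n = 20000
beta = 2.0            # true slope
sx2, su2 = 1.0, 1.0   # variance of true x and of its measurement error

xs, ys = [], []
for _ in range(n):
    x_true = random.gauss(0.0, sx2 ** 0.5)
    y = beta * x_true + random.gauss(0.0, 0.5)       # error in y
    x_obs = x_true + random.gauss(0.0, su2 ** 0.5)   # error in x
    xs.append(x_obs)
    ys.append(y)

mx = sum(xs) / n
my = sum(ys) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sxx = sum((x - mx) ** 2 for x in xs)
slope_ols = sxy / sxx   # roughly half the true slope of 2.0
print(slope_ols)
```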

Thanks for any pointers.

Matt Wiener
Laboratory of Neuropsychology
NIMH, NIH
Bethesda, MD
USA

-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-
r-help mailing list -- Read http://www.ci.tuwien.ac.at/~hornik/R/R-FAQ.html
Send "info", "help", or "[un]subscribe"
(in the "body", not the subject !)  To: r-help-request at stat.math.ethz.ch
_._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._
#
[I don't think blanketing both S and R help on this is a good idea.]
On Wed, 14 Apr 1999, Matthew Wiener wrote:
Notation: that is not a `regression' model.
Well, there are lots of other references, even one by me, but that is
the main book. If lm and glm did this they would be wrong: it is a
different statistical model. There are several Fortran solutions on
statlib (in multi/leiv?):

1.      Programs for best line fitting with errors in both coordinates.

2.      D. York, "Least-squares fitting of a straight line", Canadian 
                 Journal of Physics, 44, 1079-1086, 1966.
        G. Fasano and R. Vio, "Fitting straight lines with errors on both
                 coordinates", Newsletter of Working Group for Modern 
                 Astronomical Methodology, No. 7, 2-7, Sept. 1988.
        B.D. Ripley and M. Thompson, "Regression techniques for the 
                 detection of analytical bias", Analyst, 112, 377-383,
                 1987.

and I have seen an S interface somewhere (multiv?). Today, I would take
the algorithm in the last of those papers and re-write it in S in a few
minutes. It is a project that has, several times, nearly made the MASS
library, and one I set as an exercise in my linear models course.
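[A minimal sketch -- assumed, not Ripley and Thompson's published algorithm: Deming regression, the classical straight-line fit with errors in both coordinates when the ratio lam = var(y error) / var(x error) is known. Python is used here for illustration in place of S.]

```python
def deming_fit(x, y, lam=1.0):
    """Fit y = b0 + b1*x allowing for error in both coordinates.

    lam is the known ratio var(y error) / var(x error); lam = 1.0
    gives orthogonal regression. Assumes cov(x, y) != 0.
    """
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    # closed-form maximum-likelihood slope
    d = syy - lam * sxx
    slope = (d + (d * d + 4.0 * lam * sxy * sxy) ** 0.5) / (2.0 * sxy)
    intercept = my - slope * mx
    return intercept, slope

# Points lying exactly on y = 1 + 2x are recovered for any lam.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.0, 3.0, 5.0, 7.0, 9.0]
print(deming_fit(x, y))  # -> (1.0, 2.0)
```

[Note that this assumes a single constant error-variance ratio; per-point weights, as in the heteroskedastic case the original poster describes, are what York's iterative method handles.]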
#
On 15-Apr-99 Matthew Wiener wrote:
This really has nothing to do with R (I don't think, anyway -- could it be
done with lme?), but we do this kind of thing all the time using the HLM
program (Bryk, Raudenbush and Congdon). At level 1, the outcome is the
measure divided by its standard error, regressed on a series of dummies,
one for each type of measure, each also divided by the standard error. The
level-1 variance is fixed at 1.0. These 1/s.e.-weighted measures pass up
to level 2, where they can be treated as ``true score'' estimates of the
measures, either as outcomes or as predictors if you use the latent
variable regression capability of HLM. I believe this capability is in
the latest version of the HLM program (maybe not -- we use a pre-release
version here), and the procedure should be detailed in the 2nd edition of
Bryk and Raudenbush, _Hierarchical Linear Models_, to be published (by
Sage again?) this summer or fall.
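[A hypothetical sketch of the level-1 rescaling described above, with made-up data, in Python for illustration: each observed measure and its measure-type dummy are divided by the measure's standard error, which is what lets the level-1 residual variance be fixed at 1.0.]

```python
# Each record is (subject_id, measure_type, value, std_error) -- made-up data.
measures = [
    (1, 0, 2.4, 0.3),
    (1, 1, 1.1, 0.5),
    (2, 0, 3.0, 0.2),
    (2, 1, 0.7, 0.4),
]
n_types = 2

rows = []
for subj, mtype, value, se in measures:
    outcome = value / se  # scaled outcome: measure / s.e.
    # one dummy per measure type, also divided by the s.e.
    dummies = [(1.0 if t == mtype else 0.0) / se for t in range(n_types)]
    rows.append((subj, outcome, dummies))

for r in rows:
    print(r)
```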

______________________________________________________________________
Stuart Luppescu         -=-=-  University of Chicago
[Japanese signature, EUC]  -=-=-  s-luppescu at uchicago.edu
http://www.consortium-chicago.org/people/sl/sl.html
ICQ #21172047  AIM: psycho7070
"Ubi non accusator, ibi non judex."

(Where there is no police, there is no speed limit.)
                -- Roman Law, trans. Petr Beckmann (1971)