
Dealing with heteroscedasticity in a GLM/M

4 messages · Leila Brook, Alan Haynes, Markus Jäntti +1 more

This is, strictly speaking, the wrong approach, but in order to explore the
presence of heteroscedasticity, you could try the linear mixed-effects
functions and the variance structures they provide.

What I suggest by way of exploration is the following. If you regress a
binomial response as if it were a continuous variable in a standard OLS
setting, several problems arise, including predictions outside the unit
interval and a heteroscedastic error term. That heteroscedasticity is of a
known form, however: the variance is p*(1-p), where p = X*b is the linear
predictor of the probability.

I would suggest you compare two models, both estimated using lme in the nlme
package: one that models the response with a variance function accounting for
the heteroscedasticity induced by having a binary rather than continuous
dependent variable, and a second that, using the varComb() function, adds the
heteroscedasticity you are worried about. You then compare the two.
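A minimal sketch of that comparison on made-up data. The two-step construction of the p*(1-p) variance covariate via varFixed(), and the extra varIdent() term for a hypothetical factor trt, are my assumptions about how to encode the suggestion in nlme, not code from the original poster:

```r
library(nlme)

## Made-up data: binary response y, covariate x, grouping factor g,
## and a factor trt over which extra heteroscedasticity is suspected.
set.seed(1)
d <- data.frame(x   = rnorm(200),
                g   = factor(rep(1:20, each = 10)),
                trt = factor(rep(c("a", "b"), 100)))
d$y <- rbinom(200, 1, plogis(0.5 * d$x))

## Two-step: fitted probabilities from a linear probability model,
## then p*(1-p) as a known variance covariate.
p   <- fitted(lm(y ~ x, data = d))
p   <- pmin(pmax(p, 0.01), 0.99)   # keep p away from 0 and 1
d$v <- p * (1 - p)

## Model 1: only the heteroscedasticity induced by the binary response.
m1 <- lme(y ~ x, random = ~ 1 | g, data = d,
          weights = varFixed(~ v))

## Model 2: additionally a separate variance per trt level, via varComb().
m2 <- lme(y ~ x, random = ~ 1 | g, data = d,
          weights = varComb(varFixed(~ v), varIdent(form = ~ 1 | trt)))

anova(m1, m2)   # likelihood-ratio comparison of the variance structures
```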

Markus
On 08/23/2012 07:58 AM, Leila Brook wrote:
4 days later
On Thu, Aug 23, 2012 at 12:58 AM, Leila Brook <leila.brook at my.jcu.edu.au> wrote:
Stop there.

In logit/probit frameworks, the variance is assumed equal for all
groups. It is never estimated; the model is not identified otherwise.
The effect of heteroskedasticity is not just inefficiency but
parameter bias. This makes logit models much more suspect than
previously believed, and it means that all of the work you have done so
far to "validate" your model is dubious and you need to take a step
back.

We are in a bind with logit models. Either we estimate separate models
for the separate groups (to avoid the heteroskedasticity), in which
case we cannot compare coefficients across models because of that
different, but un-estimated, variance; or we fit one model that pools
the groups, make the wrong assumption, and end up with wrong
parameter estimates. I don't mean just a little off. I mean wrong.
It's discouraging.
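A short simulation makes the point concrete (my own illustration, not part of the original thread). Two groups share the same true effect of x on the latent outcome, but group 2's latent error has twice the scale; group-wise logits then recover beta/sigma rather than beta, so the estimated coefficients differ even though the underlying effect is identical:

```r
set.seed(2)
n <- 5000
g <- rep(c(1, 2), each = n)
x <- rnorm(2 * n)
e <- rlogis(2 * n) * ifelse(g == 2, 2, 1)  # group 2: double error scale
y <- as.integer(x + e > 0)                 # same true effect of x in both

coef(glm(y ~ x, family = binomial, subset = g == 1))["x"]  # roughly 1
coef(glm(y ~ x, family = binomial, subset = g == 2))["x"]  # roughly 0.5
```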

As far as I know, this problem was first popularized by Paul Allison,
Scott Long, and Richard Williams, but it is nicely surveyed in this
review essay:

Mood, Carina. 2010. "Logistic Regression: Why We Cannot Do What We
Think We Can Do, and What We Can Do About It." European Sociological
Review 26(1): 67-82.

That has cites to the earlier Allison paper and some of Williams's work.

In my opinion, there are no completely safe approaches to dealing with
the heteroskedastic group-level error.  Richard Williams at Notre Dame
gave an excellent presentation about it.  He told me he has a paper
forthcoming in the Stata journal about it, but I don't feel free to
pass it along to you. But I bet his website has more information.

It seems to me that if you "pin" one group as the "baseline
variance" group and then add properly structured random effects for
the others, you might get a handle on it. The R package dglm has
facilities along those lines.
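One ready-made implementation of the "pin one group as the baseline variance" idea — my suggestion, not something the thread itself endorses — is hetglm() in the glmx package: a heteroskedastic probit with a log-linear scale submodel, where the reference level of the scale factor is fixed at variance 1 and the other groups' relative scales are estimated:

```r
library(glmx)   # hetglm(): heteroskedastic binary response models

## Made-up data where group "b" has twice the latent error scale.
set.seed(3)
d <- data.frame(x = rnorm(500),
                g = factor(rep(c("a", "b"), each = 250)))
s   <- ifelse(d$g == "b", 2, 1)
d$y <- as.integer(d$x + rnorm(500, sd = s) > 0)

## Mean equation before the "|", scale equation after it: group "a" is
## the baseline (variance fixed at 1), group "b"'s scale is estimated.
het <- hetglm(y ~ x | g, data = d)
summary(het)
```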

Good luck. If you get an answer, I'd really like to know what the
state of the art is right now (this minute)...