
glm(binomial) vs. logistf

2 messages · Drew Tyre, Gavin Simpson

After just a quick look, I think one reason is that objects created with logistf() don't have as many methods available for them. For example, I frequently use the predict() method with fitted models, and there is no predict() method for logistf fits. That doesn't mean there couldn't be one; the code just hasn't been written yet.
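To make that concrete, here is a minimal sketch (base R only, with made-up simulated data) of the predict() workflow on a binomial glm fit:

```r
# Simulate a sparse binomial response (illustrative data, not from the thread)
set.seed(1)
x <- rnorm(100)
y <- rbinom(100, 1, plogis(-2 + x))
fit <- glm(y ~ x, family = binomial)
# predict() returns fitted probabilities on new data -- the method that
# logistf fits lacked at the time of writing:
p <- predict(fit, newdata = data.frame(x = c(-1, 0, 1)), type = "response")
p
```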

--
Drew Tyre

School of Natural Resources
University of Nebraska-Lincoln
416 Hardin Hall, East Campus
3310 Holdrege Street
Lincoln, NE 68583-0974

phone: +1 402 472 4054
fax: +1 402 472 2946
email: atyre2 at unl.edu
http://snr.unl.edu/tyre
http://aminpractice.blogspot.com
http://www.flickr.com/photos/atiretoo

-----Original Message-----
From: R-sig-ecology [mailto:r-sig-ecology-bounces at r-project.org] On Behalf Of Martin Weiser
Sent: Thursday, October 29, 2015 2:11 PM
To: r-sig-ecology at r-project.org
Subject: [R-sig-eco] glm(binomial) vs. logistf

Dear friends,

Is there any reason to fit a logistic regression (binomial response) with glm() rather than logistf() by default? In particular with sparse data (e.g. 8 presences in 100 samples), frequently with quasi-separation (all presences at one level of the predictor, together with many absences).
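The quasi-separated situation described above can be sketched with base R alone (the data below are a constructed example matching the description, not the poster's actual data):

```r
# 8 presences in 100 samples, all at level "B" of a two-level predictor
pred <- factor(rep(c("A", "B"), each = 50))
y <- c(rep(0, 50), rep(1, 8), rep(0, 42))   # presences only within "B"
fit <- suppressWarnings(glm(y ~ pred, family = binomial))
# Level "A" has zero presences, so its ML estimate (the intercept)
# diverges toward -Inf; the Wald standard errors blow up, making the
# usual Wald tests and confidence intervals unusable:
summary(fit)$coefficients
```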

I tried to read some papers by G. Heinze - I did not get the whole thing, but it seems to me that both the estimation and the testing procedure should be more reliable with logistf(). Am I wrong?

So, is there any reason to use a binomial glm?
I am sorry for my ignorance - there must be a reason why people stick with glm(), I just do not know what it is. Could you explain it to me, or point me to something to read, please? I am not a statistician by training, however.

Thank you for your patience.

Kind regards,
Martin W.
If it is Firth's procedure that you are after, the **brglm** package does
that and has most if not all of the standard methods for models, including
a `predict()` method.
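A hedged sketch of that suggestion, assuming the **brglm** package is installed (and reusing the simulated sparse data idea from earlier in the thread):

```r
library(brglm)

# Illustrative sparse binomial response (made-up data)
set.seed(42)
x <- rnorm(100)
y <- rbinom(100, 1, plogis(-2.5 + x))
fit <- brglm(y ~ x, family = binomial)   # Firth-type bias-reduced fit
# Standard glm-style methods work, including predict():
p <- predict(fit, newdata = data.frame(x = 0), type = "response")
p
```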

You might also wish to consider the **arm** package and its `bayesglm()`
function, which employs different priors that also handle the separation
issue in binomial GLMs. The reference cited in `?arm::bayesglm` has some
discussion of this.
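And a similarly hedged sketch for `bayesglm()`, assuming the **arm** package is installed; its default weakly informative priors keep the estimates finite even on quasi-separated data like the constructed example below:

```r
library(arm)

# Quasi-separated example matching the original question's description
pred <- factor(rep(c("A", "B"), each = 50))
y <- c(rep(0, 50), rep(1, 8), rep(0, 42))   # presences only within "B"
fit <- bayesglm(y ~ pred, family = binomial)
coef(fit)   # finite, shrunken estimates rather than diverging ones
```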

HTH

G