
Different LLRs on multinomial logit models in R and SPSS

8 messages · Ben Bolker, Sören Vogel, sovo0815 at gmail.com +1 more

#
Hello, after calculating a multinomial logit regression on my data, I
compared the output to an output retrieved with SPSS 18 (Mac). The
coefficients appear to be the same, but the logLik (and therefore fit)
values differ widely. Why?

The regression in R:

set.seed(1234)
df <- data.frame(
  "y"=factor(sample(LETTERS[1:3], 143, repl=T, prob=c(4, 1, 10))),
  "a"=sample(1:5, 143, repl=T),
  "b"=sample(1:7, 143, repl=T),
  "c"=sample(1:2, 143, repl=T)
)
library(nnet)
mod1 <- multinom(y ~ ., data=df, trace=F)
deviance(mod1) # 199.0659
mod0 <- update(mod1, . ~ 1, trace=FALSE)
deviance(mod0) # 204.2904

Output data and syntax for SPSS:

df2 <- df
df2[, 1] <- as.numeric(df[, 1])
write.csv(df2, file="dfxy.csv", row.names=F, na="")
syntaxfile <- "dfxy.sps"
cat('GET DATA
  /TYPE=TXT
  /FILE=\'', getwd(), '/dfxy.csv\'
  /DELCASE=LINE
  /DELIMITERS=","
  /QUALIFIER=\'"\'
  /ARRANGEMENT=DELIMITED
  /FIRSTCASE=2
  /IMPORTCASE=ALL
  /VARIABLES=
  y "F1.0"
  a "F8.4"
  b "F8.4"
  c "F8.4".
CACHE.
EXECUTE.
DATASET NAME DataSet1 WINDOW=FRONT.

VALUE LABELS
  /y 1 "A" 2 "B" 3 "C".
EXECUTE.

NOMREG y (BASE=1 ORDER=ASCENDING) WITH a b c
  /CRITERIA CIN(95) DELTA(0) MXITER(100) MXSTEP(5) CHKSEP(20)
LCONVERGE(0) PCONVERGE(0.000001)
    SINGULAR(0.00000001)
  /MODEL
  /STEPWISE=PIN(.05) POUT(0.1) MINEFFECT(0) RULE(SINGLE)
ENTRYMETHOD(LR) REMOVALMETHOD(LR)
  /INTERCEPT=INCLUDE
  /PRINT=FIT PARAMETER SUMMARY LRT CPS STEP MFI IC.
', file=syntaxfile, sep="", append=F)

-> Loglik0: 135.02
-> Loglik1: 129.80

Thanks, Sören
#
On Jan 6, 2011, at 11:06 AM, Sören Vogel wrote:

The likelihood is arbitrary. It is the difference in likelihoods that
is important, and in that respect the answers you got from the two
software packages (delta deviance = 5.22) are equivalent. If you run
logistic models with individual records versus grouped data for
equivalent models in the same software, the reported deviances will
be widely different, but the comparison of nested models will imply the
same inferential conclusions.
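The grouped-versus-individual point can be sketched in R (a toy example, not from the thread): the same logistic model fit to individual 0/1 records and to the equivalent grouped events/trials data reports very different deviances, yet the null-versus-full difference is identical.

```r
## Toy data: one covariate with four distinct values (covariate patterns)
set.seed(1)
d <- data.frame(x = rep(0:3, each = 25))
d$y <- rbinom(100, 1, plogis(-1 + 0.8 * d$x))

## Individual-record (Bernoulli) fits
fit1 <- glm(y ~ x, family = binomial, data = d)
fit0 <- glm(y ~ 1, family = binomial, data = d)

## Grouped fits: one row per covariate pattern, successes/failures
grp <- aggregate(cbind(succ = y, fail = 1 - y) ~ x, data = d, FUN = sum)
gfit1 <- glm(cbind(succ, fail) ~ x, family = binomial, data = grp)
gfit0 <- glm(cbind(succ, fail) ~ 1, family = binomial, data = grp)

deviance(fit1) - deviance(gfit1)  # large: the deviances themselves disagree
(deviance(fit0) - deviance(fit1)) -
  (deviance(gfit0) - deviance(gfit1))  # ~0: the nested-model comparison agrees
```

The grouped and individual deviances differ by the (model-independent) log multinomial coefficients of the grouped data, which is exactly why they cancel in the nested comparison.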
#
Sören Vogel <sovo0815 <at> gmail.com> writes:
Since constants that are independent of the model
parameters can be left out of a log-likelihood calculation
without affecting inference among models, it is quite common
for likelihoods to be calculated differently in different
software packages (for example, the (n/2)*log(2*pi) constant in the
Gaussian log-likelihood) -- this is true even among different
computations within the R ecosystem.  What's important is the difference
among log-likelihoods between models, which is the same (up to
the numeric precision of what you've shown us) for both
software packages.
[1] 5.22
[1] 5.2245
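As a quick check on the numbers quoted in this thread: the two packages' absolute values differ by one additive constant, and that constant cancels in the model comparison (assuming the figures shown above).

```r
## Values from the thread: R deviances and SPSS -2 log-likelihoods
r_null <- 204.2904; r_mod <- 199.0659     # R (multinom)
s_null <- 135.02;   s_mod <- 129.80       # SPSS (NOMREG)

r_null - r_mod   # 5.2245
s_null - s_mod   # 5.22 -- same difference, to the precision SPSS prints
r_null - s_null  # ~69.27 -- the additive offset between the packages...
r_mod  - s_mod   # ~69.27 -- ...is the same for both models
```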

  Ben Bolker
#
Thanks for your replies. I am no mathematician or statistician by far;
however, it appears to me that the actual value of either of the two
LLs is indeed important when it comes to calculating
pseudo-R-squareds. If Rnagel divides by (some transformation of) the
actual value of llnull, then any calculation of Rnagel should differ.
How come? Or is my function wrong? And if my function is right, how
can I calculate an R-squared independent of the software used?

Rfits <- function(mod) {
  ## Note: deviance() returns -2*logLik, so despite the names these
  ## hold deviances, and the formulas below are on the deviance scale.
  llnull <- deviance(update(mod, . ~ 1, trace=FALSE))
  llmod <- deviance(mod)
  n <- length(predict(mod))
  Rcs <- 1 - exp( (llmod - llnull) / n )   # Cox & Snell
  Rnagel <- Rcs / (1 - exp(-llnull/n))     # Nagelkerke
  out <- list(
    "Rcs"=Rcs,
    "Rnagel"=Rnagel
  )
  class(out) <- c("list", "table")
  return(out)
}
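A direct way to see the problem (a sketch using the deviance-scale formulas from the function above, not code from the thread): shift both deviances by the same arbitrary constant, as another package in effect does, and observe which index changes.

```r
## Pseudo-R^2 from deviances (dev = -2 * logLik)
r2 <- function(dev_null, dev_mod, n) {
  Rcs <- 1 - exp((dev_mod - dev_null) / n)           # Cox & Snell
  c(Rcs = Rcs, Rnagel = Rcs / (1 - exp(-dev_null / n)))
}
n <- 143
r2(204.2904, 199.0659, n)                  # R's deviances
r2(204.2904 - 69.27, 199.0659 - 69.27, n)  # same constant subtracted from
                                           # both: Rcs unchanged, Rnagel not
```

Rcs depends only on the deviance difference, so it is invariant to such shifts; Rnagel's denominator uses the absolute null deviance, so it is not.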
#
On Jan 6, 2011, at 11:23 AM, Sören Vogel wrote:

You have two models in that function: the null one with ". ~ 1" and the
original one, and you are getting a ratio on the likelihood scale
(which is a difference on the log-likelihood or deviance scale).
#
On Thu, 6 Jan 2011, David Winsemius wrote:

If this is the case, calculating 'fit' indices for those models
must yield different fit indices depending on the software:

n <- 143
## SPSS -2 log-likelihoods
ll1 <- 135.02    # null model
ll2 <- 129.8     # fitted model
# Rcs
(Rcs <- 1 - exp( (ll2 - ll1) / n ))
# Rnagel
Rcs / (1 - exp(-ll1/n))
## R deviances
ll3 <- 204.2904  # null model
ll4 <- 199.0659  # fitted model
# Rcs
(Rcs <- 1 - exp( (ll4 - ll3) / n ))
# Rnagel
Rcs / (1 - exp(-ll3/n))

The Rcs values are equal; however, the Rnagel values are not. Of
course, this is not really a question, but I am rather confused. When
publishing results I am required to report fit indices, and editors
would complain that they differ.

Sören
#
On Jan 7, 2011, at 8:26 AM, sovo0815 at gmail.com wrote:

It is well known that editors are sometimes confused about statistics,
and if an editor insists on publishing indices that are in fact
arbitrary, then that is a problem. I would hope that the editor would
be open to education. (And often there is a statistical associate
editor who is more likely to have a solid grounding and to whom one
can appeal in cases of initial obstinacy.) Perhaps you will be doing
the world of science a favor by suggesting that said editor first
review a first-year calculus text regarding the fact that indefinite
integrals are only determined up to an arbitrary constant, and that
one can only use the results in a practical setting by specifying the
limits of integration. So it is with likelihoods. They are only
meaningful when comparing two nested models. Sometimes the software
obscures this fact, but it remains a statistical _fact_.
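For instance, the nested multinom fits from the start of the thread can be compared with a likelihood-ratio test; the test statistic is the deviance difference, so any per-package additive constant cancels (a sketch, assuming the nnet package and the simulated data above):

```r
library(nnet)

## Re-create the simulated data from the first message
set.seed(1234)
df <- data.frame(
  y = factor(sample(LETTERS[1:3], 143, replace = TRUE, prob = c(4, 1, 10))),
  a = sample(1:5, 143, replace = TRUE),
  b = sample(1:7, 143, replace = TRUE),
  c = sample(1:2, 143, replace = TRUE)
)
mod1 <- multinom(y ~ ., data = df, trace = FALSE)
mod0 <- multinom(y ~ 1, data = df, trace = FALSE)

## LR statistic = deviance(mod0) - deviance(mod1); the arbitrary
## constant in each deviance drops out of the difference.
anova(mod0, mod1)
```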

Whether your code is correct (and whether the Nagelkerke "R^2" remains
invariant with respect to such transformations) I cannot say. (I
suspect that it would be, but I have never liked the Nagelkerke R^2 as
a measure, and didn't really like R^2 as a measure in linear
regression, for that matter, either.) I focus on fitting functions to
trends, examining predictions, and assessing confidence intervals for
parameter estimates. The notion that model fit is well summarized in a
single number blinds one to other critical issues, such as the
linearity and monotonicity assumptions implicit in much regression
(mal-)practice.

So, if someone who is more enamored of (or even more knowledgeably
scornful of) the Nagelkerke R^2 measure wants to take over here, I
will read what they say with interest and appreciation.
David Winsemius, MD
West Hartford, CT
2 days later
#
Hello, thanks for all your replies, it was a helpful lesson for me
(and hopefully for my colleagues, too). Bests, Sören
On 11-01-07 11:23, David Winsemius wrote: