How can I estimate the deviance explained of a mixed GAMM?

3 messages · Jon Lopez, Ben Bolker, Michael Cone

#
Dear mixed modelers,

I have already asked about this issue but never received an answer, so
I will try again.

I have been modelling fish biomass as a function of environmental
parameters using mixed effect models (gamm4 package). I don't want to
bore you with the details of my models, since I believe they are not
relevant to the point of this message; however, please feel free to
ask about anything you think is important. I have some GAMM candidates
already. I am able to get AIC, BIC, R-sq, etc. for these models but,
unfortunately, I can't obtain the deviance explained from them.

I have found an interesting procedure for deriving it, published by
Gilman and colleagues in 2012. Here is the complete reference in case
any of you want to take a look at it:

"Gilman, E., Chaloupka, M., Read, A., Dalzell, P., Holetschek, J., Curtice,
C., 2012. Hawaii longline tuna fishery temporal trends in standardized
catch rates and length distributions and effects on pelagic and seamount
ecosystems. Aquatic Conservation: Marine and Freshwater Ecosystems 22(4),
446-488."

Nevertheless, the procedure explained in the paper above does not
provide us with the exact score. Thus, I have been considering other
options, such as using the deviance explained of an equivalent GAM
with the random effect as a spline term [s(x, bs="re")], but I don't
know how accurate that would be.
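
To make the second option concrete, here is the kind of comparison I
have in mind (just a sketch on simulated data; "site" stands in for my
real grouping factor):

library(mgcv)    # gam() and s(..., bs = "re")
library(gamm4)   # GAMMs fitted via lme4

set.seed(1)
dat <- gamSim(1, n = 200)                     # mgcv's built-in example data
dat$site <- factor(sample(1:10, 200, replace = TRUE))
dat$y <- dat$y + rnorm(10)[dat$site]          # add a site-level random shift

## the GAMM: random intercept for site handled by lme4
m1 <- gamm4(y ~ s(x0) + s(x1), random = ~ (1 | site), data = dat)

## the "equivalent" GAM: random intercept as a ridge-penalized smooth
m2 <- gam(y ~ s(x0) + s(x1) + s(site, bs = "re"), data = dat,
          method = "REML")

summary(m2)$dev.expl   # deviance explained, reported directly by mgcv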

Do you think either of these options can be used as an approximation
to the GAMM's deviance explained? What are your feelings on that?

Any suggestions would be appreciated.

Many thanks,

Jon Lopez

---------------------------------
PhD candidate
AZTI-Tecnalia, Spain
#
The problem with determining "accuracy" is that we don't really
know what you're trying to measure when you say you want to quantify
"deviance explained".  The variety of solutions for computing measures
of goodness of fit for GLMs (Nagelkerke, Cox and Snell, etc.), for
LMMs, and for GLMMs suggests that the problem is more one of defining
a sensible metric than of computing it.  So can you be more precise
about what you want?

  I don't know.  *If* the deviances returned by gamm4 and lme4
are comparable (I don't know whether they are), then presumably
you just compute them both?
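
For what it's worth, a gamm4 fit is a list with components $mer (the
underlying lme4 fit) and $gam (a reduced mgcv-style object), so the
obvious starting point is the lme4 side (m is a hypothetical fitted
model):

deviance(m$mer)    # deviance of the embedded lme4 fit
                   # (the REML criterion if fitted by REML;
                   #  deviance(refitML(m$mer)) gives the ML deviance)

Whether that number is comparable to the deviance mgcv reports for a
plain gam() fit is, again, the open question.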

For reference, the Gilman et al. paper says:

There is no accepted way to formally estimate model fit for GAMMs
(Wood, 2006; Zuur et al., 2009). We developed and implemented an
approach by fitting an equivalent GAM to derive the percentage
deviance explained (a measure of GAM goodness-of-fit: see Hastie and
Tibshirani, 1990), and to evaluate the importance of explicitly
accounting for trip- and set-specific heterogeneity (the random
effects attributable to the sampling design constraints) using a
GAMM. This method had the following steps:

(i) fit a GAM using the same data and fixed effect variables as used
in the GAMM and extract the deviance residuals;
(ii) fit a linear mixed effects model to the residuals using a
constant parameter only model with both trip and set as the random
effects;
(iii) fit a linear fixed effects model to the residuals using a
constant parameter only model; and
(iv) compare the fit of the two linear models using Akaike Information
Criterion (AIC) and a log-likelihood ratio test (Wood, 2006).

A smaller comparative AIC value indicates a relatively better fitting
model, and the formal log-likelihood ratio test determines if the
difference in deviance between the GAMM (linear mixed effects
regression) and GAM (linear regression) models was significant. Hence,
using both AIC as a guide and the log-likelihood ratio test as a
formal test we determined whether inclusion of random effects was
necessary. If the inclusion of the random effects was found to be
necessary, then we expect the GAMM would account for more of the
deviance than the equivalent GAM.
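
In code, that procedure would look something like this (a sketch only,
assuming a data frame dat with response y, smooth covariates x0 and
x1, and grouping factors trip and set; these are placeholder names):

library(mgcv)
library(lme4)

## (i) GAM with the fixed-effect terms only; extract deviance residuals
g <- gam(y ~ s(x0) + s(x1), data = dat)
dat$r <- residuals(g, type = "deviance")

## (ii) intercept-only mixed model for the residuals (ML, so the LRT is valid)
m_mix <- lmer(r ~ 1 + (1 | trip) + (1 | set), data = dat, REML = FALSE)

## (iii) intercept-only fixed-effects model
m_fix <- lm(r ~ 1, data = dat)

## (iv) compare by AIC and a likelihood-ratio test
AIC(m_fix, m_mix)
lrt <- 2 * (as.numeric(logLik(m_mix)) - as.numeric(logLik(m_fix)))
pchisq(lrt, df = 2, lower.tail = FALSE)
## caveat: the variances are tested on their boundary, so this p-value
## is conservative
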
1 day later
#
Hello Jon,

If I understand you correctly, you are looking for a metric like R^2:
"variation in the outcomes accounted for by the model". I don't have
anything insightful to offer myself, but maybe this, by Douglas Bates,
is relevant:
http://marc.info/?l=r-sig-mixed-models&m=126719474831488&w=2

I quote:

"Assuming that one wants to define an R^2 measure, I think an argument
could be made for treating the penalized residual sum of squares from
a linear mixed model in the same way that we consider the residual sum
of squares from a linear model.  Or one could use just the residual
sum of squares without the penalty or the minimum residual sum of
squares obtainable from a given set of terms, which corresponds to an
infinite precision matrix.  I don't know, really.  It depends on what
you are trying to characterize.

In other words, what's the purpose?  What aspect of the R^2 for a
linear model are you trying to generalize?

I'm sorry if I sound argumentative but discussions like this sometimes
frustrate me.  A linear mixed model does not behave exactly like a
linear model without random effects so a measure that may be
appropriate for the linear model does not necessarily generalize.  I'm
not saying that this is the case but if the request is "I don't care
what the number means or if indeed it means anything at all, just give
me a number I can report", that's not the style of statistics I
practice.

I regard Bill Venables' wonderful unpublished paper "Exegeses on
Linear Models" (just put the name in a search engine to find a copy -
there is only one paper with "Exegeses" and "Linear Models" in the
title) as required reading for statisticians.  As Bill emphasizes in
that paper, statistics is not just a collection of formulas (many of
which are based on approximations).  It's about models and comparing
how well different models fit the observed data.  If we start with a
formula and only ask ourselves "How do we generalize this formula?"
we're missing the point.  We should start at the model.

In a linear model the R^2 statistic is a dimensionless comparison of
the quality of the current model fit, as measured by the residual sum
of squares, to the fit one would obtain from a trivial model.  When
the current model can be shown to contain a model with an intercept
term only (and whose coefficient will be estimated by the mean
response) then that model fit is the trivial model.  Otherwise the
trivial model is a prediction of zero for each response.  We know that
the trivial model will produce a greater residual sum of squares than
the current model fit because the models are nested.  The R^2 is the
proportion of variability not accounted for by the trivial model but
accounted for by the current model (my apologies to my grammar
teachers for having juxtaposed prepositions).

The interesting point there is that when you think of the
relationships between models you can determine how you handle the case
of a model that does not have an intercept term.  If you start from
the formula instead you can end up calculating a negative R^2 because
you compare models that are not nested.  Such nonsensical results are
often reported.  (I think it was the Mathematica documentation that
gave a careful explanation of why you get a negative R^2 instead of
recognizing that the formula they were using did not apply in certain
cases.)

It may be that there is a sensible measure of the quality of fit from
a linear mixed model that generalizes the R^2 from a linear model.  I
don't see an obvious candidate but I will freely admit that I haven't
thought much about the problem.  I would ask others who are thinking
about this to consider both the "what" and the "why".  George
Mallory's justification of "because it's there" for attempting to
climb Everest is perhaps a good justification for such endeavors
(Mallory may have questioned his rationale as he lay freezing to death
on the mountain).  I don't think it is a good justification for
manipulating formulas."
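
For concreteness, the "residual sum of squares without the penalty"
option Bates mentions would look something like this for an lme4 fit
(a sketch only; fm is a hypothetical fitted lmer model):

library(lme4)

y   <- getME(fm, "y")          # response vector
rss <- sum(residuals(fm)^2)    # conditional residuals (random effects
                               # plugged in), unpenalized
tss <- sum((y - mean(y))^2)    # residual sum of squares of the trivial,
                               # intercept-only model
1 - rss / tss                  # an R^2-style ratio; what it measures is
                               # exactly the question Bates raises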

Best regards,
Michael