LRT between GLMM and GLM to test a Single Random Intercept
2 messages · Juho Kristian Ruohonen, Thierry Onkelinx

My readers are likely to want to see a p-value on the only random effect (an intercept) in my logistic GLMM. Supposedly, if I fit the model using the Laplace approximation, the likelihood is comparable with that of the fixed-effects model, so the p-value from an LRT (halved, because the variance is tested on the boundary of its parameter space) can be used. But I don't trust the Laplace approximation much; I'd rather use at least 10 adaptive quadrature points for improved accuracy. This also yields a more flattering (smaller) random-effect variance, and hence a lower reported intraclass correlation. But with more than one quadrature point I can no longer report a p-value on the random effect, because *anova()* refuses to compare the models, citing incomparable likelihood functions. I thought of calculating the GLMM's log-likelihood manually with *dbinom()*, the data, and the fitted values, but this thread <https://stats.stackexchange.com/questions/381085/calculating-log-likelihood-of-logistic-adaptive-quadrature-glmm-for-comparison-w> says I can't use the binomial PMF for that. Is there a way I can have my cake (many quadrature points) and eat it too (get a p-value for the random effect)? The parametric bootstrap procedure sounds neat, but I'd still run into the same problem: the LRT calculated at each iteration compares a fixed and a mixed model, so the likelihoods cannot be compared.
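[Editorial sketch, not from the thread: the parametric-bootstrap LRT the question mentions could look roughly like the code below. All data and object names are invented for illustration; it uses lme4's glmer() with Laplace fits (nAGQ = 1) for the test statistic — precisely so the fixed- and mixed-model likelihoods are on the same footing — while nothing stops one from reporting the estimates from a separate high-nAGQ fit. It also computes the latent-scale intraclass correlation the question refers to, using pi^2/3 as the logistic residual variance.]

```r
## Hedged sketch of a parametric-bootstrap LRT for a single random intercept.
## Everything below (simulated data, object names) is illustrative only.
library(lme4)

set.seed(1)
n_grp <- 20; n_per <- 10; n <- n_grp * n_per
dat <- data.frame(g = factor(rep(seq_len(n_grp), each = n_per)),
                  x = rnorm(n))
b <- rnorm(n_grp, sd = 0.8)                      # true random intercepts
dat$y <- rbinom(n, 1, plogis(-0.5 + 0.7 * dat$x + b[dat$g]))

## Null model (no random effect) vs. alternative (random intercept, Laplace).
m0 <- glm(y ~ x, family = binomial, data = dat)
m1 <- glmer(y ~ x + (1 | g), family = binomial, data = dat, nAGQ = 1)
obs <- 2 * as.numeric(logLik(m1) - logLik(m0))   # observed LRT statistic

## Parametric bootstrap: simulate responses from the null fit, refit both
## models, and build the reference distribution of the LRT empirically.
nboot <- 100
stat <- numeric(nboot)
for (i in seq_len(nboot)) {
  dat$ysim <- simulate(m0)[[1]]
  f0 <- glm(ysim ~ x, family = binomial, data = dat)
  f1 <- suppressWarnings(glmer(ysim ~ x + (1 | g),
                               family = binomial, data = dat, nAGQ = 1))
  stat[i] <- 2 * as.numeric(logLik(f1) - logLik(f0))
}
pval <- mean(stat >= obs)                        # bootstrap p-value

## Random-intercept variance and latent-scale ICC for a logistic GLMM.
sigma2 <- as.data.frame(VarCorr(m1))$vcov[1]
icc <- sigma2 / (sigma2 + pi^2 / 3)
```

With so many refits this is slow; in practice one would use far more than 100 bootstrap replicates, and the empirical reference distribution makes the boundary-related halving of the chi-squared p-value unnecessary.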
Dear Juho,

I'd take a step back and think about why you add the random intercept. Is it clearly a part of the design? E.g. it takes repeated measures into account. Then you need the term in the model, whatever the p-value. The variance of the random effect indicates its importance.

Best regards,

ir. Thierry Onkelinx
Statisticus / Statistician
Vlaamse Overheid / Government of Flanders
INSTITUUT VOOR NATUUR- EN BOSONDERZOEK / RESEARCH INSTITUTE FOR NATURE AND FOREST
Team Biometrie & Kwaliteitszorg / Team Biometrics & Quality Assurance
thierry.onkelinx at inbo.be
Havenlaan 88 bus 73, 1000 Brussel
www.inbo.be <https://www.inbo.be>

To call in the statistician after the experiment is done may be no more than asking him to perform a post-mortem examination: he may be able to say what the experiment died of. ~ Sir Ronald Aylmer Fisher
The plural of anecdote is not data. ~ Roger Brinner
The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data. ~ John Tukey

On Wed 12 Dec 2018 at 06:37, Juho Kristian Ruohonen <juho.kristian.ruohonen at gmail.com> wrote:
> [original question quoted in full; text as above]
_______________________________________________ R-sig-mixed-models at r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models