Dear Jackie,
127.0.0.1 points to localhost, which will work only on your own computer.
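For others on the list to run your example, the file needs to be hosted somewhere publicly reachable; a minimal sketch with a placeholder URL (hypothetical):

## read the example data from a publicly reachable URL (placeholder shown)
example_data <- read.csv("https://example.com/example_data.csv",
                         header = TRUE)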
Best regards,
ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature
and Forest
team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance
Kliniekstraat 25
1070 Anderlecht
Belgium
To call in the statistician after the experiment is done may be no
more than asking him to perform a post-mortem examination: he may be
able to say what the experiment died of. ~ Sir Ronald Aylmer Fisher
The plural of anecdote is not data. ~ Roger Brinner
The combination of some data and an aching desire for an answer does
not ensure that a reasonable answer can be extracted from a given body
of data. ~ John Tukey
2016-03-16 20:56 GMT+01:00 Jackie Wood <jackiewood7 at gmail.com>:
Thanks, Jacquelyn and Ben. Jacquelyn, did you mean to attach some code or
just reference the site that Ben did? I had seen Ben's comments on
StackOverflow about potential false convergence messages, so I'll dig a
little deeper. I just wanted to make sure it wasn't something obvious that I
was missing.
From what I've read online, glmmPQL is inappropriate with Bernoulli trials.
Is that correct?
Chris
On Wed, Mar 16, 2016 at 2:35 PM, Ben Bolker <bbolker at gmail.com> wrote:
Good question.
I'm afraid that for data sets of ~ 100,000 observations or bigger, our
convergence checks aren't terribly reliable -- see e.g. the discussion in
?convergence. In the meantime, follow Jackie's advice ...
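One thing worth doing is recomputing the gradient at the fitted parameters
with a more accurate numerical method; a minimal sketch, assuming the fitted
model is called main_effects and the numDeriv package is installed:

library(numDeriv)
## rebuild the deviance function for the already-fitted model
devfun <- update(main_effects, devFunOnly = TRUE)
## fitted parameters: random-effects theta followed by the fixed effects
pars <- unlist(getME(main_effects, c("theta", "fixef")))
## a more accurate max|grad| to compare against the one in the warning
max(abs(grad(devfun, pars)))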
On 16-03-16 02:24 PM, Jackie Wood wrote:
Hi Chris,
Try checking ?convergence ... coincidentally, I was having a similar problem
just yesterday. There are some step-by-step instructions there for
troubleshooting/double-checking convergence. For example, a bit of example
code is provided to run your model using a number of different optimizers.
If all optimizers yield similar values, it's possible that you are getting
false convergence warnings. I'm not sure if that's the case with your data,
but it might be a place to start!
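For instance, something along these lines (a sketch; allFit() ships with
recent versions of lme4, and similar code appears in the ?convergence
examples):

library(lme4)
## refit the same model with every available optimizer
fits <- allFit(main_effects)
ss <- summary(fits)
ss$fixef  # fixed-effect estimates per optimizer
ss$llik   # log-likelihoods; close agreement suggests a false positive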
Jacquelyn
On Wed, Mar 16, 2016 at 1:56 PM, Christopher David Desjardins <
cddesjardins at gmail.com> wrote:
I am trying to fit a mixed effects binomial model.
The data consist of:
- A dependent variable consisting of Bernoulli trials (outcome)
- A time variable (time), which has been mean-centered
- An id variable (id)
- A categorical covariate (cat_cov)
- A blocking variable (block), which id is nested in. I realize that in the
model below this should be (1 | block/id) (see the sketch after this list),
but I am just trying to troubleshoot my problem at the moment.
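For completeness, the fully nested version I eventually want would look
something like:

main_effects_nested <- glmer(outcome ~ 1 + cat_cov + time + I(time^2) +
                               (1 | block/id),
                             data = example_data, family = "binomial")

but the simpler model below already triggers the warnings.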
When I run the following:
example_data <- read.csv("
header = T)
example_data$cat_cov <- as.factor(example_data$cat_cov)
example_data$id <- as.factor(example_data$id)
example_data$block <- as.factor(example_data$block)
main_effects <- glmer(outcome ~ 1 + cat_cov + time + I(time^2) + (1 | id),
                      data = example_data, family = "binomial")
That last line of code gives these warning messages:
main_effects <- glmer(outcome ~ 1 + cat_cov + time + I(time^2) + (1 | id),
                      data = example_data, family = "binomial")
Warning messages:
1: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
  Model failed to converge with max|grad| = 4.36001 (tol = 0.001, component 1)
2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
  Model is nearly unidentifiable: very large eigenvalue
 - Rescale variables?
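Does the second warning just mean I should rescale the continuous predictor?
E.g., something like this (untested; assuming time is the wide-ranging
variable):

## standardize time (it is already mean-centered, so this mainly divides by the SD)
example_data$time_s <- as.numeric(scale(example_data$time))
main_effects_s <- glmer(outcome ~ 1 + cat_cov + time_s + I(time_s^2) + (1 | id),
                        data = example_data, family = "binomial")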
I am not exactly sure how to proceed beyond that. I know the issue is with
cat_cov, though it's unclear to me why. If I swap in a different covariate,
not included in that data set, I don't get the message. I am not running
into complete separation with cat_cov (the cross-tab check is at the end of
this message), and I'm a little perplexed.
Any advice on what I should do or something I could look at would be
appreciated.
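For reference, the separation check I ran was along these lines (complete
separation would show up as a zero cell in the cross-tab):

## cross-tabulate the outcome against the categorical covariate
with(example_data, table(cat_cov, outcome))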