
Questions on Penalization and Prior Variance in bglmer for data with Complete Separation

Is it based on maximum likelihood, or does it use Markov-Chain Monte Carlo
(MCMC) methods?
From the package description: "Maximum a posteriori estimation for linear
and generalized linear mixed-effects models in a Bayesian setting,
implementing the methods of Chung, et al. (2013)
<doi:10.1007/s11336-013-9328-2>".

So it applies the priors you specify and then maximizes the posterior.
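For concreteness, a call along these lines is what that looks like in
practice. The model formula and data names here are hypothetical, and the
`fixef.prior = normal(...)` form follows my reading of the blme
documentation:

```r
library(blme)

# Hypothetical model with complete separation: bglmer applies the
# specified priors and maximizes the resulting posterior (MAP),
# rather than sampling from it with MCMC.
m1 <- bglmer(
  Response ~ Treatment + (1 | Ind),
  data   = Data,
  family = binomial,
  fixef.prior = normal(sd = c(10, 2.5))  # weakly informative normal prior
)
summary(m1)
```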
So is it essentially a frequentist approach?

Sort of? If your sample size is large, then the contributions of the
priors vanish and the algorithm will converge to the same place as maximum
likelihood (unless that fails to converge because the parameters are truly
at the boundary of the space). However, if you already have a well-defined
problem where the maximum likelihood estimate exists and is stable, you're
probably not going to be using blme. In that case, there really isn't a
coherent interpretation to the standard errors blme returns beyond a
measure of the curvature at the posterior mode, which corresponds to a
normal approximation. That generally isn't considered to be very
meaningful, because the posterior can be far from normal. For models where
lmer fails to converge, blme is intended to allow the researcher to rapidly
prototype solutions before fitting in an MCMC Bayesian setting where
posterior credible intervals can be obtained.
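The large-sample point can be seen in a toy example that has nothing to do
with blme itself: maximize a one-parameter logistic log-likelihood plus a
normal log-prior by hand, and compare the MAP estimate to the MLE from glm
as n grows. All names here are illustrative:

```r
set.seed(1)

# Log-posterior for a one-parameter logistic model: log-likelihood
# plus a normal(0, sd = 2.5) log-prior on the slope.
log_post <- function(beta, x, y, prior_sd = 2.5) {
  eta <- beta * x
  sum(y * eta - log1p(exp(eta))) + dnorm(beta, 0, prior_sd, log = TRUE)
}

map_est <- function(n) {
  x <- rnorm(n)
  y <- rbinom(n, 1, plogis(1.5 * x))
  mle <- coef(glm(y ~ x - 1, family = binomial))
  map <- optimize(log_post, c(-10, 10), x = x, y = y,
                  maximum = TRUE)$maximum
  c(mle = unname(mle), map = map)
}

map_est(50)    # at small n the prior pulls the estimate toward zero
map_est(5000)  # at large n the MAP and the MLE nearly coincide
```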

How should I choose the prior variance value?

I suppose that depends on your goal. If you're just trying to
penalize/regularize a model, then using a weakly informative prior based on
the built-in scale of the logistic function is sufficient (the default). If
you actually have substantive information you want to incorporate, then
treat your posterior uncertainty from that as your prior uncertainty in a
new model. However, in my mind that would largely apply if there is a
meta-analysis you can leverage or if the data you are analyzing are part of
a larger dataset that has been partially analyzed.
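In code, that choice just shows up as the scale of the prior (hypothetical
model; the argument form again follows my reading of the blme docs):

```r
library(blme)

# Weakly informative penalization on the logistic scale
# (hypothetical model and data):
m_weak <- bglmer(Response ~ Treatment + (1 | Ind),
                 data = Data, family = binomial,
                 fixef.prior = normal(sd = c(10, 2.5)))

# If a meta-analysis or a previous analysis of the larger dataset
# suggested the coefficients are small, a tighter prior sd carries
# that substantive information into the fit:
m_tight <- bglmer(Response ~ Treatment + (1 | Ind),
                  data = Data, family = binomial,
                  fixef.prior = normal(sd = c(2.5, 1)))
```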

In your data, Ind contains no repeated values, so the random effects can
produce perfect prediction, e.g.:

random_intercepts <- ranef(m1)$Ind
cor(
  random_intercepts[match(Data$Ind,
                          as.integer(row.names(random_intercepts))), ],
  Data$Response
)
## [1] 0.9999599

You probably want to fit a glm instead for this example, which I believe
gives you the results you are expecting to see.
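That fit would look something like the following (the fixed-effects
formula is hypothetical, since I don't know your full model):

```r
# With only one observation per level of Ind, drop the random
# intercept and fit an ordinary logistic regression:
m2 <- glm(Response ~ Treatment, family = binomial, data = Data)
summary(m2)
```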

Cheers,
Vince

On Thu, Feb 6, 2025 at 8:50 PM Tomás Manuel Chialina <tmanuelch at gmail.com>
wrote:
