Drop the correlation between random effects to find those with small variance
I think this has been answered implicitly in some of the answers to your other questions, but the bottom line is this: the issue is the number of parameters relative to the amount of information in the data. By setting the correlations to zero, you greatly reduce the number of parameters to be estimated, which leaves more information for estimating the remaining parameters. This does affect shrinkage, but the resulting fits are typically more stable (less variance across fits) than those of an overparameterized model (which has less bias, because no parameter is forced toward a particular value). In other words, it's an example of the bias-variance tradeoff: it's often better to reduce model complexity, even at the cost of some real-world fidelity, in order to avoid overfitting.

Phillip
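To make the parameter count concrete, here is a quick sketch (not from the thread) assuming A, B, and C each contribute a single column, so `(A * B * C | group)` has 8 random-effect terms: the intercept, three main effects, three two-way interactions, and the three-way interaction.

```r
# Number of random-effect terms in (A * B * C | group), assuming A, B, C
# are numeric or two-level predictors.
k <- 8

# m0: full covariance matrix -> k variances plus choose(k, 2) correlations
params_m0 <- k + choose(k, 2)   # 8 + 28 = 36

# m1: zero-correlation (||) model -> only the k variances
params_m1 <- k                  # 8

c(m0 = params_m0, m1 = params_m1)
```

So `m1` estimates 8 covariance parameters instead of 36, which is where the extra information for pinning down the individual variances comes from.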
On 30/9/20 9:57 pm, Simon Harmel wrote:
Dear All,

Bates et al. (2015) <https://arxiv.org/pdf/1506.04967.pdf> mention that to identify a mixed model with a singular variance-covariance matrix we can "fit a zero correlation parameter" model, which will identify random effects with zero, or very small, variance. That is, going from `m0` to `m1` (see below). BUT how can dropping all correlations between slopes and intercepts lead to identifying random effects with zero, or very small, variance?

library(lme4)
dat <- read.csv('https://raw.githubusercontent.com/WRobertLong/Stackexchange/master/data/singular.csv')
m0 <- lmer(y ~ A * B * C + (A * B * C | group), data = dat)
m1 <- lmer(y ~ A * B * C + (A * B * C || group), data = dat)
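Once `m1` is fitted, the near-zero variances can be read off directly. A hedged sketch (the `VarCorr()` and `rePCA()` calls are the lme4 tools used in Bates et al. 2015; the data URL is the one from the question above):

```r
library(lme4)

# Data from the question above (requires network access)
dat <- read.csv("https://raw.githubusercontent.com/WRobertLong/Stackexchange/master/data/singular.csv")

# Zero-correlation model: each random effect gets its own variance,
# no correlation parameters
m1 <- lmer(y ~ A * B * C + (A * B * C || group), data = dat)

# Variance estimates per term; terms with (near-)zero standard deviation
# are the candidates for removal
VarCorr(m1)

# PCA of the random-effects covariance (Bates et al. 2015): components
# whose proportion of variance is ~0 indicate overparameterization
summary(rePCA(m1))
```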
_______________________________________________ R-sig-mixed-models at r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models