Message-ID: <DM5PR1901MB20077F8E7E040F1E594CF722EAEA0@DM5PR1901MB2007.namprd19.prod.outlook.com>
Date: 2020-02-28T07:18:51Z
From: Ades, James
Subject: Most principled reporting of mixed-effect model regression coefficients
In-Reply-To: <001a01d5ecbc$63c884b0$2b598e10$@uke.de>
Thanks, Daniel and Paul.
Yes, I did read that the conditional R^2 is higher. From the N&S article, it seems that it represents the variance explained by both the fixed and random effects. Still, depending on the outcome measure, a good deal of variance would seemingly remain unaccounted for even after the random effects are considered.
Thanks for the synopses of the different packages. I'll try them out and see whether they yield similar answers. It's also helpful to know the ways in which they differ, for both current and future use.
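For concreteness, here is a minimal sketch of that cross-package check, using lme4's built-in sleepstudy data as a stand-in (the formula is a placeholder, not my actual analysis):

    library(lme4)
    library(MuMIn)
    library(performance)

    # placeholder model; substitute the real formula and data
    fm <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

    # MuMIn: marginal R^2 (fixed effects only) and conditional R^2
    # (fixed + random effects), following Nakagawa & Schielzeth
    r.squaredGLMM(fm)

    # performance: the same quantities, computed via r2_nakagawa()
    r2(fm)

If the two packages agree on R2m and R2c for the same fitted model, that is at least some reassurance that the estimates are not implementation artifacts.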
Regarding your last comment, I think AIC does a good job of model selection and parameter penalization (which is important for my purposes). However, when it comes to comparing and communicating the differences between models built on different performance measures, AIC can really only say that one model is better than another, not how much better. This matters especially when you have to weigh the pros and cons of different performance measures and metrics. For instance, some recent research demonstrates a lack of test-retest reliability in executive function tasks. If an aggregate score is more reliable but explains less variance, it's hard to say which metric is better, which is where it would be nice to quantify the difference in predictive capability between the models.
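On quantifying "how much better" within a single outcome measure, one standard option is Akaike weights (Burnham & Anderson), which convert AIC differences into relative support for each candidate model. A minimal sketch, again on placeholder models fit to sleepstudy:

    library(lme4)

    # placeholder candidate models; REML = FALSE so the likelihoods,
    # and hence the AICs, are directly comparable
    m1 <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy, REML = FALSE)
    m2 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy, REML = FALSE)

    # Akaike weights: w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2)
    aics  <- AIC(m1, m2)$AIC
    delta <- aics - min(aics)
    w     <- exp(-delta / 2) / sum(exp(-delta / 2))
    data.frame(model = c("m1", "m2"), AIC = aics, delta = delta, weight = w)

A weight near 1 says the data overwhelmingly favor that model among the candidates, which is easier to communicate than a raw AIC difference. It does not, of course, solve the harder problem of comparing models fit to different outcome metrics.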