Global effect sizes are always dodgy for measuring strength, because they rely on assumptions about pooled variance.
At the end of the day, investigators want to know about specific binary comparisons.
These may be a priori, if investigators already have a prediction, or post hoc, if they follow from inspecting the obtained means.
An alternative, and IMHO better, approach is to use global fit measures to choose the best model among all possible patterns of equalities after ordering the means, and then make only the relevant pairwise comparisons.
A simple example with repeated measures only: for a single repeated factor R with three levels, order the obtained means r1 >= r2 >= r3 and consider the models
M1: r1 > r2 > r3
M2: r1 > r2 = r3
M3: r1 = r2 > r3
M4: r1 = r2 = r3
See which model is best using a global fit measure. WAIC is preferred, but AIC or BIC will almost certainly give the same ordering of models. Only consider effect sizes for the pairwise contrasts that are relevant in the best model; the others are irrelevant.
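For concreteness, here is a minimal sketch of that model comparison (my illustration, with made-up data and names, not code from any of the papers). For brevity it fits the four mean structures with lm() on independent groups; for genuinely repeated measures one would fit the same mean structures with a mixed model, e.g. lme4::lmer(y ~ cond + (1 | subj)), and compare in the same way.

```r
# Sketch: fit M1-M4 by collapsing factor levels, then compare by AIC.
set.seed(1)
d <- data.frame(
  cond = factor(rep(c("r1", "r2", "r3"), each = 20)),  # ordered by obtained means
  y    = c(rnorm(20, 10), rnorm(20, 8), rnorm(20, 8))
)

# Merge a set of factor levels into one level (imposes an equality constraint).
collapse <- function(f, merge, to) {
  levels(f)[levels(f) %in% merge] <- to
  f
}
d$m2 <- collapse(d$cond, c("r2", "r3"), "r23")  # M2: r1 > r2 = r3
d$m3 <- collapse(d$cond, c("r1", "r2"), "r12")  # M3: r1 = r2 > r3

fits <- list(
  M1 = lm(y ~ cond, data = d),  # all three means distinct
  M2 = lm(y ~ m2,   data = d),
  M3 = lm(y ~ m3,   data = d),
  M4 = lm(y ~ 1,    data = d)   # r1 = r2 = r3
)
sapply(fits, AIC)  # choose the model with the smallest AIC
```

Only the contrasts that the winning model keeps distinct then get pairwise effect sizes.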
Recalculate post hoc, and only the SD relevant to that pair will be used. Do NOT use contrasts from the factorial analysis, as that would bring in all the hairy pooled-variance assumptions. As it is a pairwise repeated comparison, effectively a single difference-score SD is relevant. So no pooling assumptions.
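As a sketch (illustrative vectors, not real data), the pairwise repeated comparison reduces to a one-sample test on the difference scores, so only that pair's SD enters the test and the effect size:

```r
# Sketch: a pairwise repeated comparison uses only the SD of the difference
# scores for that pair -- no variance from other levels is pooled in.
set.seed(2)
r1 <- rnorm(30, mean = 10)           # one score per subject at level r1
r2 <- r1 - 1 + rnorm(30, sd = 0.5)   # correlated scores at level r2

d12 <- r1 - r2
t.test(d12)                          # same t as t.test(r1, r2, paired = TRUE)
dz <- mean(d12) / sd(d12)            # effect size based only on this pair's SD
```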
If it is a single between-subjects factor, then I would recommend a heterogeneous-variance pairwise test.
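In R, t.test() already gives such a test for any single pair, since its default var.equal = FALSE is Welch's heterogeneous-variance test (illustrative data below):

```r
# Sketch: compare one pair of independent groups without pooling variance
# from any third group. t.test() defaults to var.equal = FALSE (Welch).
set.seed(3)
dat <- data.frame(
  group = rep(c("UK", "US", "Australia"), each = 25),
  y     = c(rnorm(25, 5, 1), rnorm(25, 5, 3), rnorm(25, 6, 1))
)
pair <- subset(dat, group %in% c("UK", "Australia"))
res  <- t.test(y ~ group, data = pair)   # the US group's SD plays no role
res
```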
It is more complicated if there are both between and repeated factors. Consider the ordering
B1r1 > B2r2 > B2r1 > B1r2, giving models
M1: B1r1 > B2r2 (between comparison)
    B2r2 > B2r1 (repeated comparison)
    B2r1 > B1r2 (between comparison)
M2: B1r1 = B2r2 (no comparison)
    B2r2 > B2r1 (repeated comparison)
    B2r1 > B1r2 (between comparison)
Etc.
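In general, with four ordered cell means there are 2^3 = 8 such models, one for each choice of ">" or "=" between adjacent means. A throwaway sketch (my own, just to enumerate the candidate set):

```r
# Sketch: enumerate all equality patterns over the ordering
# B1r1 >= B2r2 >= B2r1 >= B1r2 (2^3 = 8 candidate models).
patterns <- expand.grid(s1 = c(">", "="), s2 = c(">", "="), s3 = c(">", "="),
                        stringsAsFactors = FALSE)
models <- apply(patterns, 1, function(p)
  paste("B1r1", p[["s1"]], "B2r2", p[["s2"]], "B2r1", p[["s3"]], "B1r2"))
models
```

Each of these would then be fitted and compared by global fit, exactly as in the repeated-measures-only case.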
Keselman, H. J., Cribbie, R. A., & Holland, B. (2004). Pairwise multiple comparison test procedures: An update for clinical child and adolescent psychologists. Journal of Clinical Child and Adolescent Psychology, 33(3), 623-645. https://doi.org/10.1207/s15374424jccp3303_19
Cribbie, R. A., & Keselman, H. J. (2003). The effects of nonnormality on parametric, nonparametric, and model comparison approaches to pairwise comparisons. Educational and Psychological Measurement, 63(4), 615-635. https://doi.org/10.1177/0013164403251283
Cribbie, R. A., & Keselman, H. J. (2003). Pairwise multiple comparisons: A model comparison approach versus stepwise procedures. British Journal of Mathematical and Statistical Psychology, 56(1), 167-182. https://doi.org/10.1348/000711003321645412
In other words, if a design compares the UK, US, and Australia, why should the SD for the US affect the difference between the UK and Australia?
If a design gives coffee, orange juice, and whisky to the same people (appropriately counterbalanced), why should the SD for whisky be involved in the comparison of orange juice and coffee?
The work of Keselman and Cribbie is not getting the attention it deserves, IMHO.
best
Diana
On 25 Sep 2020, at 08:18, r-sig-mixed-models-request at r-project.org wrote:
Send R-sig-mixed-models mailing list submissions to
	r-sig-mixed-models at r-project.org
To subscribe or unsubscribe via the World Wide Web, visit
	https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
or, via email, send a message with subject or body 'help' to
	r-sig-mixed-models-request at r-project.org
You can reach the person managing the list at
	r-sig-mixed-models-owner at r-project.org
When replying, please edit your Subject line so it is more specific than "Re: Contents of R-sig-mixed-models digest..."

Today's Topics:

   1. Re: Calculating effect sizes of fixed effects in lmer (Daniel Lüdecke)
   2. Meaning of Corr of random-effects with a cross-level interaction (Simon Harmel)
   3. mixed model with recapture data (Leandro Rabello Monteiro)
   4. Re: mixed model with recapture data (Thierry Onkelinx)

----------------------------------------------------------------------

Message: 1
Date: Thu, 24 Sep 2020 17:30:30 +0200
From: Daniel Lüdecke <d.luedecke at uke.de>
To: 'FAIRS Amie' <amie.FAIRS at univ-amu.fr>, 'James Pustejovsky' <jepusto at gmail.com>
Cc: <r-sig-mixed-models at r-project.org>
Subject: Re: [R-sig-ME] Calculating effect sizes of fixed effects in lmer

Dear Amie,

as an additional comment to what has been said so far, I'd like to point to this forum post, which describes why it is difficult to get effect sizes like eta squared etc. from mixed models:
https://afex.singmann.science/forums/topic/compute-effect-sizes-for-mixed-objects#post-295

Standardized coefficients are one possibility to report some kind of "effect size". The most accurate way would be standardizing the data before fitting the model (in particular when interaction terms are involved).
Although I agree that having the "raw", unstandardized coefficients may provide a more intuitive interpretation, standardizing is sometimes even required just due to problems when fitting the model (like convergence issues).

Beyond that, you can - always keeping the caveats (especially) for mixed models in mind! - compute effect sizes like eta squared etc., and standardized coefficients with different methods of standardizing (post hoc as described by Wolfgang, or "refitting" the model on a standardized version of the data), with the "effectsize" package:
https://cran.r-project.org/package=effectsize
There is also a dedicated webpage: https://easystats.github.io/effectsize/

Furthermore, the package just recently implemented a function for "pseudo-standardization" of parameters in mixed models. This approach addresses the issue raised by Wolfgang that mixed models have different sources of variability, and thus sd(y) would not properly account for this.

Hope this helps.
Best wishes
Daniel

-----Original Message-----
From: R-sig-mixed-models <r-sig-mixed-models-bounces at r-project.org> On behalf of FAIRS Amie
Sent: Thursday, 24 September 2020 17:01
To: James Pustejovsky <jepusto at gmail.com>
Cc: r-sig-mixed-models at r-project.org
Subject: Re: [R-sig-ME] Calculating effect sizes of fixed effects in lmer

Dear James,

Thank you so much! I'll check out all the references and your R package.

Best,
Amie
------------------
Dr. Amie Fairs
Post-doctorant, Aix-Marseille Université
Laboratoire Parole et Langage (LPL) | CNRS UMR 7309 | 5 Avenue Pasteur | 13100 Aix-en-Provence
Email: amie.fairs at univ-amu.fr
While I may send this email outside of typical working hours, I have no expectation to receive an email outside of your typical hours.
From: James Pustejovsky <jepusto at gmail.com>
Sent: 24 September 2020 16:58
To: FAIRS Amie <amie.FAIRS at univ-amu.fr>
Cc: Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl>; r-sig-mixed-models at r-project.org
Subject: Re: [R-sig-ME] Calculating effect sizes of fixed effects in lmer

Hi Amie,

I agree very much with Wolfgang's perspective that one would ideally use outcomes such that unstandardized effects can be interpreted directly. If one does have to fall back on standardized effect sizes, there's a further question of what metric to use. Researchers often jump immediately to standardized mean differences, but there are certainly other possibilities, such as log response ratios for outcomes that are measured on ratio scales.

All that said, there has been a fair amount of work on standardized mean difference effect sizes for certain types of research designs that would usually be analyzed with multi-level models. A sampling (including some of my own):

* Hedges, L. V. (2007). Effect sizes in cluster-randomized designs. Journal of Educational and Behavioral Statistics, 32(4), 341-370.
* Hedges, L. V. (2011). Effect sizes in three-level cluster-randomized experiments. Journal of Educational and Behavioral Statistics, 36(3), 346-380.
* Pustejovsky, J. E., Hedges, L. V., & Shadish, W. R. (2014). Design-comparable effect sizes in multiple baseline designs: A general modeling framework. Journal of Educational and Behavioral Statistics, 39(5), 368-393.
* Stapleton, L. M., Pituch, K. A., & Dion, E. (2015). Standardized effect size measures for mediation analysis in cluster-randomized trials. The Journal of Experimental Education, 83(4), 547-582.
* Feingold, A. (2009). Effect sizes for growth-modeling analysis for controlled clinical trials in the same metric as for classical analysis. Psychological Methods, 14(1), 43.
One of my students and I have also developed an R package for estimating standardized mean differences from multilevel models fitted with nlme::lme():
https://CRAN.R-project.org/package=lmeInfo

Kind Regards,
James

_______________________________________________
R-sig-mixed-models at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models

------------------------------

Message: 2
Date: Thu, 24 Sep 2020 11:38:17 -0500
From: Simon Harmel <sim.harmel at gmail.com>
To: r-sig-mixed-models <r-sig-mixed-models at r-project.org>
Subject: [R-sig-ME] Meaning of Corr of random-effects with a cross-level interaction

Dear All,

I had a quick question. I have a cross-level interaction in my model below (ses*sector). My cluster-level predictor "sector" is a binary variable (0=Public, 1=Private). My level-1 predictor is numeric.

QUESTION: Is the `Corr = 1` indicating the correlation between intercepts and slopes across BOTH public & private sectors (like their average), or something else?

hsb <- read.csv('https://raw.githubusercontent.com/rnorouzian/e/master/hsb.csv')
summary(lmer(math ~ ses*sector + (ses|sch.id), data = hsb))

Random effects:
 Groups   Name        Variance Std.Dev. Corr
 sch.id   (Intercept)  3.82107 1.9548
          ses          0.07587 0.2754   1.00
 Residual             36.78760 6.0653

------------------------------

Message: 3
Date: Thu, 24 Sep 2020 17:27:58 -0300
From: Leandro Rabello Monteiro <lrmont at uenf.br>
To: r-sig-mixed-models at r-project.org
Subject: [R-sig-ME] mixed model with recapture data

Dear All,

I am trying to evaluate the body condition (SMI) of bats in a mark-recapture study, in response to lesions caused by arm bands. Because recapture is a matter of chance, the design is highly unbalanced. Most individuals were recaptured twice, but there can be up to 18 recaptures in a period of 4 years. The data set is formatted so that each line is one individual at a point in time. The head() of the data frame looks like this:

  ID Sex      SMI MarkR YearMonth
1  1   M 15.10700    L0   2013-04
2  1   M 14.52348    L0   2013-06
3  1   M 15.51033    L0   2013-07
4  1   M 15.51033    L0   2013-09
5  1   M 15.26151    L0   2013-11
6  1   M 15.33953    L0   2014-08

ID is a factor to identify individuals; MarkR (response to banding) is a factor with levels NR = no ring (the first capture), L0 = ringed, no lesion, L1 = lesion type 1, L2 = lesion type 2. A single individual can change its level in MarkR, so it is a within-subject fixed factor. Some individuals will develop lesions and some will not. The question of interest is whether banding itself, or lesions caused by banding, can be associated with lower SMI, so the only comparisons of interest are the levels L0-2 against the "control" NR. Lesions, particularly L2, are rare, occurring in ~3% of observations (out of 2400), again with high unbalance among levels. There is some seasonality in body condition, but I am not particularly interested in this aspect right now; I am not sure about the best way to include the temporal factor YearMonth in the model.
I have tried the following, using individuals and YearMonth as random effects:

lm.smi <- lmer(SMI ~ Sex * MarkR + (1 | ID) + (1 | YearMonth), data = smi)

I would appreciate some guidance as to whether I might be missing something relevant, particularly due to the highly unbalanced design. I have searched a lot but have not managed to find similar examples in the literature or on the web. Thanks a lot for your time.

##################################################
Leandro R. Monteiro
Laboratorio de Ciencias Ambientais
Universidade Estadual do Norte Fluminense
E-mail: lrmont at uenf.br
CV Lattes: http://lattes.cnpq.br/4987216474124557
WS: https://sites.google.com/uenf.br/ecol-evolucao-de-mamiferos/
English WS: https://sites.google.com/uenf.br/mammalecologyandevolution/
##################################################

------------------------------

Message: 4
Date: Fri, 25 Sep 2020 09:18:03 +0200
From: Thierry Onkelinx <thierry.onkelinx at inbo.be>
To: Leandro Rabello Monteiro <lrmont at uenf.br>
Cc: r-sig-mixed-models <r-sig-mixed-models at r-project.org>
Subject: Re: [R-sig-ME] mixed model with recapture data

Dear Leandro,

You could consider splitting the time effect into a year effect and a month effect. This will assume that every year has the same seasonal pattern. Add year as a fixed effect factor if your data spans only a few years.

lm.smi <- lmer(SMI ~ Sex * MarkR + Year + (1 | ID) + (1 | Month), data = smi)

The bats in our region are hibernating. Their body condition peaks in the early autumn and is low in early spring. You can model such a pattern with e.g. a sine wave as a fixed effect and a random effect to model the deviations from the sine wave.

Month_rad <- 2 * pi * Month / 12
sin(Month_rad) + cos(Month_rad) + (1 | Month)

Note that adding spaces to text makes it much more readable. The same goes for code.

Best regards,

ir. Thierry Onkelinx
Statisticus / Statistician
Vlaamse Overheid / Government of Flanders
INSTITUUT VOOR NATUUR- EN BOSONDERZOEK / RESEARCH INSTITUTE FOR NATURE AND FOREST
Team Biometrie & Kwaliteitszorg / Team Biometrics & Quality Assurance
thierry.onkelinx at inbo.be
Havenlaan 88 bus 73, 1000 Brussel
www.inbo.be

To call in the statistician after the experiment is done may be no more than asking him to perform a post-mortem examination: he may be able to say what the experiment died of. ~ Sir Ronald Aylmer Fisher
The plural of anecdote is not data. ~ Roger Brinner
The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data. ~ John Tukey

------------------------------

End of R-sig-mixed-models Digest, Vol 165, Issue 22
***************************************************

____________
University of Hertfordshire
College Lane, Hatfield, Hertfordshire AL10 9AB, UK
+44 (0) 208 444 2081
+44 (0) 7403 18 16 12
d.e.kornbrot at herts.ac.uk
http://dianakornbrot.wordpress.com/
skype: kornbrotme
Save our in-boxes! http://emailcharter.org
__________________