Message-ID: <CAFUVuJw-6F8GA8HgX3NAV_tSqgzYzrOEDRa5ukmw0vZoKMgtDQ@mail.gmail.com>
Date: 2023-07-25T18:37:24Z
From: James Pustejovsky
Subject: [R-meta]  Questions regarding REML and FE models and R^2 calculation in metafor
In-Reply-To: <CA+4peqEPWwitJ35TkzSu+UVdx=7TB4L2bor6KJ5Ws4S=tJ7Tww@mail.gmail.com>

Hi Nevo,

Responses inline below.

Kind Regards,
James

On Tue, Jul 25, 2023 at 1:37 AM Nevo Sagi <nevosagi8 at gmail.com> wrote:

> I don't understand the rationale of using random effects at the experiment
> level. Experiments in my meta-analysis are parallel to observations in a
> conventional statistical analysis.
>

I think this analogy doesn't follow. Conventional statistical analysis does
have observation-level error terms (i.e., level-1 error); they are just
included by default as part of the model. In meta-analytic models, these
errors are not included unless explicitly specified.
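
To make that concrete, here is a minimal sketch in metafor (the variable
names yi, vi, and reference, plus the data frame dat, are placeholders
rather than anything from your dataset):

    library(metafor)

    # Only a reference-level random effect is specified here, so the model
    # includes no term for heterogeneity at the experiment (observation) level.
    res1 <- rma.mv(yi, vi, random = ~ 1 | reference, data = dat)
    summary(res1)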


> What is the meaning of using random effects at the observation level?
>

Observation-level random effects here are used to capture heterogeneity of
effects across the experiments nested within a study. Considering that
you're interested in looking at moderators that vary across the experiments
reported in the same reference, it seems useful to attend to heterogeneity
at this level as well.
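
A sketch of how that experiment-level heterogeneity could be specified
(same placeholder names as above, with experiment identifying the separate
experiments within each reference):

    library(metafor)

    # Nested random effects: reference-level (between-reference) heterogeneity
    # plus experiment-level (within-reference) heterogeneity.
    res2 <- rma.mv(yi, vi, random = ~ 1 | reference/experiment, data = dat)
    summary(res2)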


> In my understanding, by using random effects at the Reference level, I
> already tell the model to look at within-reference variation.
>

This is not correct. Including reference-level random effects captures
_between-reference_ variation (or heterogeneity) of effects.


> In fact, the reason I was thinking to omit the random effect is because
> the model was over-sensitive to variation in effect size across moderator
> levels within specific references, while I am more interested in the total
> variation across the whole moderator spectrum, and therefore I want to
> focus more on the between-reference variation.
> Does that make sense?
>

I stand by my original recommendation to consider including
experiment-level heterogeneity here. Omitting the experiment-level
heterogeneity more or less corresponds to averaging the effect size
estimates together so that you have one effect per reference, which will
tend to conceal within-reference heterogeneity. In fact, if you are using a
model that does not include moderators / predictors that vary at the
experiment level (within reference), then the correspondence is exact.
Further details here: https://osf.io/preprints/metaarxiv/pw54r/
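
If it helps to see that correspondence, here is a rough sketch of the idea
(placeholder names again; dat would need to be an escalc object, rho is an
assumed correlation among effects from the same reference, and the exact
weighting that makes the two approaches equivalent depends on the model, as
described in the preprint):

    library(metafor)

    # Model without experiment-level heterogeneity ...
    res_ref_only <- rma.mv(yi, vi, random = ~ 1 | reference, data = dat)

    # ... versus averaging the effects within each reference first and then
    # fitting a standard random-effects model to the averaged effects.
    dat_agg <- aggregate(dat, cluster = reference, rho = 0.6)
    res_agg <- rma(yi, vi, data = dat_agg)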
