[R-meta] Meta Analysis on interaction effects in different designs

2 messages · Selma Rudert, James Pustejovsky

Hi,

First of all, I need to disclose that I am pretty much a newbie to meta-analysis. I am familiar with the general idea and the procedure of a "simple" meta-analysis, but I have an issue that seems highly specific to me and that I wasn't able to find an answer to in the literature or in a previous posting, so I'd be happy for expert opinions.

I currently have a manuscript in peer review in which the Editor asked us to do a mini meta-analysis over the four studies in the paper. All four studies use the same DV and manipulate the same factors; however, they differ in the implemented design:

In Studies 1 and 2, two of the factors (A and B) are manipulated between subjects and one within subjects (C). That is, each participant gives two responses in total.

In Studies 3 and 4, participants rate 40 stimuli that differ on the two factors A and B. Again, each stimulus is rated twice (factor C), so each participant gives 80 responses in total, and all variables of interest are assessed within subjects. To analyze the data, we used a linear mixed model/multilevel model with stimuli and participants as random factors.

The critical effect that the Editor is interested in is a rather complex three-way interaction AxBxC. Is it appropriate to summarize effect sizes of interaction terms in a meta-analysis? From a former post on this mailing list I assumed it can be done: https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2018-February/000658.html However, I am wondering whether it is appropriate to combine effect sizes given the differences in the designs (between-subjects elements in Studies 1 and 2 but not in Studies 3 and 4) and the different metrics of the effect sizes (in the LMM in Studies 3 and 4, we use an approximation of d, as suggested by Westfall (2014), that uses the estimated variance components instead of the standard deviation). I read Morris & DeShon (2002) and understood that it is a potential problem to combine effect sizes that do not use the same metric. Unfortunately, the transformation they suggest refers to combining effect sizes derived from independent-groups versus repeated-measures designs and does not extend to linear mixed models.

One idea that I had is to follow the approach of Goh, Hall & Rosenthal (2016) for a random effects approach (which would basically mean averaging effect sizes and ignoring all differences in design, metric, etc.). I'd be thankful for any thoughts on whether such a meta-analysis can and should reasonably be done, any alternative suggestions, or whether, due to the differences between the designs, it would be advisable to stay away from it.

Best,

Selma
11 days later
Hi Selma,

This is a tricky question, which may be why you haven't received any
response from the listserv. To provide a partial answer:
1) Yes, in principle it's fine to meta-analyze interaction terms.
2) The point of using standardized effect sizes is to provide a means of
putting effects on a commensurable scale. But you presumably have the raw
data from these studies, which opens up further possibilities besides using
standardized mean differences. How are the outcome measurements in studies
1 and 2 related to the outcomes in studies 3 and 4? Perhaps there would be
some other way of equating them.

For instance, say that studies 1 and 2 use two binary outcomes whereas
studies 3 and 4 use 20 binary outcomes per condition. However, for a given
participant, every binary outcome has the same probability of being
positive. That is:
- In Studies 1 and 2: Yi ~ Binomial(2, p_i)
- In Studies 3 and 4: Yi ~ Binomial(20, p_i)
This would suggest that if you put all of the outcomes on a [0,1] scale,
then you can treat the effects (whether main effects or interactions) as
changes in average probabilities across participants.
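To make the rescaling concrete, here is a minimal sketch (in Python for illustration; the actual analyses would of course be done in R). All counts and condition labels are hypothetical; the point is only that dividing each participant's count of positive responses by the number of trials puts every study on the same [0, 1] probability scale, after which condition effects are directly comparable as differences in average probabilities.

```python
def to_proportion(successes, n_trials):
    """Convert a binomial count to a proportion on the [0, 1] scale."""
    return successes / n_trials

# Hypothetical per-participant counts of positive responses per condition.
# Studies 1 and 2: 2 binary responses per condition.
p_s1_condition_a = to_proportion(successes=1, n_trials=2)

# Studies 3 and 4: 20 binary responses per condition.
p_s3_condition_a = to_proportion(successes=10, n_trials=20)

# Both participants now sit on the same scale (here, both at 0.5),
# so effects can be expressed as differences in average probabilities
# regardless of how many trials each study used.
print(p_s1_condition_a, p_s3_condition_a)
```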

3) If you can find a way to equate the outcomes across all four studies,
then you could consider two different ways of synthesizing the effects. One
would be to put the outcomes on the common scale and then estimate the
interaction terms (and standard errors) from each study. Then average the
interaction terms together using fixed effect meta-analysis. Another
approach would be to pool all of the raw data together (across all four
studies) and develop one joint model for the main effects and interaction
terms. The latter approach is sometimes called "integrative data analysis."
See here for all the details:
Curran, P. J., & Hussong, A. M. (2009). Integrative data analysis: the
simultaneous analysis of multiple data sets. *Psychological Methods*, *14*(2),
81.
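The first of these two approaches (estimating the interaction term and its standard error in each study, then averaging with a fixed effect meta-analysis) amounts to an inverse-variance weighted mean. A minimal sketch, with entirely hypothetical interaction estimates and standard errors for the four studies (again in Python for illustration; in practice one would use something like metafor's rma() in R):

```python
import math

def fixed_effect_pool(estimates, std_errors):
    """Inverse-variance weighted (fixed-effect) average of study-level
    estimates; returns the pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for se in std_errors]
    total_weight = sum(weights)
    pooled = sum(w * est for w, est in zip(weights, estimates)) / total_weight
    pooled_se = math.sqrt(1.0 / total_weight)
    return pooled, pooled_se

# Hypothetical A x B x C interaction estimates (on a common outcome
# scale) and their standard errors from the four studies:
estimates = [0.12, 0.08, 0.10, 0.15]
std_errors = [0.05, 0.06, 0.03, 0.04]

pooled, pooled_se = fixed_effect_pool(estimates, std_errors)
print(pooled, pooled_se)
```

More precise studies (smaller standard errors) get more weight, which is the defining feature of the fixed-effect average.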

James
On Mon, Feb 15, 2021 at 11:28 AM Selma Rudert <rudert at uni-landau.de> wrote: