
[R-meta] Dependent Measure Modelling Question

6 messages · Grace Hayes, James Pustejovsky

#
Dear James,

Thank you for your response to my previous query. Yes, the effect size estimates are statistically dependent. Therefore, as per your recommendation, I have read several tutorials covering multivariate meta-analysis and robust variance estimation, specifically the one you wrote about using clubSandwich to run coefficient tests followed by Wald tests. This article was most helpful! I have a follow-up question regarding the use of the Wald test, which I have outlined below.

My three potential moderators are task_design (2 levels), Emotion (6 levels), and StimuliType (5 levels). To test the moderating effect of each of these variables, I ran the following:

allModerator <- rma.mv(yi, vi, mods = ~ task_design + Emotion + StimuliType, random = ~ 1 | studyID/outcome/effectID, tdist = TRUE, data = dat)

coef_test(allModerator, vcov = "CR2")

# task_design (number of emotions)

Wald_test(allModerator, constraints = 2, vcov = "CR2")

# Emotion

Wald_test(allModerator, constraints = 3:7, vcov = "CR2")

# StimuliType

Wald_test(allModerator, constraints = 8:11, vcov = "CR2")
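As a quick sanity check, the mapping between constraint indices and moderators can be confirmed by inspecting the coefficient names of the fitted model (a small sketch using the `allModerator` object above):

```r
# Confirm which coefficient positions belong to each moderator:
# position 2 should be task_design, 3:7 Emotion, and 8:11 StimuliType
names(coef(allModerator))
```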

The constraints for each Wald test match the coefficients related to each moderator, so I believe these test the significance of each moderator while adjusting for the other two moderating variables. However, I was also interested in the variation in the estimated average effect produced by each stimulus format for each emotion. I followed the guide below by Wolfgang Viechtbauer, which shows how to parameterize the model to provide the estimated average effect for each combination of factor levels.

http://www.metafor-project.org/doku.php/tips:multiple_factors_interactions

My model was:

StimulibyEmotion <- rma.mv(yi, vi, mods = ~ StimuliType:Emotion - 1, random = ~ 1 | studyID/outcome/effectID, tdist = TRUE, data = dat)

coef_test(StimulibyEmotion, vcov = "CR2")

Wolfgang then uses anova() to test factor-level combinations against each other. Can I use Wald_test() to do this with my robust variance estimates?

Also, would it be possible for you to please elaborate on what you meant by "a model that allows for different heterogeneity levels for each emotion", or provide a link to an article demonstrating this? As a first-time user of R and metafor, I wasn't sure how to go about this.

Many thanks,

Grace
1 day later
#
Grace,

To your first question: yes, it is possible to use Wald_test() to do "robust"
ANOVAs for comparing factor-level combinations. The interface works
similarly to anova(), but the constraints have to be provided in the form
of a matrix. Here is an example based on Wolfgang's tutorial:

library(metafor)
library(car)   # for linearHypothesis()

dat <- dat.raudenbush1985
dat$weeks <- cut(dat$weeks, breaks = c(0, 1, 10, 100),
                 labels = c("none", "some", "high"), right = FALSE)
dat$tester <- relevel(factor(dat$tester), ref = "blind")
res.i2 <- rma(yi, vi, mods = ~ weeks:tester - 1, data = dat)

# ANOVA with model-based variances
anova(res.i2, L = c(0, 1, -1, 0, 0, 0))
linearHypothesis(res.i2, "weekssome:testerblind - weekshigh:testerblind = 0")
anova(res.i2, L = c(0, 0, 0, 0, 1, -1))
linearHypothesis(res.i2, "weekssome:testeraware - weekshigh:testeraware = 0")

# Wald tests with RVE
library(clubSandwich)

# some vs. high, test = blind
Wald_test(res.i2, constraints = matrix(c(0,1,-1,0,0,0), nrow = 1),
          vcov = "CR2", cluster = dat$author)

# some vs. high, test = aware
Wald_test(res.i2, constraints = matrix(c(0,0,0,0,1,-1), nrow = 1),
          vcov = "CR2", cluster = dat$author)

To your second question about models that allow for differing levels of
heterogeneity, this tutorial from the metafor site discusses it a bit:
http://www.metafor-project.org/doku.php/tips:comp_two_independent_estimates?s[]=inner&s[]=outer

For your model, I think the syntax might be something along the lines of
the following:

StimulibyEmotion <-
  rma.mv(yi, vi, mods = ~ StimuliType:Emotion - 1,
         random = list(~ 1 | studyID, ~ Emotion | outcome, ~ 1 | effectID),
         struct = "UN",
         tdist = TRUE, data = dat)


This model allows for varying levels of outcome-level heterogeneity,
depending on the emotion being assessed. The struct = "UN" argument
controls the assumption made about how the random effects for each emotion
co-vary within levels of an outcome. Just for sake of illustration, I've
assumed that the between-study heterogeneity is constant (~ 1 | studyID)
and the effect-level heterogeneity is also constant (~ 1 | effectID). I'm
not at all sure that this is the best (or even really an appropriate)
model. To get a sense of that, I think we'd need to know more about the
structure of your data, what's nested in what, and the distinction between
outcome and effectID.
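
If that model converges, the emotion-specific heterogeneity estimates from the ~ Emotion | outcome term can be inspected directly (a sketch; this assumes the StimulibyEmotion object above fit successfully):

```r
# Under struct = "UN", tau2 holds one variance estimate per emotion level,
# and rho holds the estimated correlations among the emotion random effects
StimulibyEmotion$tau2
StimulibyEmotion$rho
```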

Cheers,
James

On Mon, Mar 11, 2019 at 11:03 PM Grace Hayes <grace.hayes3 at myacu.edu.au>
wrote:

#
Thanks again James,

In terms of the structure of my data: emotion outcomes ('Emotion') are nested in tasks ('StimuliType'), which are nested in studies ('studyID').

The variable 'outcome' is one that I created as a combination of the 'Emotion' and 'StimuliType' factors (e.g., DynamicAnger, StaticAnger, StaticDisgust), whereas the variable 'effectID' contains a unique identifier for each effect.

I created these variables and defined the random effects as random = ~ 1 | studyID/outcome/effectID to account for the fact that some studies produced effects with the same factor combination (i.e., the same emotion from two tasks of the same stimulus type). Therefore, effects with the same factor combination ('outcome') but different studyID were independent, whereas effects with the same factor combination ('outcome') and the same studyID were dependent.
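
(For concreteness, these derived variables could be constructed along the following lines; a minimal sketch assuming the column names above:)

```r
# Build the derived identifiers described above (assumed column names)
dat$outcome  <- paste0(dat$StimuliType, dat$Emotion)  # e.g., "DynamicAnger"
dat$effectID <- seq_len(nrow(dat))                    # unique per effect row
```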

Perhaps then, to apply the inner|outer formula to my data I would need to instead use Emotion|effectID?

Cheers,
Grace



Grace Hayes

Psychologist | Doctor of Philosophy (PhD) Candidate

Cognition and Emotion Research Centre

School of Behavioural and Health Sciences, Faculty of Health Sciences

Australian Catholic University




Level 3, Mary Glowrey Building,

115 Victoria Parade, Fitzroy, VIC 3065
T: +61 3 9230 8131
E: grace.hayes3 at myacu.edu.au
W: http://ccaer.acu.edu.au/
7 days later
#
Grace,

Sorry for the delay getting back to you. Your response is helpful in
clarifying the structure of your data, but I'm still not sure I follow why
you need the unique effectID in the model. Are there some studies where you
have multiple ES estimates for the same combination of emotion and task
(e.g., two measures of dynamic anger)?

James

On Wed, Mar 13, 2019 at 10:37 PM Grace Hayes <grace.hayes3 at myacu.edu.au>
wrote:
#
Hi James,

Yes that is correct, I have some studies with multiple ES estimates for the same combination of task and emotion.

Grace



#
Grace,

I see. This is quite a complex data structure, and I do not think there is
a single right answer for what random-effects specification should be
used. Absent a definitive specification, I think the thing to do would be
to explore a range of models and compare their fit. Others on the listserv
might have better suggestions about how to conduct and report this sort of
model-building exercise. I'll offer a few highly speculative suggestions.
Your initial specification,

A:    random = ~ 1 |  studyID/outcome/effectID

seems quite reasonable as a starting point. Other specifications that you
might explore would allow the between-study heterogeneity to vary depending
on the emotion, task, or combination of emotion and task. If you had a
large number of studies, all of which reported every combination of emotion
and task, a very general specification would be

B:    random = list(~ outcome |  studyID, ~ 1 | effectID), struct = "UN"

But this model might be hard to fit when studies each use only a few
combinations of emotions and tasks. You could try allowing the
between-study heterogeneity to vary by emotion but not by task:

C:    random = list(~ Emotion | studyID, ~ 1 | effectID), struct = "UN"


Or vice versa:

D:    random = list(~ StimuliType | studyID, ~ 1 | effectID), struct = "UN"

For (C), you could also include random effects per task nested within
studyID, but you'd need to create a taskID variable that takes on different
values for every study. Similarly for (D), you could also include random
effects per emotion nested within studyID by creating an emotionID variable
that takes on different values for every study.
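
One way this exploration might look in code, sketched under the assumption that the models converge and are all fit by REML with identical fixed effects (so their fit statistics are comparable; variable names as in Grace's data):

```r
library(metafor)

# Specification A: nested random effects
modA <- rma.mv(yi, vi, mods = ~ StimuliType:Emotion - 1,
               random = ~ 1 | studyID/outcome/effectID, data = dat)

# Specification C: between-study heterogeneity varying by emotion
modC <- rma.mv(yi, vi, mods = ~ StimuliType:Emotion - 1,
               random = list(~ Emotion | studyID, ~ 1 | effectID),
               struct = "UN", data = dat)

# Side-by-side log-likelihood, deviance, AIC, BIC, and AICc
fitstats(modA, modC)
```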

James





On Thu, Mar 21, 2019 at 11:53 PM Grace Hayes <grace.hayes3 at myacu.edu.au>
wrote: