
[R-meta] rma.mv for studies reporting composite of and/or individual subscales

14 messages · Timothy MacKenzie, Wolfgang Viechtbauer

#
Dear All,

In my meta-analysis, I've faced two issues.

First issue: each study can measure the same outcome using subscales,
reported in one of the following ways:

(a) Some studies report only separate subscales,
(b) Some studies report only composite of some subscales,
(c) Some studies report both composite of and separate subscales.

Second issue: the same subscales rarely recur across different
studies (indeed, the number of unique subscales is roughly equal to the
number of studies).

To tackle the first issue, can I include only the separately reported
subscales, i.e., from the (a) and (c) studies?

To tackle the second issue, can I only rely on the model below (data
structure is below)?

 rma.mv(es ~ 1, random = ~ 1 | study / obs, subset = subscale  == "subscale")

Thank you,
Tim M

My data looks like this (please view this in a plain text editor):

study subscale  reporting  obs
1        A      subscale   1
1        A      subscale   2
1        B      subscale   3
1        B      subscale   4
2        A&C    composite  5
3        G&H    composite  6
4        Z      subscale   7
4        T      subscale   8
4        Z&T    composite  9
#
Dear Tim,

Please see below for my responses.

Best,
Wolfgang
Sure you can. I don't think anybody here will come and stop you :)

I would tend to use (a) and (b), and for studies in group (c), I would use either an effect size computed based on the composite or the effect sizes computed based on the subscales (but not both). For effect sizes computed based on separate subscales in the same sample, the dependency between the effect sizes needs to be taken into consideration. I would also code a moderator that indicates whether an effect size comes from a subscale or a composite measure.
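[One way to take that dependency into consideration in metafor is to construct an approximate V matrix with vcalc(). A minimal sketch, assuming made-up effect sizes, sampling variances, and a guesstimated correlation of 0.6 (none of these values are from the thread):

```r
library(metafor)

# toy data mimicking the structure in Tim's listing (es/vi values invented)
dat <- data.frame(
  study     = c(1, 1, 1, 1, 2, 3, 4, 4),
  obs       = 1:8,
  reporting = c("subscale", "subscale", "subscale", "subscale",
                "composite", "composite", "subscale", "subscale"),
  es        = c(0.30, 0.25, 0.40, 0.35, 0.20, 0.50, 0.10, 0.15),
  vi        = c(0.05, 0.05, 0.06, 0.06, 0.04, 0.07, 0.05, 0.05)
)

# block-diagonal V matrix: effect sizes within the same study are
# assumed to be correlated 0.6 (a guesstimate one should vary in
# sensitivity analyses)
V <- vcalc(vi, cluster = study, obs = obs, data = dat, rho = 0.6)

# model with the suggested subscale-vs-composite moderator
res <- rma.mv(es, V, mods = ~ reporting,
              random = ~ 1 | study / obs, data = dat)
```
]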
I think you meant:

rma.mv(es ~ 1, random = ~ 1 | study / obs, subset = reporting == "subscale")

You could do that if you only want to include effect sizes computed based on subscales. That would throw out studies 2 and 3. Poor studies 2 and 3 :(
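[In full, the corrected call would also need the sampling variances and a data argument; a sketch with Tim's listing entered as a data frame and invented es/vi values:

```r
library(metafor)

# Tim's data structure, with hypothetical effect sizes and variances added
dat <- data.frame(
  study     = c(1, 1, 1, 1, 2, 3, 4, 4, 4),
  subscale  = c("A", "A", "B", "B", "A&C", "G&H", "Z", "T", "Z&T"),
  reporting = c("subscale", "subscale", "subscale", "subscale",
                "composite", "composite", "subscale", "subscale", "composite"),
  obs       = 1:9,
  es        = c(0.30, 0.25, 0.40, 0.35, 0.20, 0.50, 0.10, 0.15, 0.12),
  vi        = rep(0.05, 9)
)

# fit only to the subscale-based effect sizes (drops studies 2 and 3)
res <- rma.mv(es, vi, random = ~ 1 | study / obs,
              subset = reporting == "subscale", data = dat)
```
]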
#
Thank you so much Wolfgang!

I would tend to use (a) and (b) and for studies in group (c), I would
either use an effect size computed based on the composite or the
effect sizes computed based on the subscales (but not both). I would
also code a moderator that indicates whether an effect size comes from
a subscale or a composite measure.
study subscale  reporting  obs  include
1        A      subscale   1    yes
1        A      subscale   2    yes
1        B      subscale   3    yes
1        B      subscale   4    yes
2        A&C    composite  5    yes
3        G&H    composite  6    yes
4        Z      subscale   7    yes
4        T      subscale   8    yes
4        Z&T    composite  9    no

Then, will my model be a subgroup model like the following?

rma.mv(es ~ reporting:X1, random = list(~ 1 | study, ~ obs | interaction(study, reporting)), struct = "DIAG", subset = include == "yes")

If the above model is correct, I would assume it's not meaningful to
compare the fixed or random estimates for subscales with those for
composites?

Also, I assume I shouldn't use 'subscale' in the random part because
the same subscales don't occur much across the studies, correct?

Thank you very much,
Tim M

On Wed, Nov 24, 2021 at 7:55 AM Viechtbauer, Wolfgang (SP)
<wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
#
I may have misspecified your suggested subgroup-ish model in my
previous email; I think the model should have been:

rma.mv(es ~ reporting:X1, vi, random = list(~ 1 | study, ~ reporting | obs), struct = "DIAG", subset = include == "yes")

Regardless, one possible downside to the subgroup model in my data is
that it becomes somewhat subjective how to treat studies that provide
both separate subscales and a composite result. One can use only their
subscales and exclude their composite, or vice versa. Such subjectivity
may therefore affect the model estimates for each subgroup, depending
on how one treats the (c) studies referenced in my first email.

Thanks,
Tim M
On Wed, Nov 24, 2021 at 9:11 AM Timothy MacKenzie <fswfswt at gmail.com> wrote:
#
Not sure what X1 is, but yes, this could be a plausible model, allowing for different within-study variances for 'subscale' versus 'composite' estimates.
Sure, but excluding studies that only report a composite is also a subjective decision.
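[A sketch of the model deemed plausible above, with the inner `reporting` factor and struct = "DIAG" giving a separate variance component for subscale-based versus composite-based estimates (X1 stands in for whatever moderator Tim has in mind; es, vi, and include as in his data listing; all illustrative):

```r
library(metafor)

# assuming a data frame 'dat' with columns es, vi, study, obs,
# reporting, include, and a hypothetical moderator X1
res <- rma.mv(es ~ reporting:X1, vi,
              random = list(~ 1 | study, ~ reporting | obs),
              struct = "DIAG",
              subset = include == "yes", data = dat)
```
]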
#
Tim M

On Wed, Nov 24, 2021 at 12:07 PM Viechtbauer, Wolfgang (SP)
<wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
#
That's up to you, or one could empirically examine whether the association between X1 and es differs between the two types.

Best,
Wolfgang
#
So, you think there is no need to keep everything (i.e., fixed and
random) separate between studies that only contribute composite and
studies that only contribute separate subscales?

If there is no need, and both types of studies can be in one model,
then methodologically, wouldn't it be mixing apples (different
subscales) and oranges (different composites) in one model?

Thanks,
Tim M


On Wed, Nov 24, 2021 at 12:22 PM Viechtbauer, Wolfgang (SP)
<wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
#
Let me use a concrete example.

Say I have studies assessing the effectiveness of a treatment on depression. Some studies report means and SDs of the treated and control groups for overall/composite scales such as the BDI, HAM-D, CES-D, and so on. For such a study, I would compute its effect size based on whatever scale it used.

Studies may also have used multiple such scales. Then I would also compute multiple effect sizes, one per scale. Of course, I would then have to take the dependency of multiple effect sizes computed based on the same sample into consideration.
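[Computing one effect size per scale can be done with escalc(); a minimal sketch with invented summary statistics (the scale names follow the example above, the numbers do not come from any study):

```r
library(metafor)

# hypothetical summary data: study 1 reports two scales, study 2 one
dat <- data.frame(
  study = c(1, 1, 2),
  scale = c("BDI", "HAM-D", "CES-D"),
  m1i = c(12, 10, 14), sd1i = c(4, 3, 5), n1i = c(30, 30, 25),
  m2i = c(15, 13, 17), sd2i = c(4, 3, 5), n2i = c(30, 30, 25)
)

# standardized mean differences, one per scale; adds yi and vi columns
dat <- escalc(measure = "SMD", m1i = m1i, sd1i = sd1i, n1i = n1i,
              m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat)

# the two effect sizes in study 1 share a sample, so their covariance
# must still be taken into account (e.g., via vcalc())
```
]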

Say there are also some studies that, for some reason, have broken down such a scale into a few subscales, say BDI1 and BDI2, and they do not report means and SDs for the overall BDI scale, only for these subscales.

I would then compute effect sizes based on BDI1 and BDI2 and again, accounting for their dependency, include them in the same analysis as all of the above.

I personally see no major issues with this. BDI is a mixture of BDI1 and BDI2 anyway, so if I only have BDI, then this is what the effect size reflects. If I include effect sizes based on BDI1 and BDI2 in the analysis, then the model essentially mixes them together.

Scales may also measure multiple inherently different types of outcomes, such as the HADS, which has subscales for anxiety and depression. Not sure if it is common practice to ever report an overall mean for both of these outcome types together. If both outcome types are of interest (and not just depression), then I can again include both effect sizes (for depression and anxiety) in the same analysis (again, with their covariance, blah blah blah). Plus I'll need a moderator to distinguish the two outcome types. Not sure what I would do with a study that only reports an overall HADS score for the two groups (if this is ever done). I might still include this in the analysis and code the outcome type moderator with a third category for 'mixture'.

If there are moderators that I want to examine, then I would be inclined to allow for separate relationships for different outcome types. I probably would not examine if the relationship differs for effect sizes that are based on subscales for the same outcome type versus effect sizes that are based on overall measures. Same goes with the random effects structure. But that would be my approach and one could of course separate things further.
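[Allowing separate moderator relationships per outcome type could look like the sketch below (outcome, X1, yi, and V are hypothetical names, not from the thread):

```r
library(metafor)

# assuming a data frame 'dat' with effect sizes yi, an approximate V
# matrix, an outcome-type factor (e.g., "anxiety"/"depression"), and a
# hypothetical moderator X1
res <- rma.mv(yi ~ 0 + outcome + outcome:X1, V,
              random = ~ outcome | study, struct = "UN",
              data = dat)

# the coefficients then give one intercept and one X1 slope per
# outcome type, rather than pooling the relationship across types
```
]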

Best,
Wolfgang
#
Appreciate it. Thank you very much. My response is below inline.

Say there are also some studies that, for some reason, have broken
down such a scale into a few subscales, say BDI1 and BDI2, and they do
not report means and SDs for the overall BDI scale, only for these
subscales.

BDI is a mixture of BDI1 and BDI2 anyway, so if I only have BDI, then
this is what the effect size reflects. If I include effect sizes based
on BDI1 and BDI2 in the analysis, then the model essentially mixes
them together.
**On the one hand**: BDI *in the standard view* is a mixture of A, B and
C subscales, and (1) some studies can mix and match them to create
their unique composites (AB; AC; ABC), while (2) some studies report
some or all of these subscales separately (A,B; A,C; A,B,C), and

**on the other hand**: BDI *in the alternative view* is a mixture of E, F
and G subscales, and (3) some studies can mix and match them to create
their unique composites (EF; EG; EFG), while (4) some studies report
some or all of these subscales separately (E,F; E,G; E,F,G)?

This is what is reflected in my data structure below (as mentioned
earlier, the number of unique subscales is roughly equal to the number
of studies).

Thanks, Tim M

study subscale  reporting  obs  include
1        A      subscale   1    yes
1        A      subscale   2    yes
1        B      subscale   3    yes
1        B      subscale   4    yes
2        A&C    composite  5    yes
3        G&F    composite  6    yes
4        E      subscale   7    yes
4        F      subscale   8    yes
4        E&F    composite  9    no

On Wed, Nov 24, 2021 at 12:59 PM Viechtbauer, Wolfgang (SP)
<wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
#
Sorry, I can't follow. What is 'standard view' and 'alternative view'? Those sound the same to me, except for different letters.

Best,
Wolfgang
#
By standard view, I mean the usual/standard way in which construct X
should be measured (i.e., using a scale whose subscales are A, B, C).

Under this standard view, when we look at the literature, we see:

1- Some studies mixed and matched the usual subscales to create their
unique composites (e.g., AB; AC; ABC)
2- Some studies report some or all of these usual subscales separately
(e.g., A,B; A,C; A,B,C)

By alternative view, I mean the researcher-constructed ways in which
construct X can be measured (i.e., using ANY scale whose subscales can
be ANYTHING appropriate to the researchers e.g., E,F,G ...).

Under this alternative view, when we look at the literature, we see:

(3) Some studies mixed and matched their own subscales to create their
unique composites (e.g., EF;  EG;  EFG),
(4) Some studies report some or all such subscales separately (e.g.,
E,F;  E,G;  E,F,G)

The result of such a trend is the data structure below. Therefore,
there are three gray areas for me:

1) Dealing with composite vs. separate subscales (resolved :-))
2) Dealing with whether effects have been obtained under the standard
or the alternative view (maybe this should be a moderator?)
3) How should these data be subgrouped, i.e., only by composite vs.
subscales, or by standard vs. alternative view?

Thanks, Tim M

study subscale  reporting  obs  include
1        A      subscale   1    yes
1        A      subscale   2    yes
1        B      subscale   3    yes
1        B      subscale   4    yes
2        A&C    composite  5    yes
3        G&F    composite  6    yes
4        E      subscale   7    yes
4        F      subscale   8    yes
4        E&F    composite  9    no

On Wed, Nov 24, 2021 at 2:29 PM Viechtbauer, Wolfgang (SP)
<wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
#
Note:

By unique composites (e.g., AB; AC; ABC), I mean one study made its
composite out of A&B, another study made its composite out of A&C, and
another made its composite out of A&B&C, etc.

By reporting subscales separately (e.g., A,B; A,C; A,B,C), I mean one
study separately reported A and B, another study separately reported A
and C, and another separately reported A, B, and C.
On Wed, Nov 24, 2021 at 3:10 PM Timothy MacKenzie <fswfswt at gmail.com> wrote:
#
My apologies, but I am still struggling to understand the 'standard' versus 'alternative' view distinction. Maybe somebody else who understands this better can help further.

Best,
Wolfgang