
[R-meta] Moderator analysis with missing values (Methods and interpretations)

6 messages · Tommy van Steen, Wolfgang Viechtbauer, Michael Dewey

#
Dear Wolfgang,

I have a follow-up question regarding your suggestion of a side-by-side comparison of the moderator analyses (testing moderators both individually and as part of a model that includes all moderators). Looking at the significant moderators, there are three types of outcomes in my meta-analysis:

Moderator A: Significant effect when tested both individually, and as part of larger model.
Moderator B: Significant effect when tested individually, but not when tested as part of larger model.
Moderator C: Significant effect when tested in a larger model, but not when tested individually.

Am I correct in saying that:
Moderator A has an effect, as the moderator is significant in both models.
Moderator B probably doesn't have an effect, as the effect disappears when other factors are considered.
Moderator C has an effect, but only in interaction with other factors.

I am especially unsure about my interpretation of Moderator C.

Best wishes,
Tommy

  
  
#
Just to clarify Tommy, are you fitting all three models to the same set 
of studies or, as it seems from the exchange with Wolfgang below, are 
they being fitted to different subsets? If the latter then I think any 
conclusions comparing them must be very tentative.

Michael
On 11/09/2018 14:04, Tommy van Steen wrote:

  
    
  
#
Hi Tommy,

Some additional thoughts:

- The same questions arise in the context of primary research, so how would you answer these questions if you were running regression models with primary data?

- Michael raises an important point: When fitting larger models, it might happen that some studies/estimates are dropped due to listwise deletion. In that case, the comparison between results becomes a bit more problematic.

- Even for moderator A, the association might be confounded by other moderators that are not included in the larger model. So even moderator A might not really have an effect. But I would avoid wording such as 'moderator A has an effect' anyway, as this sounds a bit 'causal'. In any case, moderator A certainly leads to the simplest story, so this might make this finding most convincing to some.

- Power might be low to detect moderator B in the larger model. Or it might be that B was confounded with some 'real' moderators and fitting the larger model eliminated/reduced that confounding.

- For C, it could be that power is low when tested individually due to a large amount of residual heterogeneity. When fitting the larger model, residual heterogeneity might be reduced, making it easier to detect the relevance of C.

Of course, it is impossible to say for sure what is going on in any particular case.
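The point about moderator C can be shown in a small simulation. This is a sketch, not from the thread, and it uses lm() as a simple stand-in for a meta-regression (with metafor one would call something like rma(yi, vi, mods = ~ B + C, data = dat)); all names and values are hypothetical. When a strong moderator B is left out, its variance ends up in the residual term and inflates the standard error of C:

```r
# Sketch: why moderator C can be non-significant alone but significant
# in the larger model. lm() stands in for a meta-regression here.
set.seed(42)
k  <- 40                                 # number of studies (hypothetical)
B  <- rnorm(k)                           # strong moderator
C  <- rnorm(k)                           # weaker moderator of interest
yi <- 0.2 + 0.8 * B + 0.3 * C + rnorm(k, sd = 0.2)  # observed effect sizes

fit_C     <- lm(yi ~ C)                  # C tested individually
fit_joint <- lm(yi ~ B + C)              # C tested in the larger model

se_C_alone <- summary(fit_C)$coefficients["C", "Std. Error"]
se_C_joint <- summary(fit_joint)$coefficients["C", "Std. Error"]

# Including B soaks up residual (heterogeneity) variance, so C's
# standard error shrinks and C becomes easier to detect:
se_C_joint < se_C_alone
```

The same mechanism applies with residual heterogeneity in a mixed-effects meta-regression: reducing it via the other moderators tightens the test of C.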

Best,
Wolfgang

-----Original Message-----
From: Michael Dewey [mailto:lists at dewey.myzen.co.uk] 
Sent: Tuesday, 11 September, 2018 15:43
To: Tommy van Steen; Viechtbauer, Wolfgang (SP)
Cc: r-sig-meta-analysis at r-project.org
Subject: Re: [R-meta] Moderator analysis with missing values (Methods and interpretations)

#
Dear Tommy

Thinking about this a bit more, have you considered multiple imputation 
of the moderators? The main issue with MI is that if you have hardly 
any missing data it is not worth it, and if you have a lot then the results 
are very imprecise, which of course reflects the lack of data.

Michael
On 11/09/2018 14:53, Viechtbauer, Wolfgang (SP) wrote:

  
    
  
#
Hi Wolfgang and Michael,

Thank you for your quick responses and help.

Regarding your (Michael) question about clarification, I have a total of 51 comparisons. The individual testing of moderators is done on all studies for which that moderator is available. The model with all moderators is indeed fitted to a subset of these 51 studies (k=27), as for each moderator different studies are excluded because of missing values.

When I first brought this issue up in this mailing list a few months ago, Wolfgang suggested a side-by-side comparison of the moderator tests, so that I present both the individual moderator tests, as well as the results when all moderators are included in the same model. I think this makes sense, but I want to make sure I interpret/present these findings correctly (which, it seems, I am not doing so far).
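The subsetting Tommy describes can be sketched with hypothetical data: each moderator has its own missing values, so the joint model only keeps rows that are complete on every moderator, and k can drop sharply even when each moderator individually is mostly observed. (The NA pattern below is made up for illustration, not Tommy's data.)

```r
# Sketch (hypothetical data) of how listwise deletion shrinks k in the
# joint model relative to the individual moderator tests.
dat <- data.frame(id = 1:51, m1 = 1, m2 = 1, m3 = 1)
dat$m1[1:10]  <- NA                      # m1 missing for 10 studies
dat$m2[45:51] <- NA                      # m2 missing for 7 studies
dat$m3[20:30] <- NA                      # m3 missing for 11 studies

# k for each individual moderator test:
k_individual <- colSums(!is.na(dat[, c("m1", "m2", "m3")]))

# k for the model with all moderators (listwise deletion):
k_joint <- sum(complete.cases(dat[, c("m1", "m2", "m3")]))

k_individual                             # 41, 44, 40
k_joint                                  # 23: the joint model drops 28 studies
```

Because the individual and joint analyses then rest on different study sets, differences between them can reflect the changed sample as much as the added moderators, which is Michael's point.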
4 days later
#
Dear Tommy, see below for comments
On 11/09/2018 17:55, Tommy van Steen wrote:
You would include the outcome (whatever you are using as yi) and all the 
moderators. Then you would run your favourite MI package (many people use 
mice) to generate a number of imputed data sets, analyse each using the 
function in mice that does this (which I assume works for rma), then 
combine the resultant models. The mice package has extensive 
documentation and a number of on-line vignettes.
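The final combining step follows Rubin's rules. A minimal base-R sketch with hypothetical numbers (in practice mice's pooling functions do this for you, applied to the rma() fit from each imputed data set):

```r
# Sketch of Rubin's rules, the step that combines m imputed analyses.
# est/se are hypothetical: the coefficient of one moderator and its
# standard error from fitting the same model to each imputed data set.
est <- c(0.50, 0.60, 0.40)               # coefficient per imputed set
se  <- c(0.10, 0.10, 0.10)               # its standard error per set
m   <- length(est)

q_bar   <- mean(est)                     # pooled estimate
within  <- mean(se^2)                    # average within-imputation variance
between <- var(est)                      # between-imputation variance
total   <- within + (1 + 1/m) * between  # total variance (Rubin's rules)
se_pool <- sqrt(total)

round(c(estimate = q_bar, se = se_pool), 3)
```

Note how the pooled standard error exceeds each per-imputation one: the between-imputation term carries the extra uncertainty from the missing data, which is why heavy missingness yields imprecise pooled results, as Michael says above.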

Michael