
[R-meta] Background on large meta analysis with RCT and single-arm studies

3 messages · David Pedrosa, Wolfgang Viechtbauer

Hi Wolfgang,

wow, that's a marvellous answer which helps me quite a lot and gives me something to brood over! Especially the second part with your thoughts about the meta-regression with different study types.

One thing I have been wondering about all along is whether it is valid to compare different forms of effect sizes. I have some studies that I preferred to discard, since only "adjusted mean differences" with their post-test SDs were reported and the authors were reluctant to share their other data. As I understand it, it would not be reasonable to compare SMD with, let's say, SMCR, which I personally find more intuitive. Is that correct?
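[For readers following along: the two measures at issue are computed differently. Below is a generic Python sketch of the standard formulas with made-up numbers; in R, metafor's escalc() computes both (measure="SMD" for the two-group case, measure="SMCR" for the raw-score-standardized mean change).]

```python
import math

def hedges_smd(m1, m2, sd1, sd2, n1, n2):
    """Two-group standardized mean difference (SMD) with the usual
    Hedges small-sample bias correction."""
    # Pooled standard deviation across the two groups.
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)  # bias correction, df = n1 + n2 - 2
    return j * d

def smcr(m_post, m_pre, sd_pre, n):
    """Standardized mean change using raw-score (pre-test SD)
    standardization, for a one-group pretest-posttest design."""
    d = (m_post - m_pre) / sd_pre
    j = 1 - 3 / (4 * (n - 1) - 1)  # bias correction, df = n - 1
    return j * d

# Illustrative numbers (not from the thread):
g_smd = hedges_smd(25, 20, 5, 5, 30, 30)   # ≈ 0.987
g_smcr = smcr(25, 20, 5, 30)               # ≈ 0.974
```

Both measures are on a "standard deviation units" scale, but they standardize by different quantities (pooled between-group SD vs. pre-test SD), which is part of what the caveats in the reply below are about.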

Thanks again for the quick reply, and for the whole package and everything that comes with it.

Best,

David

On 10 March 2022, at 10:55 +0100, Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:

  
  
Can you describe the design of these studies? Are they also one-group pretest-posttest designs? And what exactly do you mean by "adjusted mean differences"? Adjusted/computed how?

As for comparing SMD with SMCR -- that's actually what I said you could do, under the caveats mentioned about the various sources of invalidity that may impact one-group pretest-posttest designs.

Best,
Wolfgang

Well, it's two RCTs by all standards, with a commercial sponsor. But the methods are not very clear on this point. They claim that they adjusted the mean differences for baseline values within an ANCOVA framework, but no one could provide any details on the raw mean differences. Besides, no baseline values are disclosed, which in my opinion makes it difficult to work with the results. So, heavy-heartedly, we have decided to discard those studies.

We found quite a few studies in our search, so we rather like the idea of using study type as a moderator to see what influence it has. But it will take us a while to figure out how to make that work, I guess.
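[For readers: the moderator idea above would, in metafor, be something along the lines of rma(yi, vi, mods = ~ design, data = dat). The fixed-effect version of such a meta-regression is plain inverse-variance weighted least squares, sketched here in Python with made-up numbers; the variable names are illustrative, not from the thread.]

```python
import numpy as np

def meta_regression(yi, vi, design):
    """Fixed-effect meta-regression of effect sizes yi (sampling
    variances vi) on a 0/1 design indicator (e.g. 0 = RCT,
    1 = single-arm). Returns (intercept, design coefficient)."""
    yi = np.asarray(yi, dtype=float)
    X = np.column_stack([np.ones(len(yi)), np.asarray(design, dtype=float)])
    W = np.diag(1.0 / np.asarray(vi, dtype=float))  # inverse-variance weights
    # Solve the weighted normal equations (X'WX) beta = X'Wy.
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ yi)
    return beta[0], beta[1]

# Noise-free illustration: RCT effects at 0.50, single-arm at 0.80,
# so the design coefficient should recover the 0.30 difference exactly.
yi = [0.50, 0.50, 0.80, 0.80]
vi = [0.04, 0.05, 0.04, 0.05]
design = [0, 0, 1, 1]
b0, b1 = meta_regression(yi, vi, design)
```

A random-effects meta-regression (what rma() fits by default) adds an estimated between-study variance component to each vi before weighting, but the moderator logic is the same.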

David
On 10 March 2022, at 13:59 +0100, Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote: