[R-meta] How to interpret results when model-based standard errors and robust variance estimation do not corroborate each other
Aki, See below. James
On Mon, Aug 13, 2018 at 10:57 PM Akifumi Yanagisawa <ayanagis at uwo.ca> wrote:
Dear Dr. Pustejovsky,
Yes, that is exactly the case; I am including both within-study comparisons and between-study comparisons. I now understand that the discrepancy between the model-based method and RVE comes from the fact that I am not distinguishing within- and between-study comparisons. As to my original model (i.e., a three-level meta-analysis with RVE), would it be appropriate to interpret the model as indicating that the specific type of intervention is not significantly different from the original intervention, when tested without distinguishing within- and between-study variance? Could I argue that, when comparing the average effect of this specific type of intervention to the average effect of the original intervention, there seems to be little difference (or that the difference cannot be detected, potentially because of the small sample size)?
JEP: Your parenthetical interpretation is the most accurate, I think: that differences between the average effects of the original intervention and variants of the intervention are not detectable. There might still be differences there, but the available data does not let you rule out the possibility of no differences.
Also, thank you very much for your further suggestions on how to handle the within-study comparisons. I would like to try them. The second option sounds especially appealing, as I would not have to drop any included studies. However, I am not quite following everything you said. I am sorry, but I am not familiar with the term "indicator variables". Do you mean dummy-coded variables for each treatment type?
JEP: Yes. indicator variable = dummy variable. I just didn't want you to get the impression that I was calling your study dumb. ;)
Would it be possible to centre the dummy-coded variables?
JEP: Yes. The dummy variables will then have values other than 0 or 1, but their interpretation is still the same---as a difference between the indicated category and a reference category.
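To make the centering step concrete, here is a minimal sketch in Python/pandas (all column and study names are made up for illustration): a treatment-type dummy is centered within each study by subtracting the study mean, separating the within-study contrast from the between-study contrast.

```python
# Hypothetical sketch: within-study (group-mean) centering of a
# dummy-coded treatment indicator. Study labels and columns are invented.
import pandas as pd

# Toy data: effect sizes nested within studies, two intervention types.
df = pd.DataFrame({
    "study": ["A", "A", "B", "B", "B", "C"],
    "treatment": ["variant", "original", "variant", "original", "variant", "original"],
})

# Dummy-code the treatment type (1 = variant, 0 = original).
df["variant"] = (df["treatment"] == "variant").astype(float)

# Within-study component: subtract the study mean of the dummy,
# so values are no longer just 0/1 but still contrast variant vs. original.
df["variant_wc"] = df["variant"] - df.groupby("study")["variant"].transform("mean")

# Between-study component: the study mean of the dummy itself.
df["variant_bs"] = df.groupby("study")["variant"].transform("mean")

print(df)
```

In a meta-regression, the centered dummy (`variant_wc` here) would carry the within-study comparison, while the study mean (`variant_bs`) would carry the between-study comparison; the same centering can of course be done in R before fitting the model.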
Or, are you suggesting that I compute the average effect size for each study and subtract it from the effect size of each intervention type?
JEP: This might work too, but I'm not sure.
Thank you very much for your time and support. Best regards, Aki