[R-meta] How to interpret when the results from model-based standard errors and robust variance estimation do not corroborate each other

Aki,

These are very interesting questions. To answer them more fully, could you
tell us a bit more about the analysis you are conducting? Specifically:
- How many studies are included?
- Are you fitting a meta-regression model with a single predictor or a
joint model with multiple predictors?
- What are the characteristics of the predictors where there is a
discrepancy between model-based and robust SEs (as in, do they vary within
study, between study, or both, and how much of each type of variation is
there)?
- What are the degrees of freedom from the RVE tests where there is a
discrepancy?

I will comment a bit at a general level about your questions:

(1) When cluster-robust variance estimation potentially increases Type II
error rates. Compared to model-based inference, RVE necessarily increases
Type II error rates. But this is because there is a fundamental trade-off
between Type I and Type II errors, and so RVE has to increase Type II error
in order to control Type I error at a specified level. In general, it
doesn't really make sense to compare the Type II errors of model-based and
robust inference except under conditions where you can ensure that *both*
approaches control Type I error.
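
To make the trade-off concrete, here is a minimal simulation sketch in R
(using metafor and clubSandwich, with an entirely hypothetical
data-generating process): the working model wrongly treats sampling errors
within a study as independent, and the comparison of interest is the Type I
error rate of the model-based (Wald z) test versus the cluster-robust (CR2)
t-test, for a moderator with a true slope of zero.

library(metafor)
library(clubSandwich)

set.seed(20180812)

sim_once <- function(k = 20, m = 3, r_true = 0.8) {
  study <- rep(1:k, each = m)
  x <- rnorm(k)[study]            # between-study moderator; true slope = 0
  u <- rnorm(k, sd = 0.3)[study]  # study-level random intercepts
  vi <- rep(0.04, k * m)          # known sampling variances
  # sampling errors correlated within study (true r = 0.8)
  R <- matrix(r_true, m, m) + diag(1 - r_true, m)
  e <- as.vector(replicate(k, sqrt(0.04) * drop(t(chol(R)) %*% rnorm(m))))
  dat <- data.frame(yi = u + e, vi = vi, x = x, study = study)
  # working model wrongly treats the sampling errors as independent
  fit <- rma.mv(yi, vi, mods = ~ x, random = ~ 1 | study, data = dat)
  rob <- coef_test(fit, vcov = "CR2", cluster = dat$study)
  c(model_p = fit$pval[2], robust_p = rob$p_Satt[2])
}

p_vals <- replicate(500, sim_once())  # slow-ish; reduce reps to experiment
rowMeans(p_vals < .05)  # rejection rates under a true null; a calibrated
                        # test should be near .05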

(2) How to interpret when the results from model-based standard errors and
robust variance estimation do not corroborate each other. Generally,
discrepancies arise because (a) the working model is mis-specified in some
meaningful way or (b) the data do not contain sufficient information to get
a good estimate of the robust SE.
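
As a concrete starting point for diagnosing such a discrepancy, here is a
minimal sketch (using metafor and clubSandwich, with a hypothetical data
frame dat containing effect sizes yi, sampling variances vi, study and
effect-size identifiers study and esid, and a moderator mod) that puts the
two sets of SEs side by side:

library(metafor)
library(clubSandwich)

# working model: ES nested within studies, with an assumed correlation
# of r = 0.6 among sampling errors from the same study (r is a guess)
V <- impute_covariance_matrix(vi = dat$vi, cluster = dat$study, r = 0.6)
fit <- rma.mv(yi, V, mods = ~ mod, random = ~ 1 | study / esid,
              data = dat)

summary(fit)  # model-based SEs and tests
coef_test(fit, vcov = "CR2", cluster = dat$study)  # robust SEs and
                                                   # Satterthwaite df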

Regarding (a), model mis-specification could arise for several reasons,
including:
- that you've got an inaccurate assumption about the degree of correlation
between ES from the same study
- that you've got some sort of heteroskedasticity across levels of the
moderator (typical random effects meta-regression models make strong
assumptions about homoskedasticity of the random effects, although these
can be weakened, as we've discussed on the mailing list before), or
- that you've got between-study heterogeneity in the moderator of interest.
If you can suss out how the model is mis-specified and fit a more
appropriate model, its results will likely align with RVE.
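
Continuing the hypothetical sketch above, here are two ways to probe these
possibilities (the grid of r values and the "DIAG" structure are just
illustrations, and (ii) assumes mod is a factor):

# (i) sensitivity to the assumed within-study correlation
for (r in c(0.2, 0.4, 0.6, 0.8)) {
  Vr <- impute_covariance_matrix(vi = dat$vi, cluster = dat$study, r = r)
  fit_r <- rma.mv(yi, Vr, mods = ~ mod, random = ~ 1 | study / esid,
                  data = dat)
  print(coef_test(fit_r, vcov = "CR2", cluster = dat$study))
}

# (ii) separate random-effects variances per level of a categorical
# moderator, relaxing the homoskedasticity assumption
fit_het <- rma.mv(yi, V, mods = ~ mod, random = ~ mod | study,
                  struct = "DIAG", data = dat)
summary(fit_het)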

Regarding (b), the degrees of freedom in RVE are diagnostic (and worth
reporting in write-ups, incidentally) in that they tell you how much
information is available to estimate the standard error for a given
moderating effect. Very roughly speaking, you can interpret them as one
less than the number of studies worth of information that go into
estimating a given standard error. If this is quite small, then the
implication is that there is not enough data available to support robust
inferences regarding that moderator. If you really trust the model you've
developed (e.g., you're willing to live with random effects
homoskedasticity assumptions), then go ahead and report model-based SEs and
CIs. Short of that, the field needs to conduct more studies to
investigate that moderator.
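
In code terms (continuing the same hypothetical sketch), the Satterthwaite
df are reported directly by coef_test, and a common rule of thumb (Tipton,
2015) is to treat tests with df < 4 with caution:

res <- coef_test(fit, vcov = "CR2", cluster = dat$study)
res  # robust SE and Satterthwaite df for each coefficient

# flag coefficients whose robust tests rest on very little information
# (the df column is named df_Satt in recent clubSandwich versions)
subset(as.data.frame(res), df_Satt < 4)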

James
On Sun, Aug 12, 2018 at 11:25 PM Akifumi Yanagisawa <ayanagis at uwo.ca> wrote: