Absolutely, so I will proceed with the vi given by metafor::escalc(),
multiply it by a common DEF based on a common ICC across the studies
that have nested structure, and then vary that ICC and inspect the
change in the coefficients (I'll probably do this on a null model
without moderators).
Once again, many thanks,
Fred
On Tue, Oct 19, 2021 at 10:15 AM James Pustejovsky <jepusto at gmail.com>
wrote:
Hi Fred,
Yes, it is definitely possible and sensible to combine the DEF
correction with RVE meta-analysis. However, I think it may be important to
use the initial DEF correction (and accompanying sensitivity analysis),
even if it is only based on ballpark assumptions. Without it, studies with
clustered samples will get an inordinately large amount of weight in the
meta-analysis, leading to imprecise estimates of average effects and
inflated estimates of between-study heterogeneity.
James
On Tue, Oct 19, 2021 at 10:07 AM Farzad Keyhan <f.keyhaniha at gmail.com>
wrote:
Dear Reza and James,
Thank you so much for your, as always, valuable advice. Can we
possibly combine your two suggestions?
I mean can we both correct the initial, incorrect sampling variances
and then apply the clubSandwich package?
The reason is that finding the correct ICC is one issue, and assuming
that the ICC is the same across the groups is another; together, these
could make such a correction a bit imprecise.
Thanks much,
Fred
On Tue, Oct 19, 2021 at 9:30 AM James Pustejovsky <jepusto at gmail.com>
wrote:
Hi Fred,
This is a good question. I am in the same boat as Reza, as I don't
know of any methods work that examines the issue (though it seems like the
sort of thing that must be out there?). I'm going to respond under the
assumption that you don't have access to raw data and are just working with
reported summary statistics from a set of studies, some or all of which
ignored the clustering issue.
My first thought would be to use the same sort of cluster-correction
that is used for raw or standardized mean differences. The variance of the
LRR is based on a delta method approximation, and it can be expressed as
vi = se1^2 / m1^2 + se2^2 / m2^2,
where se1 = sd1 / sqrt(n1) and se2 = sd2 / sqrt(n2) are the standard
errors of the means in each group (calculated ignoring clustering, assuming
a sample of independent observations). The issue with clustered data is
that the usual standard errors are too small because of dependent
observations. The usual way to correct the issue is to inflate the standard
errors by the square root of the design effect, defined as
DEF = (n_lower - 1) * ICC + 1,
where n_lower is the number of lower-level observations per cluster
(or the average number of observations per cluster, if there is variation
in cluster size) and ICC is an intra-class correlation describing the
proportion of the total variation in the outcome that is between clusters.
If we assume that the ICC is the same in each group, then the design
effect hits both standard errors the same way, and so we can just use
vi = DEF * (se1^2 / m1^2 + se2^2 / m2^2).
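To make the correction concrete, here is a minimal numeric sketch in Python; all of the summary statistics, the cluster size, and the ICC below are made-up values for illustration, not from any real study:

```python
import math

# Hypothetical summary statistics for one study (illustrative values only)
m1, sd1, n1 = 4.0, 1.2, 40   # treatment group: mean, SD, sample size
m2, sd2, n2 = 3.2, 1.0, 40   # control group
n_per_cluster = 10           # average lower-level observations per cluster
icc = 0.15                   # assumed intra-class correlation

# Standard errors of the group means, ignoring clustering
se1 = sd1 / math.sqrt(n1)
se2 = sd2 / math.sqrt(n2)

# Naive LRR sampling variance (delta-method approximation)
vi_naive = se1**2 / m1**2 + se2**2 / m2**2

# Design effect and corrected sampling variance
DEF = (n_per_cluster - 1) * icc + 1
vi_corrected = DEF * vi_naive
```

With these made-up numbers, the design effect is (10 - 1) * 0.15 + 1 = 2.35, so the corrected vi is 2.35 times the naive one.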
In some areas of application, it can be hard to find empirical
information about ICCs, in which case you may just have to make some
rough assumptions in calculating the DEF and then conduct sensitivity
analyses for varying values of the ICC.
If my initial assumption is wrong and you do have access to raw data,
then the following recent article might be of help:
Hello All,
I recently came across a post ( ) that discussed an issue that is
relevant to my meta-analysis.
In short, if some studies have nested structures, and the effect size
of interest is log response ratio (LRR), is there a way to adjust the
sampling variances (below) before modeling the effect sizes?
vi = sd1i^2/(n1i*m1i^2) + sd2i^2/(n2i*m2i^2)
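As a sanity check, the formula above can be rewritten in terms of the standard errors of the two group means (se = sd / sqrt(n)), which is the form used later in this thread; a small Python sketch with made-up summary statistics confirms the two forms agree:

```python
import math

# Illustrative summary statistics (made up)
m1i, sd1i, n1i = 4.0, 1.2, 40
m2i, sd2i, n2i = 3.2, 1.0, 40

# Sampling variance of the log response ratio, as in the formula above
vi = sd1i**2 / (n1i * m1i**2) + sd2i**2 / (n2i * m2i**2)

# Equivalent form in terms of the standard errors of the means
se1 = sd1i / math.sqrt(n1i)
se2 = sd2i / math.sqrt(n2i)
vi_alt = se1**2 / m1i**2 + se2**2 / m2i**2

assert math.isclose(vi, vi_alt)
```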
Thank you,
Fred