
[R-meta] Standard error of heterogeneity with rma.uni

Sorry about "blindly copy-pasting" the code
"control=list(tau2.min=-min(vi))". I thought the vi was a variable in the
specified dataset. I didn't know it was an external data vector. I mean,
other parameters in rma.uni come from the data. I'm new to R and metafor,
remember, so I think in SAS terms.

I also didn't know how you had generated the SE for heterogeneity. I
implemented "scoring=100" in Proc Mixed, as per your suggestion, and yes, it
generates the same SE as yours. It does make the coverage of the 90%
confidence interval for the heterogeneity variance and for the fixed effects
a little suboptimal (~88%) under some small-sample circumstances, but a big
plus is that only a handful of 5000 simulations failed to converge, whereas
up to 500 of the 5000 simulations failed without scoring=100. So thank you,
the suboptimal coverage is worth it. Those other methods of estimation you
mentioned don't appear to be available in Proc Mixed, but I am happy with
the default REML. I am not skilled enough yet with R to run my simulations
therein (I have to learn "list" coding?), so the only way I can compare Proc
Mixed's confidence limits with those for the same dataset in metafor is to
specify type="PL" in the confint function. There was close agreement with
the CLs for the fixed effects, but a big discrepancy for the upper CL for the
heterogeneity.
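For anyone following along, here is a minimal sketch of the metafor side of that comparison. The variable and data-frame names (Ydelta, YdeltaSEsq, simraw) are borrowed from the rma.mv example later in this thread and would need adjusting to your own data:

```r
library(metafor)

## effect sizes in Ydelta, sampling variances in YdeltaSEsq,
## both columns of the data frame 'simraw' (illustrative names)
res <- rma.uni(Ydelta, YdeltaSEsq, data = simraw, method = "REML")
res                                     # tau^2 estimate and its SE
confint(res, level = 90, type = "PL")   # profile-likelihood 90% CI for tau^2
```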

Using Proc Mixed, with 90% CLs:
CovParm    Estimate   StdErr    Lower     Upper
StudyID    0.5427     0.5316    -0.3317   1.4172

Using rma.uni:
tau^2 (estimated amount of residual heterogeneity): 0.5425 (SE = 0.5315)
Then confint(..., level=90,...):
         estimate   ci.lb    ci.ub
tau^2      0.5425      NA   2.3542

I don't know what to make of that. Proc Mixed's upper CL comes from assuming
a normal distribution for the hetero variance. Remember, this way of doing
the CLs gives pretty darn good coverage. If your upper CL is consistently
greater than Proc Mixed's, the coverage will be way too high, won't it? I
guess I'll see what happens when I get 5000 simulations running in metafor.
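For what it's worth, Proc Mixed's Wald limits in the table above can be reproduced in a line of base R from the reported estimate and SE (normal approximation, 90% level):

```r
est <- 0.5427   # tau^2 estimate from Proc Mixed
se  <- 0.5316   # its standard error
ci  <- est + c(-1, 1) * qnorm(0.95) * se   # 90% Wald interval
round(ci, 4)    # close to -0.3317 and 1.4172 from the Proc Mixed
                # output, up to rounding of the reported estimate and SE
```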

Re: selmodel() does not work with negative variance for heterogeneity... So
the idea is not to allow negative point estimates of variance in rma.uni. A
substantial proportion of point estimates will want to be (and arguably
should be allowed to be) negative when real heterogeneity is zero or only
slightly positive, when sample sizes and the number of studies are small,
and especially when there is substantial publication bias. But that's OK: you
have to set them to zero. Again, I'll have to see how well it works, when I
figure out how to do it.
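Setting negative point estimates to zero after the fact is straightforward; a minimal base-R sketch, with tau2.hat standing in for a vector of simulated variance estimates (the values here are made up):

```r
tau2.hat <- c(-0.08, 0.00, 0.31, -0.02, 0.12)  # illustrative estimates,
                                               # some negative
tau2.trunc <- pmax(tau2.hat, 0)                # truncate at zero
tau2.trunc                                     # 0.00 0.00 0.31 0.00 0.12
mean(tau2.hat < 0)                             # proportion of estimates
                                               # that wanted to go negative
```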

Re: the discrepancy with the SE for the males... I couldn't reproduce that,
and I really did check it at the time. Let's say a cosmic ray flipped a bit!
I ran half a dozen more datasets in SAS and metafor, and the SEs of the
fixed effects all agreed.

Will

-----Original Message-----
From: Viechtbauer, Wolfgang (NP)
<wolfgang.viechtbauer at maastrichtuniversity.nl> 
Sent: Monday, March 4, 2024 11:07 PM
To: R Special Interest Group for Meta-Analysis
<r-sig-meta-analysis at r-project.org>
Cc: Will Hopkins <willthekiwi at gmail.com>
Subject: RE: [R-meta] Standard error of heterogeneity with rma.uni

Dear Will,

Please see my responses below.

Best,
Wolfgang
SAS uses by default the inverse of the Hessian (i.e., the observed Fisher
information matrix) to calculate the SE of tau^2, while rma() uses the
expected Fisher information matrix. If you use 'scoring=100' (should be
sufficiently large), then SAS will also use the latter and the SEs are the
same. Or one can use:

library(metafor)
simraw$id <- 1:nrow(simraw)
res2 <- rma.mv(Ydelta, YdeltaSEsq, random = ~ 1 | id, data = simraw, cvvc = TRUE)
round(sqrt(res2$vvc), 4)

to get the SE based on the observed Fisher information matrix.

Which is to be preferred (the observed or expected information) is an open
issue. Neither is particularly useful though in the present context, because
better CIs for tau^2 can be constructed (e.g., using the Q-profile method,
the generalized Q-statistic method, or the profile likelihood method) that
do not make use of any SE of tau^2.
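In metafor, such SE-free intervals are available through confint(). A sketch, assuming a model object 'res' fitted with rma.uni (note that the generalized Q-statistic interval requires the model to have been fitted with method="GENQ" in the first place):

```r
library(metafor)

confint(res, level = 90)               # Q-profile interval (the default)
confint(res, level = 90, type = "PL")  # profile-likelihood interval

## generalized Q-statistic interval: refit with method = "GENQ" first,
## then call confint() on that model object
```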
No.
No.
Blindly copy-pasting code without the necessary adjustments to variable
names isn't going to work.
You can use control=list(tau2.min=-min(simrawnv$YdeltaSEsq)). The tau2.min
argument does not make use of non-standard evaluation.
This is not a bug. If we allow tau^2 to go below -min(vi), then the marginal
variance becomes negative. Neither the all holy SAS(r) nor metafor can
magically make this work.
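The constraint is easy to see numerically; a tiny base-R illustration with made-up sampling variances:

```r
vi <- c(0.04, 0.09, 0.25)   # hypothetical sampling variances
-min(vi)                    # lower bound for tau^2: -0.04
tau2 <- -0.03               # within the bound
all(tau2 + vi >= 0)         # TRUE: all marginal variances non-negative
tau2 <- -0.10               # below -min(vi)
any(tau2 + vi < 0)          # TRUE: the first study's marginal variance
                            # would be -0.06
```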
This seems quite peculiar. Please provide a fully reproducible example
replicating this issue.
estimates?

No, because it is the only sensible thing that can be done.
That 'Wolf' guy seems to know what he is talking about here.
What to do?  Bootstrap in metafor?

If you think this is going to fix whatever problem you perceive, then here
is some code to get you started:

https://www.metafor-project.org/doku.php/tips:bootstrapping_with_ma
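Along the lines of that page, a nonparametric bootstrap of tau^2 with the boot package might look like this. This is a sketch, not the page's exact code, and the Ydelta/YdeltaSEsq/simraw names are again illustrative:

```r
library(metafor)
library(boot)

## refit the model on each bootstrap sample of studies and
## return the tau^2 estimate (NA if the fit fails)
boot.func <- function(dat, idx) {
  res <- try(rma(Ydelta, YdeltaSEsq, data = dat[idx, ]), silent = TRUE)
  if (inherits(res, "try-error")) NA else res$tau2
}

sav <- boot(simraw, boot.func, R = 5000)
boot.ci(sav, type = "perc", conf = 0.90)  # percentile 90% CI for tau^2
```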
As mentioned above, selmodel() does not allow for negative tau^2 values.