Hi Dave,
Some thoughts from me below, but I am sure Wolfgang and others can chime
in with better input.
Thanks to you both for the helpful information, and sorry for the delay in
responding. To remind everyone, I wrote a couple weeks ago seeking advice
on how to estimate the mean magnitude of effect sizes (i.e. the absolute
value of effects without considering direction), rather than estimating
true means as most models are intended for.
Daniel suggested a Bayesian approach using the analyse-then-transform
method proposed by Morrissey (2016). This approach does seem to be just
what I need for estimating mean effect magnitudes without generating upward
biases.
A few follow up questions:
1. I am using multi-level mixed models to estimate mean effect sizes
(using rma.mv in metafor). Any reason why the function for the mean of
a folded distribution (the mu.fnorm function in the Rejoinder paper) could
not be applied to these more complex models?
No, this won't be a problem. One can apply the folded normal to estimate
the mean magnitude of effects for various levels of a categorical variable
after accounting for study, phylogeny and species (etc) in a multi-level
context (I did this recently, see Noble et al. 2018. Biological Reviews,
93, 72–97; code for applying the folded normal etc. is available for the paper,
if that is at all helpful). I think the thing to be careful about is the
estimation of variance for each level of the categorical moderator. The
folded normal will be sensitive to total variance, and so, assuming
homogeneous variance in each level of a categorical moderator may not be a
realistic assumption and will likely lead to some odd estimates at times.
You can: 1) explicitly model heterogeneous variance in each level of the
categorical moderator, or 2) simply do a subset analysis, modelling each level
of the categorical moderator separately (i.e., separate models), and then
apply the folded normal to the overall mean estimate of the model fitted to
the subsetted data in each group / level.
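For what it's worth, the folded normal mean is a closed-form function of the model's mean estimate and the total variance, so it is easy to compute once you have those two numbers. Here is a minimal sketch in Python of the idea behind the mu.fnorm function in the Rejoinder paper (the function name and argument setup here are my own, not from the paper):

```python
import math

def folded_normal_mean(mu, sigma):
    """E|X| for X ~ Normal(mu, sigma^2): the mean magnitude of effects.

    mu    : model-estimated mean effect (e.g. for one moderator level)
    sigma : total SD of true effects (sqrt of the summed variance components)
    """
    # Phi(-mu/sigma), the standard normal CDF evaluated at -mu/sigma
    cdf = 0.5 * (1 + math.erf(-mu / (sigma * math.sqrt(2))))
    return (sigma * math.sqrt(2 / math.pi) * math.exp(-mu ** 2 / (2 * sigma ** 2))
            + mu * (1 - 2 * cdf))
```

Note that sigma here is the total variance I mention above, which is exactly why assuming homogeneous variance across moderator levels can give odd magnitude estimates.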
2. I am also testing the influence of various moderators on effect sizes
using likelihood ratio tests (seeing whether dropping certain factors
reduces goodness of fit). I cannot think of how the analyze-then-transform
method could be applicable here. Have you ever done these types of
analyses with magnitudes?
I'm not entirely clear on the question here. Do you mean categorical and
continuous moderators? You would be correct that applying the folded normal for
continuous moderators is pretty tricky at times. Shinichi and I are trying
to sort out exactly what this means at the moment; it's kind of a mind
bender thinking about this problem (at least for me). Presently, as far as
I understand it, you can only really do this with different levels of
categorical predictors. Although I may be wrong, so others should feel free
to chime in and correct me!
3. do you have recommendations for estimating confidence intervals about
the mean magnitudes?
This is why I suggested using a Bayesian approach: it becomes very
easy to estimate credible intervals on these estimates, as you can apply the
folded normal function to the entire posterior distribution. Although this
can probably also be done with a bootstrapping method using metafor.
Wolfgang will probably have some good suggestions here on what would work
with metafor.
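To sketch the posterior idea concretely: apply the folded normal to every posterior draw of the mean and total SD, then read the credible interval straight off the resulting distribution. Below, simulated draws stand in for real MCMC output, and all numbers are made up purely for illustration:

```python
import math
import random

def folded_normal_mean(mu, sigma):
    """E|X| for X ~ Normal(mu, sigma^2)."""
    cdf = 0.5 * (1 + math.erf(-mu / (sigma * math.sqrt(2))))
    return (sigma * math.sqrt(2 / math.pi) * math.exp(-mu ** 2 / (2 * sigma ** 2))
            + mu * (1 - 2 * cdf))

random.seed(1)
# Stand-in posterior draws of (mean effect, total SD); in a real analysis
# these would be the MCMC samples from the Bayesian meta-analytic model.
draws = [(random.gauss(0.3, 0.1), abs(random.gauss(0.5, 0.05)))
         for _ in range(4000)]

# Apply the folded normal to every draw, then take the 95% quantiles
magnitudes = sorted(folded_normal_mean(m, s) for m, s in draws)
lower = magnitudes[int(0.025 * len(magnitudes))]
upper = magnitudes[int(0.975 * len(magnitudes))]
point = sum(magnitudes) / len(magnitudes)
```

The same recipe works for bootstrap replicates in place of posterior draws.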
On Mon, May 21, 2018 at 10:39 PM, Daniel Noble <daniel.wa.noble at gmail.com> wrote:
Hi Dave and Wolfgang,
If you don't mind going Bayesian, you can try the "analyse and
transform" option. This is done by estimating the overall mean estimate and
applying that to the folded normal. Check out Mike's two papers.
Morrissey, M.B. (2016). Meta-analysis of magnitudes, differences and
variation in evolutionary parameters. Journal of Evolutionary Biology 29,
1882–1904.
Morrissey, M.B. (2016). Rejoinder: Further considerations for meta-analysis
of transformed quantities such as absolute values. Journal of Evolutionary
Biology 29, 1922–1931.
The second one has some R code that can help.
Cheers,
Dan
--
Dr. Daniel Noble | ARC DECRA Fellow
Level 5 West, Biological Sciences Building (E26)
Ecology & Evolution Research Centre (E&ERC)
School of Biological, Earth and Environmental Sciences (BEES)
*The University of New South Wales*
Sydney, NSW 2052
AUSTRALIA
T : +61 430 290 053
E : daniel.noble at unsw.edu.au
W: www.nobledan.com
Github: https://github.com/daniel1noble
On Tue, May 22, 2018 at 7:03 AM, Viechtbauer, Wolfgang (SP) <
wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
Hi Dave,
You cannot just take absolute values and proceed with standard methods.
As you noted, by taking absolute values, you end up with folded normal
distributions. My approach would be to use ML estimation where the absolute
values have folded normal distributions and then compute a profile
likelihood confidence interval for the mean parameter, since I suspect a
Wald-type CI would perform poorly.
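As a toy grid-search sketch of that kind of approach in Python (made-up data and my own function names; a real analysis would use a proper optimiser and the study-specific sampling variances from the actual data):

```python
import math

def folded_pdf(y, mu, sigma):
    """Density of |X| for X ~ Normal(mu, sigma^2), evaluated at y >= 0."""
    c = 1.0 / (sigma * math.sqrt(2 * math.pi))
    return (c * math.exp(-(y - mu) ** 2 / (2 * sigma ** 2))
            + c * math.exp(-(y + mu) ** 2 / (2 * sigma ** 2)))

def loglik(mu, tau2, ys, vs):
    # total variance per study = heterogeneity + known sampling variance
    return sum(math.log(folded_pdf(y, mu, math.sqrt(tau2 + v)))
               for y, v in zip(ys, vs))

def profile_ci(ys, vs, mu_grid, tau2_grid, crit=3.841):
    # profile out tau2 at each mu, then invert the LR test for a ~95% CI
    prof = [max(loglik(mu, t2, ys, vs) for t2 in tau2_grid) for mu in mu_grid]
    best = max(prof)
    inside = [mu for mu, ll in zip(mu_grid, prof) if 2 * (best - ll) <= crit]
    return min(inside), max(inside)

# made-up absolute effect sizes and known sampling variances
ys = [0.50, 0.60, 0.40, 0.55, 0.45]
vs = [0.01] * 5
mu_grid = [i / 100 for i in range(0, 151)]
tau2_grid = [i / 1000 for i in range(0, 101)]
lo, hi = profile_ci(ys, vs, mu_grid, tau2_grid)
```

The interval comes from inverting the likelihood ratio test rather than from a symmetric Wald interval, which matters here because the folded-normal likelihood can be quite asymmetric near zero.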
Best,
Wolfgang
-----Original Message-----
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Dave Daversa
Sent: Monday, 21 May, 2018 13:47
To: r-sig-meta-analysis at r-project.org
Subject: [R-meta] effect size estimates regardless of direction
ATTACHMENT(S) REMOVED: forest.plot.example.pdf |
dummy.forest.plot.code.R
Hi all,
My question regards how to estimate overall magnitudes of effect sizes
from compiled studies regardless of the direction. I have attached a
figure to illustrate, which I developed using made-up data and the attached
code.
In the figure, five studies have significantly positive effect sizes,
while five have significantly negative effect sizes. All have equal
variances. So, the overall estimated mean effect size from a random
effects model is 0. However, what if we simply want to estimate the mean
effect size regardless of direction (i.e. the average magnitude of
effects)? In this example, that value would be 9.58 (CI: 6.48, 12.67),
correct?
I have heard that taking absolute values of effect sizes generates an
upward bias in estimates of the standardized mean difference. Also, this
would create a folded normal distribution, which would violate assumptions
of the model and would require an alternative method of estimating
confidence intervals. What would be your approach to setting up a model
to estimate the overall magnitude of responses?
I suspect this question has come up in this email group in the past.
If so, my apologies for the redundancy, and please send me any reference
that may be helpful.
Dave Daversa