Hi R legends! My name is Tzlil and I'm a PhD candidate in Sport Science (human performance science and sports analytics). I'm currently working on a multilevel meta-analysis using the metafor package. My first question is about the methods used to assign weights within rma.mv models. I'd like to know if there is a conventional or 'most conservative' approach to follow. Since I haven't found a consistent methodology in the multilevel meta-analysis papers I've read, I originally applied a weight based on the sampling variance (vi) and the number of effect sizes from the same study. I found this method in a lecture by Joshua R. Polanin (https://www.youtube.com/watch?v=rJjeRRf23L8&t=1719s, from 28:00): W = 1/vi, then divided by the number of effect sizes for a study. For example, a study with vi = 0.0402 and 2 different effect sizes will be weighted as follows: 1/0.0402 = 24.88, then 24.88/2 = 12.44 (finally, converting into percentages based on the overall weights in the analysis). After reading some of the great posts from previous threads here, such as http://www.metafor-project.org/doku.php/tips:weights_in_rma.mv_models and https://www.jepusto.com/weighting-in-multivariate-meta-analysis/, I wonder if this is incorrect and whether I need to modify the way I use weights in my model. I tried to imitate the approach used in the first link above; however, I get an error every time I try to specify weights(res, type="rowsum"): *Error in match.arg(type, c("diagonal", "matrix")) : 'arg' should be one of 'diagonal', 'matrix'*. My second question is related to the way I meta-analyse a specific effect size. My meta-analysis involves the reliability and convergent validity of heart rate during a specific task, which is measured in relative values (i.e. percentages). Therefore, my meta-analysis includes four different effect size parameters (mean difference, MD; intraclass correlation, ICC; standard error of measurement, SEM; and correlation coefficient, r).
I wonder how I should handle the SEM before starting the analysis. I've seen some papers that squared and log-transformed the SEM before performing a meta-analysis, while others converted the SEM into CV%. Given the original scale of our outcome (which is already in percentages), I'd like to perform the analysis without converting it into CV% values. Should I use the SEM values as reported, or only after log-transforming them? Further, is there a straightforward way in metafor to specify the analysis with chi-square values (as "ZCOR" does for correlations)? Thanks in advance! Kind regards, Tzlil Shushan | Sport Scientist, Physical Preparation Coach BEd Physical Education and Exercise Science MSc Exercise Science - High Performance Sports: Strength & Conditioning, CSCS PhD Candidate Human Performance Science & Sports Analytics
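For concreteness, the weighting scheme described in the message above could be computed as follows. This is only an illustration of the arithmetic, not a recommendation; the data frame `dat` and its columns `yi`, `vi`, `study`, and `es_id` are hypothetical.

```r
library(metafor)

# Polanin-style weights: inverse variance divided by the number of
# effect sizes contributed by the same study (illustration only).
w <- 1 / dat$vi
k <- ave(dat$vi, dat$study, FUN = length)  # ES count per study, per row
w <- w / k
w_pct <- 100 * w / sum(w)  # as percentages of the total weight

# e.g. vi = 0.0402 in a study with 2 ES: 1/0.0402 = 24.88 -> 24.88/2 = 12.44

# If one really wanted such custom weights, rma.mv() accepts them via W:
res <- rma.mv(yi, vi, W = w, random = ~ 1 | study/es_id, data = dat)
```

As the replies below note, such custom weights are rarely advisable; the sketch only makes explicit what the lecture's scheme does.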
[R-meta] Performing a multilevel meta-analysis
11 messages · Wolfgang Viechtbauer, Tzlil Shushan, Fernando Klitzke Borszcz
1 day later
Dear Tzlil, Unless you have good reasons to do so, do not use custom weights. rma.mv() uses weights and the default ones are usually fine. weights(res, type="rowsum") will only (currently) work in the 'devel' version of metafor, which you can install as described here: https://wviechtb.github.io/metafor/#installation I can't really comment on the second question, because answering this would require knowing all details of what is being computed/reported. As for the last question ("is there a straightforward way in metafor to specify the analysis with Chi-square values"): No, chi-square values are test statistics, not an effect size / outcome measure, so they cannot be used for a meta-analysis (at least not with metafor). Best, Wolfgang
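As a minimal sketch of the above (the `study`/`es_id` columns in `dat` are assumed), the default weighting needs no user input, and `type = "rowsum"` only works once the devel version is installed:

```r
library(metafor)

# Default weights are derived from the marginal variances implied by the
# fitted model; no custom W argument is needed.
res <- rma.mv(yi, vi, random = ~ 1 | study/es_id, data = dat)

# In the CRAN release, only these two types are available:
weights(res, type = "diagonal")  # diagonal of the weight matrix
weights(res, type = "matrix")    # full weight matrix

# After installing the devel version (see link above):
# remotes::install_github("wviechtb/metafor")
# weights(res, type = "rowsum")
```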
-----Original Message----- From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Tzlil Shushan Sent: Wednesday, 05 August, 2020 5:45 To: r-sig-meta-analysis at r-project.org Subject: [R-meta] Performing a multilevel meta-analysis
Dear Wolfgang, Thanks for your quick reply and sorry in advance for the long 'essay'. It is probably better if I give an overview of my analysis. Generally, I am running a meta-analysis on the reliability and validity of the heart rate response during sub-maximal assessments. We were able to compute three different effect sizes reflecting reliability (mean differences, ICC, and the standard error of measurement from test-retest designs), while for validity we computed the correlation coefficient between heart rate values and maximal aerobic fitness. Since both measurement properties (i.e. reliability/validity) of heart rate can be analysed at different intensities during the assessment (for example, 70, 80 and 90% of maximum heart rate), different test modalities (e.g. running, cycling), and multiple time points across the year (e.g. before season, in-season), one sample can have more than one effect size. I decided to employ a three-level meta-analysis, with levels two and three pertaining to within- and between-sample variance, respectively, and then to include moderator effects within and between samples. Regarding the weights, the only reason I wonder whether I need to adjust them is the wide range of effect sizes per sample (1-4 per sample), which is why I thought to use the approach you discussed in your recent post here: http://www.metafor-project.org/doku.php/tips:weights_in_rma.mv_models However, as I understand it, the default W in rma.mv will work quite well? With regard to the above (i.e. multiple effect sizes per sample), I am considering adding cluster-robust tests to get more accurate standard errors. As I understand it, this may be a good option to control for the natural (unknown) correlations between effect sizes from the same sample. First, do you think it is necessary? If so, would you apply the cluster-robust test just to the overall model or also to the models including moderators?
Second, is it reasonable to report the results obtained from both the multilevel and the cluster-robust analyses in the paper? Of note, my dataset isn't large and includes between 15-20 samples (clusters), of which around 50-60% have multiple effect sizes. With regard to the second question in the original email, we compute the standard error of measurement (usually obtained as the pooled SD of the test-retest multiplied by the square root of 1-ICC). Practically, these effect sizes are SD values. I haven't seen many meta-analyses using the standard error of measurement as an effect size, and I wonder if you can suggest what would be a decent approach for this? Cheers, On Thu, 6 Aug 2020 at 22:30, Viechtbauer, Wolfgang (SP) <
wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
5 days later
Dear Tzlil, Your questions are a bit too general for me to give meaningful answers. Also, some of your questions (with regard to modeling dependent effects and using cluster robust methods) have been extensively discussed on this mailing list, so no need to repeat all of that. But yes, if you use cluster robust inference methods, I would use them not just for the 'overall model' but also for models including moderators. Best, Wolfgang
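A sketch of this suggestion, assuming the three-level model described earlier, with hypothetical `sample`/`es_id` identifiers and a hypothetical moderator `modality` in `dat`:

```r
library(metafor)

# Three-level model: level 3 = between samples, level 2 = within samples.
res     <- rma.mv(yi, vi, random = ~ 1 | sample/es_id, data = dat)
res_mod <- rma.mv(yi, vi, mods = ~ modality,
                  random = ~ 1 | sample/es_id, data = dat)

# Cluster-robust inference for the overall AND the moderator model:
robust(res,     cluster = dat$sample)
robust(res_mod, cluster = dat$sample)

# With only 15-20 clusters, the small-sample CR2 adjustment from the
# clubSandwich package may be worth considering:
# clubSandwich::coef_test(res, vcov = "CR2")
```

The `res$sigma2` component then gives the between- and within-sample variance estimates in that order.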
-----Original Message----- From: Tzlil Shushan [mailto:tzlil21092 at gmail.com] Sent: Thursday, 06 August, 2020 16:05 To: Viechtbauer, Wolfgang (SP) Cc: r-sig-meta-analysis at r-project.org Subject: Re: [R-meta] Performing a multilevel meta-analysis
3 days later
Dear Wolfgang, First, thank you so much for the quick response and the time you dedicate to my questions. And yes, I looked through the mailing list and have seen some meaningful discussions around some of my questions. Based on those readings, I assume that extending my multilevel model with robust variance inference is a good idea. However, I would still like to give the second question a chance, and I'll try to be more specific this time. I hope you (or others in this group) can help me with that. One of the effect sizes in the meta-analysis is the 'standard error of measurement' (SEM) of heart rate from a test-retest (reliability) assessment. Simply described, this assessment was performed twice on a matched group and I'm interested in the variability of this measure. This effect size is derived from the pooled standard deviation (mean test-retest SD) and the intraclass correlation (ICC) of a test-retest. For example, if the mean ± SD of test one is 80.0 ± 4.0 and of test two is 80.5 ± 4.8, and the intraclass correlation is 0.95, the SEM will be 4.4 × √(1-0.95) = 0.98. Practically, this effect size is a form of SD value. I'm aware that the first thing I should probably do if I want to use the metafor package is to convert these values into a coefficient of variation (CV%). However, because the outcome measure (heart rate) is already expressed in percentages (% of maximum heart rate), we'd like to meta-analyse the SEM in the original raw values. Further, using this effect size is important for the practical implications of the paper. I've seen some discussion in the mailing list (https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2018-May/000828.html) on CV% from matched groups with escalc(measure="CVRC", y = logCV_1 - logCV_2). However, I'd like to know if there is a way to fit the escalc equation to the SEM values (which is only one value from each paired test), or alternatively, if there are other approaches I should consider?
Kind regards, Tzlil Shushan | Sport Scientist, Physical Preparation Coach BEd Physical Education and Exercise Science MSc Exercise Science - High Performance Sports: Strength & Conditioning, CSCS PhD Candidate Human Performance Science & Sports Analytics On Wed, 12 Aug 2020 at 4:46, Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
2 days later
Dear Tzlil, Just to let you know (so you don't keep waiting for a response from me): I have no suggestions for how one would meta-analyze such values. Best, Wolfgang
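For reference, the SEM arithmetic from the earlier message can be reproduced as below; the log-SD idea in the comments is only a loose possibility, not an approach endorsed anywhere in this thread.

```r
# Pooled test-retest SD times sqrt(1 - ICC), using the values from the
# example in the earlier message.
sd1 <- 4.0; sd2 <- 4.8; icc <- 0.95
sd_pooled <- (sd1 + sd2) / 2      # 4.4
sem <- sd_pooled * sqrt(1 - icc)  # 4.4 * sqrt(0.05)
round(sem, 2)                     # 0.98

# Since an SEM is a kind of SD, one *speculative* option would be to
# meta-analyse log(SEM) along the lines of escalc(measure = "SDLN"),
# whose sampling variance is approximately 1 / (2 * (n - 1)) for n
# subjects; whether that approximation is defensible for an SEM from a
# paired test-retest design would need to be checked.
```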
-----Original Message----- From: Tzlil Shushan [mailto:tzlil21092 at gmail.com] Sent: Saturday, 15 August, 2020 5:10 To: Viechtbauer, Wolfgang (SP) Cc: r-sig-meta-analysis at r-project.org Subject: Re: [R-meta] Performing a multilevel meta-analysis Dear Wolfgang, First, thank you so much for the quick response and the time you dedicate to my questions. And yes, I looked on the mailing list and have seen some meaningful discussions around some of my questions. Based on the readings, I assume that an extension of my multilevel model with robust variance inference is a good idea. However, I still would like to give a chance to the second question I had and I'll try to be more specific this time.?I hope you (or others in this group) can help me with that. One of the effect sizes in the meta-analysis is the 'standard error of measurement' (SEM) of heart rate from a test-retest (reliability) assessment. Simply described, this assessment was performed twice on a matched group and I'm interested in the variability of this measure. This effect size is derived from the pooled standard deviation (mean test-retest SD) and intraclass correlation (ICC) of a test-retest. For example, if the mean ? SD of test one is 80.0 ? 4.0 and test two is 80.5 ? 4.8, and intraclass correlation is 0.95, the SEM will be 4.4*?(1-0.95)= 0.98. Practically, this effect size is a form of SD value. I'm aware of the fact that the first thing that I probably should do if I want to use metafor package is to convert these values into coefficient of variation (CV%). However, because the outcome measure (heart rate) is already calculated?in percentages values (% of heart rate maximum), we'd like to meta-analyse the SEM in the original raw values. Further, using this effect size is important for having practical implications in the paper. 
I've seen some discussion in the mailing list?https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2018- May/000828.html?fbclid=IwAR2dSpruCCqlk631VKBAflkibrD8Gke- 9sSGgMHxG4TtY_ocZX1IsZCPlI0?on CV% from matched groups with?escalc(measure="CVRC",?y = logCV_1 - logCV_2). However, I'd like to know if there is a way to fit the escalc equation to the SEM values (which is only one value from each paired test)? or alternatively, if there are other approaches I should consider? Kind regards, Tzlil Shushan |?Sport Scientist, Physical Preparation Coach BEd Physical Education and Exercise Science MSc Exercise Science - High Performance Sports: Strength & Conditioning,?CSCS PhD Candidate Human Performance Science & Sports Analytics ??????? ??? ??, 12 ????? 2020 ?-4:46 ??? ?Viechtbauer, Wolfgang (SP)?? <?wolfgang.viechtbauer at maastrichtuniversity.nl??>:? Dear Tzlil, Your questions are a bit too general for me to give meaningful answers. Also, some of your questions (with regard to modeling dependent effects and using cluster robust methods) have been extensively discussed on this mailing list, so no need to repeat all of that. But yes, if you use cluster robust inference methods, I would use them not just for the 'overall model' but also for models including moderators. Best, Wolfgang
-----Original Message-----
From: Tzlil Shushan [mailto:tzlil21092 at gmail.com]
Sent: Thursday, 06 August, 2020 16:05
To: Viechtbauer, Wolfgang (SP)
Cc: r-sig-meta-analysis at r-project.org
Subject: Re: [R-meta] Performing a multilevel meta-analysis

Dear Wolfgang,

Thanks for your quick reply, and sorry in advance for the long 'essay'.. It is probably better if I give an overview of my analysis. Generally, I am performing a meta-analysis on the reliability and validity of the heart rate response during sub-maximal assessments. We were able to compute three different effect sizes reflecting reliability (mean differences, ICC, and the standard error of measurement from a test-retest design), while for validity we computed the correlation coefficient between heart rate values and maximal aerobic fitness.

Since both measurement properties (i.e. reliability/validity) of heart rate can be analysed from different intensities during the assessment (for example, 70, 80 and 90% of heart rate maximum), different modalities of tests (e.g. running, cycling), and multiple time points across the year (e.g. before season, in-season), one sample can have more than one effect size. I decided to employ a three-level meta-analysis, with levels two and three pertaining to within- and between-sample variance, respectively, and then to include moderator effects within and between samples.

Regarding the weights, the only reason I wonder if I need to adjust them is the wide range of effect sizes per sample (1-4 per sample), and I thought to use the approach you discussed in your recent post here: http://www.metafor-project.org/doku.php/tips:weights_in_rma.mv_models However, as I understand it, the default W in rma.mv will work quite well?

With regard to the above (i.e. multiple effect sizes per sample), I am considering adding cluster-robust tests to get more accurate standard error values. As I understand it, this may be a good option to control for the natural (unknown) correlations between effect sizes from the same sample. First, do you think it is necessary? If so, would you apply the cluster-robust test just to the overall model, or also to models including moderators? Second, is it reasonable to report the results obtained from both the multilevel and cluster-robust analyses in the paper? Of note, my dataset isn't large and includes between 15-20 samples (clusters), of which around 50-60% have multiple effect sizes.

With regard to the second question in the original email, we computed the standard error of measurement (usually obtained by multiplying the pooled SD of the test-retest by the square root of 1 - ICC). Practically, these effect sizes are SD values. I haven't seen enough meta-analyses using the standard error of measurement as an effect size, and I wonder if you can suggest what would be a decent approach for this?

Cheers,

On Thu, 6 Aug 2020 at 22:30, Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:

Dear Tzlil,

Unless you have good reasons to do so, do not use custom weights. rma.mv() uses weights and the default ones are usually fine.
weights(res, type="rowsum") will (currently) only work in the 'devel' version of metafor, which you can install as described here: https://wviechtb.github.io/metafor/#installation

I can't really comment on the second question, because answering this would require knowing all the details of what is being computed/reported.

As for the last question ("is there a straightforward way in metafor to specify the analysis with Chi-square values"): No, chi-square values are test statistics, not an effect size / outcome measure, so they cannot be used for a meta-analysis (at least not with metafor).

Best,
Wolfgang
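[To make the pieces discussed above concrete, here is a minimal sketch; the data frame dat and its columns yi, vi, study and es_id are illustrative assumptions, not names from the thread:]

```r
library(metafor)

# Three-level model with the default weights: level 2 = effect sizes
# within samples, level 3 = between samples
res <- rma.mv(yi, vi, random = ~ 1 | study/es_id, data = dat)

# Cluster-robust standard errors, clustering on study
robust(res, cluster = dat$study)

# The "rowsum" weights require the 'devel' version of metafor:
# install.packages("remotes")
# remotes::install_github("wviechtb/metafor")
weights(res, type = "rowsum")
```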
-----Original Message-----
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Tzlil Shushan
Sent: Wednesday, 05 August, 2020 5:45
To: r-sig-meta-analysis at r-project.org
Subject: [R-meta] Performing a multilevel meta-analysis

Hi R legends!

My name is Tzlil and I'm a PhD candidate in Sport Science - Human performance science and sports analytics. I'm currently working on a multilevel meta-analysis using the metafor package.

My first question is about the methods used to assign weights within rma.mv models. I'd like to know if there is a conventional or 'most conservative' approach to continue with. Since I haven't found a consistent methodology within the multilevel meta-analyses papers I read, I originally applied a weight which pertains to the variance (vi) and the number of effect sizes from the same study. I found this method in a lecture by Joshua R. Polanin (https://www.youtube.com/watch?v=rJjeRRf23L8&t=1719s from 28:00): W = 1/vi, then divided by the number of ES for a study. For example, a study with vi = 0.0402 and 2 different ES will be weighted as follows: 1/0.0402 = 24.88, then 24.88/2 = 12.44 (finally, converting into percentages based on the overall weights in the analysis). After reading some of the great posts provided in recent threads here, such as http://www.metafor-project.org/doku.php/tips:weights_in_rma.mv_models and https://www.jepusto.com/weighting-in-multivariate-meta-analysis/ I wonder if this is not correct and I need to modify the way I use weights in my model. I tried to imitate the approach used in the first link above. However, for some reason I get an error every time I try to specify weights(res, type="rowsum"):

Error in match.arg(type, c("diagonal", "matrix")) : 'arg' should be one of 'diagonal', 'matrix'

My second question is related to the way I meta-analyse a specific ES. My meta-analysis involves the reliability and convergent validity of heart rate during a specific task, which is measured in relative values (i.e. percentages). Therefore, my meta-analysis includes four different ES parameters (mean difference, MD; intraclass correlation, ICC; standard error of measurement, SEM; and correlation coefficient, r). I wonder how I need to handle the SEM before starting the analysis. I've seen some papers which squared and log-transformed the SEM before performing a meta-analysis, while others converted the SEM into CV%. Due to the original scale of our ES (which is already in percentages), I'd like to perform the analysis without converting it into CV% values. Should I use the SEM as the reported values, or only log-transform it? Further, is there a straightforward way in metafor to specify the analysis with Chi-square values (as "ZCOR" for correlations)?

Thanks in advance!

Kind regards,

Tzlil Shushan | Sport Scientist, Physical Preparation Coach
BEd Physical Education and Exercise Science
MSc Exercise Science - High Performance Sports: Strength & Conditioning, CSCS
PhD Candidate Human Performance Science & Sports Analytics
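[For concreteness, the weighting scheme described in the first question (1/vi, split across a study's effect sizes) could be computed as sketched below; note Wolfgang's advice above against custom weights. The column names are illustrative assumptions:]

```r
# Illustration of the 1/vi-divided-by-k scheme only; the default rma.mv
# weights are generally preferable (see Wolfgang's reply above).
k <- ave(dat$vi, dat$study, FUN = length)  # number of ES per study
w <- (1 / dat$vi) / k                      # e.g. (1/0.0402)/2 = 12.44
round(100 * w / sum(w), 2)                 # each ES's share as a percentage

# Custom weights could in principle be passed via rma.mv(yi, vi, W = w, ...),
# though this is not recommended here.
```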
Dear Tzlil,

The SEM is a standard deviation of the change between trials/tests. Two previous meta-analyses analyzed this type of effect using the SAS software (see Hopkins WG, Schabort EJ, Hawley JA. Reliability of power in physical performance tests. Sports Med. 2001;31:211-34, and Gore CJ, Hopkins WG, Burge CM. Errors of measurement for blood volume parameters: a meta-analysis. J Appl Physiol. 2005;99:1745-58). As the SEM is an SD, I suggest analyzing it with a logarithmic transformation (the "SDLN" measure of metafor's escalc function; https://rdrr.io/cran/metafor/man/escalc.html), discussed in Nakagawa et al. (https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.12309), equations 7 and 8.

Best regards,
Fernando

On Mon, 17 Aug 2020 at 17:43, Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
Dear Tzlil,

Just to let you know (so you don't keep waiting for a response from me): I have no suggestions for how one would meta-analyze such values.

Best,
Wolfgang
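[A minimal sketch of Fernando's SDLN suggestion, assuming a data frame dat with the SEM in a column sem, the test-retest sample size in n, and identifiers study/es_id (all names are illustrative):]

```r
library(metafor)

# "SDLN" treats each SEM as a standard deviation and uses its natural log
# as the outcome, with a sampling variance based on the sample size
# (Nakagawa et al. 2015, eqs. 7-8).
dat <- escalc(measure = "SDLN", sdi = sem, ni = n, data = dat)

# Three-level model; the pooled log-SD can be back-transformed with exp()
res <- rma.mv(yi, vi, random = ~ 1 | study/es_id, data = dat)
predict(res, transf = exp)
```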
_______________________________________________
R-sig-meta-analysis mailing list
R-sig-meta-analysis at r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
Dear Wolfgang and Fernando,

Wolfgang, thanks for letting me know.

Fernando, thanks for your answer; I wanted to have some time working with the "SDLN" measure you suggested before commenting again. I'm familiar with those papers that investigated the SEM, thanks for sending them over. Since you already mentioned the "SDLN" measure, I have two questions:

1) If I want to proceed with a log transformation of the SEM effect sizes, do I need to specify log() for the yi value, i.e. res <- escalc(measure = "SDLN", yi = log(sem), vi, data = dat)?

2) Because it is hard to obtain the sampling variance for each individual study (some reported a CI and some did not), what function should I use to compute the sampling variance? Does 1/(n-3) work fine in this case? If I am able to compute the estimated standard error for individual studies based on their confidence intervals, (CI upper - CI lower)/3.92 for a 95% CI, and then specify sei within the escalc function to compute the variance, does this approach give a better estimation for the model?

Kind regards,

Tzlil Shushan | Sport Scientist, Physical Preparation Coach
BEd Physical Education and Exercise Science
MSc Exercise Science - High Performance Sports: Strength & Conditioning, CSCS
PhD Candidate Human Performance Science & Sports Analytics

On Tue, 18 Aug 2020 at 7:05, Fernando Klitzke Borszcz <fernandoborszcz at gmail.com> wrote:
Dear Wolfgang and Fernando,

Apologies for the multiple emails, but I just figured out that my last questions were probably unnecessary. After I read the "measures for quantitative variables" section (https://wviechtb.github.io/metafor/reference/escalc.html), I finally understood that I probably need to specify the SEM values as sdi and the sample size as ni in the model:

res <- escalc(measure = "SDLN", sdi = sem, ni = n, data = dat)

Is that right?

Thanks and kind regards,
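[For the earlier question about studies that report only a confidence interval, here is one possible sketch; the column names (sem, ci_lb, ci_ub, study, es_id) are illustrative assumptions, and it assumes the CI is approximately symmetric on the log scale:]

```r
library(metafor)

# Outcome: log of the SEM (treated as a log-SD)
dat$yi <- log(dat$sem)

# Back-calculate the standard error of the log-SD from the width of a
# reported 95% CI (log-transformed bounds): SE = width / (2 * 1.96)
dat$sei <- (log(dat$ci_ub) - log(dat$ci_lb)) / (2 * qnorm(0.975))
dat$vi  <- dat$sei^2

res <- rma.mv(yi, vi, random = ~ 1 | study/es_id, data = dat)
```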
On Wed, 19 Aug 2020 at 21:28, Tzlil Shushan <tzlil21092 at gmail.com> wrote:
Dear Wolfgang and Fernando, Woflgang, thanks for letting me know.. Fernando, thanks for your answer, I wanted to have some time working with "SDLN" function you suggested before commenting again. I'm familiar with those papers that investigated SEM, thanks for sending them over. Since you already mentioned the "SDLN" function I have two questions; 1) If I want to proceed with log transformation of SEM effect sizes, Do I need to specify log() for the yi value? *res <- escalc(measure = "SDLN", yi = log(sem), vi , data = dat)*? 2) Because it is hard to obtain the sampling variance for each individual study (some reported CI and some not), What function should I use to compute the sampling variance? is 1/(n-3) works fine in this case? If I be able to compute the estimated standard error from individual studies based on their confidence intervals: (CI upper - CI lower)/3.92 for 95% CI, then specify sei within the escalc function to compute the variance. Does this approach serve better estimation for the model? Kind regards, Tzlil Shushan | Sport Scientist, Physical Preparation Coach BEd Physical Education and Exercise Science MSc Exercise Science - High Performance Sports: Strength & Conditioning, CSCS PhD Candidate Human Performance Science & Sports Analytics ??????? ??? ??, 18 ????? 2020 ?-7:05 ??? ?Fernando Klitzke Borszcz?? <? fernandoborszcz at gmail.com??>:?
Dear Tzlil, The SEM is a standard deviation of the change between trials/tests. Two previous meta-analyses analyzed this type of effect using the SAS software. (see Hopkins WG, Schabort EJ, Hawley JA. Reliability of power in physical performance tests. Sport Med. 2001;31:211?34. and Gore CJ, Hopkins WG, Burge CM. Errors of measurement for blood volume parameters: a meta-analysis. J Appl Physiol. 2005;99:1745?58). As SEM is an SD, I suggest analyze it with a logarithmic transformation (SDLN escalc function in metafor; https://rdrr.io/cran/metafor/man/escalc.html) discussed in Nakagawa et al. ( https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.12309) equations 7 and 8. Best regards. Fernando Em seg., 17 de ago. de 2020 ?s 17:43, Viechtbauer, Wolfgang (SP) < wolfgang.viechtbauer at maastrichtuniversity.nl> escreveu:
Dear Tzlil, Just to let you know (so you don't keep waiting for a response from me): I have no suggestions for how one would meta-analyze such values. Best, Wolfgang
-----Original Message----- From: Tzlil Shushan [mailto:tzlil21092 at gmail.com] Sent: Saturday, 15 August, 2020 5:10 To: Viechtbauer, Wolfgang (SP) Cc: r-sig-meta-analysis at r-project.org Subject: Re: [R-meta] Performing a multilevel meta-analysis

Dear Wolfgang, First, thank you so much for the quick response and the time you dedicate to my questions. And yes, I looked on the mailing list and have seen some meaningful discussions around some of my questions. Based on the readings, I assume that an extension of my multilevel model with robust variance inference is a good idea. However, I would still like to give my second question a chance, and I'll try to be more specific this time. I hope you (or others in this group) can help me with that.

One of the effect sizes in the meta-analysis is the 'standard error of measurement' (SEM) of heart rate from a test-retest (reliability) assessment. Simply described, this assessment was performed twice on a matched group and I'm interested in the variability of this measure. This effect size is derived from the pooled standard deviation (mean test-retest SD) and intraclass correlation (ICC) of a test-retest. For example, if the mean ± SD of test one is 80.0 ± 4.0 and of test two is 80.5 ± 4.8, and the intraclass correlation is 0.95, the SEM will be 4.4*sqrt(1-0.95) = 0.98. Practically, this effect size is a form of SD value.

I'm aware that the first thing I should probably do if I want to use the metafor package is to convert these values into a coefficient of variation (CV%). However, because the outcome measure (heart rate) is already calculated in percentage values (% of heart rate maximum), we'd like to meta-analyse the SEM in the original raw values. Further, using this effect size is important for having practical implications in the paper. I've seen some discussion in the mailing list (https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2018-May/000828.html) on CV% from matched groups with escalc(measure="CVRC", y = logCV_1 - logCV_2). However, I'd like to know whether there is a way to fit the escalc equation to the SEM values (which are only one value from each paired test)? Or alternatively, are there other approaches I should consider? Kind regards, Tzlil Shushan | Sport Scientist, Physical Preparation Coach BEd Physical Education and Exercise Science MSc Exercise Science - High Performance Sports: Strength & Conditioning, CSCS PhD Candidate Human Performance Science & Sports Analytics

On Wed, 12 Aug 2020 at 4:46, Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote: Dear Tzlil, Your questions are a bit too general for me to give meaningful answers. Also, some of your questions (with regard to modeling dependent effects and using cluster robust methods) have been extensively discussed on this mailing list, so no need to repeat all of that. But yes, if you use cluster robust inference methods, I would use them not just for the 'overall model' but also for models including moderators. Best, Wolfgang
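[Editor's note: the worked SEM example in Tzlil's email checks out numerically; a base-R sketch of that computation:]

```r
# SEM from a test-retest: pooled SD times sqrt(1 - ICC).
sd1 <- 4.0                                  # SD of test one
sd2 <- 4.8                                  # SD of test two
icc <- 0.95                                 # intraclass correlation
sd_pooled <- sqrt((sd1^2 + sd2^2) / 2)
sem <- sd_pooled * sqrt(1 - icc)
round(c(pooled = sd_pooled, sem = sem), 2)  # pooled ~ 4.42, sem ~ 0.99
```

(The email's 0.98 comes from rounding the pooled SD to 4.4 before multiplying.)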
-----Original Message----- From: Tzlil Shushan [mailto:tzlil21092 at gmail.com] Sent: Thursday, 06 August, 2020 16:05 To: Viechtbauer, Wolfgang (SP) Cc: r-sig-meta-analysis at r-project.org Subject: Re: [R-meta] Performing a multilevel meta-analysis

Dear Wolfgang, Thanks for your quick reply, and sorry in advance for the long 'essay'. It is probably better if I give an overview of my analysis. Generally, I employ a meta-analysis on the reliability and validity of the heart rate response during sub-maximal assessments. We were able to compute three different effect sizes reflecting reliability (mean differences, ICC and standard error of measurement of a test-retest design), while for validity we computed the correlation coefficient between heart rate values and maximal aerobic fitness.

Since both measurement properties (i.e. reliability/validity) of heart rate can be analysed from different intensities during the assessment (for example, 70, 80 and 90% of heart rate maximum), different modalities of tests (e.g. running, cycling), and multiple time points across the year (e.g. before season, in-season), one sample can have more than one effect size. I decided to employ a three-level meta-analysis, with levels two and three pertaining to within- and between-sample variance, respectively, and then to include moderator effects within and between samples.

Regarding the weights, the only reason I wonder if I need to adjust them is the wide range of effect sizes per sample (1-4 per sample), and I thought to use the approach you discussed in your recent post here: http://www.metafor-project.org/doku.php/tips:weights_in_rma.mv_models However, as I understand it, the default W in rma.mv will work quite well?

With regards to the above (i.e. multiple effect sizes per sample), I am considering adding cluster-robust tests to get more accurate standard error values. As I understand it, this may be a good option to control for the natural (unknown) correlations between effect sizes from the same sample. First, do you think it is necessary? If so, would you apply the cluster-robust test just to the overall model or also to additional models including moderators? Second, is it reasonable to report the results obtained from both the multilevel and cluster-robust analyses in the paper? Of note, my dataset isn't large; it includes between 15-20 samples (clusters), while around 50-60% have multiple effect sizes.

With regards to the second question in the original email, we computed the standard error of measurement (usually obtained from the pooled SD of the test-retest multiplied by the square root of 1-ICC). Practically, these effect sizes are SD values. I haven't seen enough meta-analyses using the standard error of measurement as an effect size, and I wonder if you can suggest what would be a decent approach for this? Cheers,

On Thu, 6 Aug 2020 at 22:30, Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote: Dear Tzlil, Unless you have good reasons to do so, do not use custom weights. rma.mv() uses weights and the default ones are usually fine. weights(res, type="rowsum") will (currently) only work in the 'devel' version of metafor, which you can install as described here: https://wviechtb.github.io/metafor/#installation I can't really comment on the second question, because answering this would require knowing all the details of what is being computed/reported. As for the last question ("is there a straightforward way in metafor to specify the analysis with Chi-square values"): No, chi-square values are test statistics, not an effect size / outcome measure, so they cannot be used for a meta-analysis (at least not with metafor). Best, Wolfgang
-----Original Message----- From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Tzlil Shushan Sent: Wednesday, 05 August, 2020 5:45 To: r-sig-meta-analysis at r-project.org Subject: [R-meta] Performing a multilevel meta-analysis

Hi R legends! My name is Tzlil and I'm a PhD candidate in Sport Science - Human Performance Science and Sports Analytics. I'm currently working on a multilevel meta-analysis using the metafor package.

My first question is about the methods used to assign weights within rma.mv models. I'd like to know if there is a conventional or 'most conservative' approach to continue with. Since I haven't found a consistent methodology in the multilevel meta-analysis papers I read, I originally applied a weight based on the sampling variance (vi) and the number of effect sizes from the same study. I found this method in a lecture by Joshua R. Polanin (https://www.youtube.com/watch?v=rJjeRRf23L8&t=1719s, from 28:00): W = 1/vi, then divided by the number of effect sizes for a study. For example, a study with vi = 0.0402 and 2 different effect sizes will be weighted as follows: 1/0.0402 = 24.88, then 24.88/2 = 12.44 (finally, converting into percentages based on the overall weights in the analysis). After reading some of the great posts provided in recent threads here, such as http://www.metafor-project.org/doku.php/tips:weights_in_rma.mv_models and https://www.jepusto.com/weighting-in-multivariate-meta-analysis/, I wonder whether this is incorrect and I need to modify the way I use weights in my model. I tried to imitate the approach used in the first link above. However, I get an error every time I try to specify weights(res, type="rowsum"): Error in match.arg(type, c("diagonal", "matrix")) : 'arg' should be one of 'diagonal', 'matrix'.

My second question is related to the way I meta-analyse a specific effect size. My meta-analysis involves the reliability and convergent validity of heart rate during a specific task, which is measured in relative values (i.e. percentages). Therefore, my meta-analysis includes four different effect size parameters (mean difference, MD; intraclass correlation, ICC; standard error of measurement, SEM; and correlation coefficient, r). I wonder how I need to handle the SEM before starting the analysis. I've seen some papers which squared and log-transformed the SEM before performing a meta-analysis, while others converted the SEM into CV%. Because of the original scale of our effect size (which is already in percentages), I'd like to perform the analysis without converting it into CV% values. Should I use the SEM as the reported values, or only log-transform it? Further, is there a straightforward way in metafor to specify the analysis with chi-square values (as "ZCOR" for correlations)? Thanks in advance! Kind regards, Tzlil Shushan | Sport Scientist, Physical Preparation Coach BEd Physical Education and Exercise Science MSc Exercise Science - High Performance Sports: Strength & Conditioning, CSCS PhD Candidate Human Performance Science & Sports Analytics
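[Editor's note: for reference, the Polanin-style weighting from the first question can be sketched in base R as below; the vi values are made up, and Wolfgang's reply advises against such custom weights.]

```r
# Inverse-variance weights divided by the number of effect sizes per
# study (the scheme from the Polanin lecture); vi values are made up.
dat <- data.frame(study = c("A", "A", "B"),
                  vi    = c(0.0402, 0.0402, 0.0250))
dat$w     <- 1 / dat$vi                           # inverse-variance weight
k         <- ave(dat$w, dat$study, FUN = length)  # effect sizes per study
dat$w_adj <- dat$w / k                            # divide by the count
dat$w_pct <- 100 * dat$w_adj / sum(dat$w_adj)     # percent of total
round(dat$w_adj, 2)  # 12.44 12.44 40.00
```

Such weights could in principle be passed to rma.mv() via its W argument, but the default weighting is usually preferable.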
_______________________________________________ R-sig-meta-analysis mailing list R-sig-meta-analysis at r-project.org https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
--
Tzlil Shushan B.Ed. Physical Education and Exercise Science M.Sc. High Performance Sports: Strength & Conditioning, CSCS
Leaving aside that the SEM, as far as I understood your description of it, is not just a 'simple' standard deviation (i.e., it is computed in a different way) - yes, that is how you should specify the arguments for this outcome measure. Best, Wolfgang
-----Original Message----- From: Tzlil Shushan [mailto:tzlil21092 at gmail.com] Sent: Wednesday, 19 August, 2020 16:21 To: Fernando Klitzke Borszcz Cc: Viechtbauer, Wolfgang (SP); r-sig-meta-analysis at r-project.org Subject: Re: [R-meta] Performing a multilevel meta-analysis

Dear Wolfgang and Fernando, Apologies for the multiple emails, but I just figured out that my last questions were probably unnecessary. After I read the 'measures for quantitative variables' section (https://wviechtb.github.io/metafor/reference/escalc.html), I finally understood that I probably need to specify the SEM values as sdi and the sample size as ni in the model: res <- escalc(measure = "SDLN", sdi = sem, ni, data = dat) Is that right? Thanks and kind regards, Tzlil Shushan | Sport Scientist, Physical Preparation Coach BEd Physical Education and Exercise Science MSc Exercise Science - High Performance Sports: Strength & Conditioning, CSCS PhD Candidate Human Performance Science & Sports Analytics
Dear Wolfgang, Yes, indeed. It's a computation of a number formed from the original within- or between-individuals SD. Therefore, I assume (as Fernando suggested) that the most reasonable method is the log of the SD with bias correction, then using a multilevel analysis with an extension of robust variance estimation (considering the structure of my dataset). Log-transformation was used in the recent studies that analysed similar values. P.S. I know you've already said that you don't have clear suggestions regarding this. Thanks for the assistance! Kind regards, Tzlil Shushan | Sport Scientist, Physical Preparation Coach BEd Physical Education and Exercise Science MSc Exercise Science - High Performance Sports: Strength & Conditioning, CSCS PhD Candidate Human Performance Science & Sports Analytics On Thu, 20 Aug 2020 at 21:06, Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
Leaving aside that the SEM, as far as I understood your description of it, is not just a 'simple' standard deviation (i.e., it is computed in a different way) - yes, that is how you should specify the arguments for this outcome measure. Best, Wolfgang