
Confidence interval for sum of coefficients

7 messages · lorenz.gygax at agroscope.admin.ch, Doogan, Nathan, Ben Bolker +1 more

#
Hello,

I suspect this is simple, but I can't figure it out.
Fixed effects:
             Estimate Std. Error t value
(Intercept)   52.356      1.681  31.151
MachineB       7.967      2.421   3.291
MachineC      13.917      1.540   9.036
2.5 %     97.5 %
[...]
(Intercept) 48.7964047 55.9147119
MachineB     2.8401623 13.0931789
MachineC    10.6552809 17.1780575

[and 14 warnings, but it's just an example:
In optwrap(optimizer, par = start, fn = function(x) dd(mkpar(npar1, ...):
  convergence code 1 from bobyqa: bobyqa -- maximum number of function evaluations exceeded
...
In profile.merMod(object, signames = oldNames, ...): non-monotonic profile]

I'd like to have confidence intervals for the overall score of MachineA, 
MachineB, and MachineC. MachineA is easy (the CI of the intercept), but 
how do I combine the CI of the intercept with the CI of the MachineB 
parameter, and likewise the CI of the intercept with that of the 
MachineC parameter? Can I simply add the lower and upper bounds of the 
two intervals, or is this naive?

Thank you for your time,

Michael
#
Dear Michael,

This is probably not what you should do. As was recently suggested to me on this list, you could use bootMer with a function that computes the sum(s) you are interested in, and then use boot.ci from the boot package to estimate the confidence intervals.
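A minimal sketch of what that could look like (the model name `fm1`, the statistic function, and `nsim = 500` are assumptions for illustration, not from your output):

```r
library(lme4)   # for bootMer(), fixef()
library(boot)   # for boot.ci()

## Assumes a fitted merMod object `fm1` with fixed effects
## (Intercept), MachineB, MachineC (treatment contrasts, MachineA reference).

## Statistic computed for each parametric-bootstrap refit:
## the overall score of each machine as a sum of coefficients.
machine_scores <- function(fit) {
  b <- fixef(fit)
  c(MachineA = unname(b["(Intercept)"]),
    MachineB = unname(b["(Intercept)"] + b["MachineB"]),
    MachineC = unname(b["(Intercept)"] + b["MachineC"]))
}

## Parametric bootstrap: simulate new responses from the fitted model,
## refit, and collect the statistic each time.
bb <- bootMer(fm1, machine_scores, nsim = 500)

## Percentile CI for, e.g., the MachineB score (2nd component).
boot.ci(bb, index = 2, type = "perc")
```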

Best wishes, Lorenz
#
Dear Lorenz,

ah, thank you for pointing this out, I really should have read the list 
more carefully. I will look into the relevant conversation.

Kind regards
Michael

On 25.09.2014 14:35, lorenz.gygax at agroscope.admin.ch wrote:
#
Dear list,

Lorenz has pointed out to me Ben's suggestion to bootstrap the sums (or 
any linear combination) of coefficients I'm interested in. This may be 
the general approach, but I struggle to see why it would be illegitimate 
to simply change the reference level for the treatment contrast coding, 
fit the model again, and run confint() a second time (and again for 
MachineC):
Fixed effects:
             Estimate Std. Error t value
(Intercept)   60.322      3.529  17.096
MachineA      -7.967      2.421  -3.291
MachineC       5.950      2.446   2.432
(Intercept)  52.8500103 67.7944456
MachineA    -13.0931710 -2.8401544
MachineC      0.7692323 11.1307757

Now the CI of the intercept is the confidence interval for the overall 
score of MachineB. Adding lower and upper bounds from fm1 would have 
given somewhat similar, but somewhat wider intervals.
(I probably lack an understanding of how CIs are calculated. Is there an 
intuitive explanation for why the bounds don't add?)
Fixed effects:
             Estimate Std. Error t value
(Intercept)   66.272      1.806   36.69
MachineB      -5.950      2.446   -2.43
MachineA     -13.917      1.540   -9.04
(Intercept)  62.4471752  70.0972752
MachineB    -11.1307677  -0.7692243
MachineA    -17.1780524 -10.6552759

Thanks, and best wishes
Michael

On 25.09.2014 14:11, Michael Cone wrote:
#
Is there an issue with using the variance sum law and the variance-covariance matrix to sum two parameters and estimate the variance of the sum? I.e., add their variances and covariances as expressed in the variance-covariance matrix of the parameter estimates, probably obtained with vcov(modelObj).

Or is this too simplistic for a mixed model?

-Nate

--
Nathan J. Doogan, Ph.D.  | College of Public Health
Post-Doctoral Researcher | The Ohio State University



#
On 14-09-25 10:17 AM, Doogan, Nathan wrote:
That was exactly what I was going to suggest (but hadn't gotten around
to it).  It's slightly less accurate than parametric bootstrapping or
likelihood profiling (the former is computationally straightforward, the
latter would have to be implemented more or less from scratch), but
should be fine in many cases.

 To be more specific, if you have a linear combination of parameters in
mind (e.g. lincomb <- c(1,1,1) for adding all three parameters), you want

lincomb %*% vcov(fitted_model) %*% lincomb

(R should take care of the transposition where necessary, I think)
to get the variance.

By the way, I don't think it makes any sense at all to add confidence
intervals; as one example, imagine that two quantities have estimated
values of 1 and 2 with confidence intervals {-1,3} and {1,3}; should the
net confidence intervals actually be {0,6} ... ?  Or add many values
with lower bounds at zero -- should the joint lower bound really be
zero?  If you want to add something, add *variances* and convert to std
errors and from there to CIs ...
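
As a sketch of this in R (assuming a fitted model `fm1` with fixed effects (Intercept), MachineB, MachineC, as in the first message; the 95% multiplier 1.96 is a Wald-style normal approximation):

```r
library(lme4)

## Variance of (Intercept) + MachineB via the variance sum law:
## Var(c'b) = c' V c, with V = vcov(fm1) the var-covar matrix of the
## fixed-effect estimates.
lincomb <- c(1, 1, 0)                     # (Intercept) + MachineB
est <- sum(lincomb * fixef(fm1))          # point estimate of the sum
v   <- as.numeric(lincomb %*% vcov(fm1) %*% lincomb)
se  <- sqrt(v)                            # std error of the sum

## Wald-style 95% CI -- accounts for the covariance between the two
## coefficients, which is why naively adding interval bounds is too wide.
c(lower = est - 1.96 * se, upper = est + 1.96 * se)
```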
#
Dear Ben, Nathan,

thank you for the suggestions and explanations, that makes perfect 
sense. I come from a non-statistical background, and, lacking the 
basics, sometimes hit sort of a brick wall in my understanding.

Kind regards,
Michael
On 25.09.2014 16:26, Ben Bolker wrote: