Clustered data with Design package--bootcov() vs. robcov()

2 messages · jjh21, Frank E Harrell Jr

#
Another question related to bootcov():

A reviewer is concerned with the fact that bootstrapping the standard errors
does not give the same answers each time. What is a good way to address this
concern? Could I bootstrap, say, 100 times and report the mean standard
error of those 100 estimates? I am already doing 1,000 replications in the
bootstrap, but of course the answer is still slightly different each time.
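The run-to-run variation described here is easy to see with a toy bootstrap in base R (hypothetical data, not the Design package; `boot_se` is an illustrative helper, not a library function):

```r
# bootstrap SE of the sample mean: SD of B resampled means
boot_se <- function(x, B) {
  sd(replicate(B, mean(sample(x, replace = TRUE))))
}

x <- rnorm(100)

# two independent runs with B = 1000 give slightly different answers,
# because each run draws a different set of resamples
boot_se(x, 1000)
boot_se(x, 1000)
```

The Monte Carlo error of the SE estimate itself shrinks roughly like 1/sqrt(B), which is why increasing B reduces the discrepancy between runs.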
#
jjh21 wrote:
First, you can argue that everything we estimate has a margin of error 
and that the variation across different runs of the bootstrap is within 
the statistical precision of what can be estimated.  Second, run the 
bootstrap with 10,000 replications and be done with it.

Frank
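A minimal sketch of the second suggestion, assuming the Design package's `bootcov()` (the data and model here are hypothetical; fixing the RNG seed additionally makes any single run reproducible, which directly answers the reviewer's reproducibility concern):

```r
library(Design)   # the package discussed in this thread (later superseded by rms)

set.seed(1)       # fixed seed: the same bootstrap run can be reproduced exactly
x <- rnorm(200)
y <- 1 + 2 * x + rnorm(200)

# x=TRUE, y=TRUE store the design matrix and response, which bootcov() requires
fit <- ols(y ~ x, x = TRUE, y = TRUE)

# run the bootstrap with a large B and be done with it
bfit <- bootcov(fit, B = 10000)

sqrt(diag(bfit$var))   # bootstrap standard errors of the coefficients
```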