Running

2 messages · Ben Bolker, Jonathan Miller

[please keep r-sig-mixed-models in the Cc: if possible - although I
see it's a judgment call in this case because the e-mail contains both
generally pertinent info (uncertainty of FE small) and a personal-ish
message ...]

  Just to be clear: (1) I was suggesting that the uncertainty of the
fixed effects might be *large* with respect to the uncertainty of the
random effects, and largely independent of it; (2) have you already
tried implementing other (approximate, faster) methods for the
uncertainty on a small subset of the sites, to convince yourself that you
really need the full PB method?
On 2018-11-09 6:28 p.m., Jonathan Miller wrote:
Ben,

I am sorry; I did misunderstand your first email last night. I am using
GLMMs to predict water quality. My random effects are at the site and
basin level, and they do explain a lot of the variance in the models,
especially for "noisy" indicators like turbidity and fecal coliform. In
the project I am predicting both current conditions and potential
management scenarios throughout a region. Initially I just calculate the
mean difference between these two values (current vs. management
scenario) for the region, but I would like to get an idea of the
uncertainty in this mean reduction. Though the random effects are
significant, we are assuming that when trying to restore a particular
site, the random effect at that site will not change over the course of
the restoration. This implies that the uncertainty of improvement for a
given site is mostly driven by the uncertainty in the fixed effects that
are being adjusted for the management scenarios (i.e., increase in
canopy cover, nutrient loadings from wastewater treatment plants, etc.).
I tried the predictInterval function, but it seemed to give me
prediction intervals that include the random effects as well; in
essence, they were much larger than the ones I am getting using:

## param only
b3 <- bootMer(fm1,FUN=function(x) predict(x,newdata=test,re.form=~0),
              ## re.form=~0 is equivalent to use.u=FALSE
              nsim=100,seed=101)

I also used a Cholesky decomposition of the covariance matrix of the
fixed effects to "simulate" their uncertainty, which gave similar
results. I think the bootstrap is a bit easier to explain in my
manuscript, though, and I thought it might also be easier to code using
bootMer.
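
[For reference, a minimal sketch of the Cholesky approach described above.
`fm1` and `test` are the (hypothetical) fitted model and new-data frame from
the thread; the model-matrix idiom for new data may need adapting to your
model formula.]

## simulate fixed-effect uncertainty via a Cholesky factor of vcov(fm1)
library(lme4)

beta  <- fixef(fm1)              ## fixed-effect estimates
V     <- as.matrix(vcov(fm1))    ## their covariance matrix
L     <- t(chol(V))              ## lower-triangular factor: L %*% t(L) == V
nsim  <- 1000
## draw nsim samples from MVN(beta, V): beta + L %*% z with z ~ N(0, I)
z     <- matrix(rnorm(nsim * length(beta)), nrow = length(beta))
betas <- beta + L %*% z          ## each column is one simulated beta vector
## propagate through the fixed-effect design matrix for the new data
## ([-2] drops the response from the fixed-effects formula)
X     <- model.matrix(formula(fm1, fixed.only = TRUE)[-2], test)
preds <- X %*% betas             ## population-level predictions per draw
ci    <- apply(preds, 1, quantile, c(0.025, 0.975))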

It does seem to be working well, but my question was more about why
using parallel = "snow" isn't speeding things up, though your concern
about whether I really need the full PB method may be right as well.
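
[One common reason parallel = "snow" fails to help: if no cluster is passed,
bootMer pays cluster-setup overhead, and worker processes don't see objects
like `test` unless they are exported. A sketch of the usual wiring, with
`fm1` and `test` again standing in for the objects from the thread:]

library(lme4)
library(parallel)

cl <- makeCluster(4)               ## explicit PSOCK ("snow") cluster
clusterEvalQ(cl, library(lme4))    ## workers need lme4 loaded
clusterExport(cl, "test")          ## ...and any objects FUN refers to

b3 <- bootMer(fm1,
              FUN = function(x) predict(x, newdata = test, re.form = ~0),
              nsim = 100, seed = 101,
              parallel = "snow", ncpus = 4, cl = cl)
stopCluster(cl)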

Thank you,

Jonathan
On Fri, Nov 9, 2018 at 6:44 PM Ben Bolker <bbolker at gmail.com> wrote: