
Interpretation of GLMM output in R

Hi

Your message comes through with weird line breaks; you should turn off
the HTML compose option in your mail program and just write plain text.
On Fri, Jul 31, 2015 at 8:42 AM, Yvonne Hiller <yvonne.hiller at hotmail.de> wrote:
Your model assumes that the outcome is Poisson with expected value
exp(beta0 + beta1*parasitoids + btree)

btree is a unique added amount for each tree.  The estimated variance
of btree across trees is 0.1.

What does that 0.1 mean in terms of the predicted outcome?  Well, that
mostly depends on how big beta0 + beta1*parasitoids is.  If that
number is huge, say 1000, then adding a term with variance 0.1 won't
matter much.

On the other hand, if it is 0.01, then the random effect at the tree
level is very large, compared to the systematic components in your
model.  When the link function gets applied, the distribution of
outcomes changes in an interesting way.
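A quick simulation makes the scale issue concrete (the numbers here are made up for illustration, not from your model):

```r
# Sketch with made-up numbers: a tree-level variance of 0.1 on the log
# scale, compared against a large and a small systematic component.
set.seed(1)
btree <- rnorm(10000, mean = 0, sd = sqrt(0.1))

sd(btree)  # about 0.32 on the linear predictor (log) scale

# Systematic part = 3: the random effect is a modest perturbation
summary(exp(3 + btree))
# Systematic part = 0.01: the random effect dominates the predictor
summary(exp(0.01 + btree))
```

Either way exp(btree) multiplies the baseline rate by the same factor; what changes is how big that factor is relative to the systematic part.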

If you run ranef(), it will spit out the estimates of the random
differences among trees (the btree "BLUPs").  If you run the predict
method, you can see how those map onto predicted values, exp(beta0 +
beta1*parasitoids + btree).
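For instance (hypothetical data shaped like your question; the names d, m, and the coefficients are made up):

```r
library(lme4)

# Hypothetical data: Poisson counts of parasitoids observed per tree
set.seed(42)
d <- data.frame(tree = factor(rep(1:20, each = 5)),
                parasitoids = runif(100, 0, 10))
b <- rnorm(20, sd = sqrt(0.1))  # true tree effects, variance 0.1
d$count <- rpois(100, exp(0.5 + 0.1 * d$parasitoids + b[as.integer(d$tree)]))

m <- glmer(count ~ parasitoids + (1 | tree), data = d, family = poisson)

ranef(m)$tree                        # estimated btree (the "BLUPs")
head(predict(m, type = "response"))  # exp(beta0 + beta1*parasitoids + btree)
```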
I am puzzled why you see p values at all. In the version of lme4 I'm
running now, I don't see p values.

Let's compare versions, since I'm pretty sure p values were removed
quite a while ago.
R version 3.1.2 (2014-10-31)
Platform: x86_64-pc-linux-gnu (64-bit)

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8
 [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C
 [9] LC_ADDRESS=C               LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
[1] lme4_1.1-8   Matrix_1.2-2

loaded via a namespace (and not attached):
[1] grid_3.1.2      lattice_0.20-33 MASS_7.3-43     minqa_1.2.4
[5] nlme_3.1-121    nloptr_1.0.4    Rcpp_0.12.0     splines_3.1.2
[9] tools_3.1.2

Anyway...

If you had a huge sample, those p values would be accurate.

Since you have a small sample, there are other, more computationally
intensive ways to get p values.  Read the Journal of Statistical
Software paper by the lme4 team; they describe profiling and
bootstrapping.  Your sample is small enough that you could do either.
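In lme4 those look roughly like this (a sketch on simulated data; in practice you would run confint() on your own fitted model):

```r
library(lme4)

# Simulated stand-in for the questioner's model
set.seed(1)
d <- data.frame(tree = factor(rep(1:15, each = 6)),
                parasitoids = runif(90, 0, 10))
d$count <- rpois(90, exp(0.5 + 0.1 * d$parasitoids +
                         rnorm(15, sd = 0.3)[as.integer(d$tree)]))
m <- glmer(count ~ parasitoids + (1 | tree), data = d, family = poisson)

confint(m, method = "profile")           # profile likelihood intervals
confint(m, method = "boot", nsim = 200)  # parametric bootstrap (slower)
```

A fixed effect whose interval excludes zero plays the role of a "significant" p value, without leaning on large-sample Wald approximations.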
So far as I know, it is a hint about multicollinearity and numerical instability.
The best way to get an answer is to plot the predicted values from the model.
Use the predict function to plot them for various values of the predictor.
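Concretely (again on a hypothetical fitted model m, not your data):

```r
library(lme4)

set.seed(1)
d <- data.frame(tree = factor(rep(1:15, each = 6)),
                parasitoids = runif(90, 0, 10))
d$count <- rpois(90, exp(0.5 + 0.1 * d$parasitoids +
                         rnorm(15, sd = 0.3)[as.integer(d$tree)]))
m <- glmer(count ~ parasitoids + (1 | tree), data = d, family = poisson)

# Predict over a grid of predictor values; re.form = NA gives
# population-level predictions (random effects set to zero)
nd <- data.frame(parasitoids = seq(0, 10, length.out = 50))
nd$fit <- predict(m, newdata = nd, re.form = NA, type = "response")
plot(fit ~ parasitoids, data = nd, type = "l", ylab = "predicted count")
```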
Only if you think the term "effect size" is meaningful, and if you
have a formula for one.  In my experience with consulting here, it
means anything the researcher wants to call a summary number.

I've come to loathe the term because somebody in the US Dept. of
Education mandated that all studies report standardized effect sizes,
forcing everybody to make Herculean assumptions about all kinds of
model parameters to get Cohen's d or whatnot.
Good luck.  Next time, use a text-only email composer and try to ask one
specific question.  You are more likely to get attention if people can
easily read the message and see what you want.  This one was difficult
to read (for me at least) and also somewhat vague.