I am fitting a simple linear mixed model to abundance data: mean densities on the log10 scale for a set of spatial cells, with the sample sizes supplied as prior weights. The model has a random intercept for each cell and a linear fixed-effect trend in year.
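For reference, a minimal sketch of the calls (the data frame and column names are stand-ins: dat holds the log10-scale mean density dens, the covariate year, the factor cell, and the sample size n; the asreml syntax assumes asreml-R v4):

    library(lme4)
    library(asreml)

    ## Fixed year trend only, sample sizes as prior weights
    fit_lm     <- lm(dens ~ year, data = dat, weights = n)

    ## Adds a random intercept for each spatial cell
    fit_lmer   <- lmer(dens ~ year + (1 | cell), data = dat, weights = n)

    ## Same model structure in asreml
    fit_asreml <- asreml(dens ~ year, random = ~ cell,
                         weights = n, data = dat)

    summary(fit_lm)$sigma          # residual SE, lm
    sigma(fit_lmer)                # residual SE, lmer
    summary(fit_asreml)$varcomp    # variance components, asreml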
I get very similar estimates of the intercept and slope from lmer() and asreml(), but much smaller standard errors for these parameters from lmer(). This traces back to much smaller estimates of the residual standard error and of the cell-level standard deviation:

             residual SE   cell-level SD
    lmer     0.534         0.0014
    asreml   3.016         0.2151 (= 0.04626^0.5)
    lm       3.116         --

The residual standard error from the simple lm() fit is similar to, though slightly higher than, the asreml estimate, while the lmer estimates appear to be orders of magnitude out. Am I interpreting these results correctly? Does it have something to do with how the weighting is done?
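One check along those lines would be to refit both models without the weights and see whether the two packages then agree (a sketch with the same stand-in names as above; lme4 documents its prior weights as inverse-variance weights, so that Var(e_i) = sigma^2 / w_i):

    ## Refit without the prior weights to isolate their effect
    fit_lmer_uw   <- lmer(dens ~ year + (1 | cell), data = dat)
    fit_asreml_uw <- asreml(dens ~ year, random = ~ cell, data = dat)

    ## If these agree, the weighted-fit discrepancy comes from how
    ## each package interprets the weights argument
    sigma(fit_lmer_uw)              # residual SE, lmer
    summary(fit_asreml_uw)$varcomp  # variance components, asreml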