generalized linear mixed models: glmmPQL and glmer give very different results, neither of which fits the data well...
On Fri, 18 Mar 2011, Franssens, Samuel wrote:
Hi,
I have the following type of data: 86 subjects in three independent
groups (high power vs. low power vs. control). Each subject solves 8
reasoning problems of two kinds: conflict problems and noconflict
problems. I measure accuracy in solving the reasoning problems. To
summarize: binary response, one within-subject variable (TYPE), one
between-subject variable (POWER).
I wanted to fit the following model: for problem i, person j:
logodds(Y_ij) = b_0j + b_1j * TYPE_ij
with b_0j = b_00 + b_01 * POWER_j + u_0j, where u_0j ~ N(0, sigma^2),
and  b_1j = b_10 + b_11 * POWER_j
I think it makes sense, but I'm not sure.
Here are the observed cell means:

         conflict   noconflict
control  0.6896552  0.9568966
high     0.6935484  0.9677419
low      0.8846154  0.9903846
glmer gives me:

Formula: accuracy ~ f_power * f_type + (1 | subject)
   Data: syllogisms

Random effects:
 Groups  Name        Variance Std.Dev.
 subject (Intercept) 4.9968   2.2353
Number of obs: 688, groups: subject, 86

Fixed effects:
                            Estimate Std. Error z value Pr(>|z|)
(Intercept)                  1.50745    0.50507   2.985  0.00284 **
f_powerhp                    0.13083    0.70719   0.185  0.85323
f_powerlow                   2.04121    0.85308   2.393  0.01672 *
f_typenoconflict             3.28715    0.64673   5.083 3.72e-07 ***
f_powerhp:f_typenoconflict   0.21680    0.93165   0.233  0.81599
f_powerlow:f_typenoconflict -0.01199    1.45807  -0.008  0.99344
---
The strange thing is that when you convert the estimates to probabilities,
they are quite far off. For the reference cell, control/conflict (the
intercept), the glmer estimate is 1.5 -> 81% and the glmmPQL estimate is
1.14 -> 75%, whereas the observed proportion is 68%.
Am I doing something wrong?
You are forgetting that your model includes a random intercept for each
subject. The fixed-effect estimates are conditional on u_0j = 0, so
logistic(1.5) is the predicted probability for a *typical* subject, not the
population-average proportion. With a random-intercept variance near 5, the
marginal (population-average) probability is pulled well below the
conditional one, towards the observed cell mean.
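To make the point concrete, here is a small Monte Carlo sketch (my own
illustration, not from the thread): averaging the logistic curve over the
random-intercept distribution reported by glmer shows how far the marginal
probability falls below logistic(intercept). The numbers b0 and sd_u are
taken from the glmer output above.

```python
# Conditional vs. marginal probability in a random-intercept logit model.
# Illustrative sketch only; values are copied from the glmer output above.
import math
import random

random.seed(1)

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

b0 = 1.50745    # fixed-effect intercept (control, conflict cell)
sd_u = 2.2353   # random-intercept standard deviation

# Probability for a "typical" subject, i.e. one with u_0j = 0:
p_conditional = logistic(b0)

# Population-average probability: average logistic(b0 + u) over
# u ~ N(0, sd_u^2), approximated by simple Monte Carlo.
n = 200_000
p_marginal = sum(logistic(b0 + random.gauss(0, sd_u)) for _ in range(n)) / n

print(round(p_conditional, 2))  # ~0.82, the "far off" 81-82% figure
print(round(p_marginal, 2))     # noticeably lower, near the observed 0.69
```

Because the logistic function is nonlinear, averaging over the subject
effects shrinks the marginal probability towards 0.5, which is why the
back-transformed fixed effects should not be compared directly to the raw
cell means.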
David Duffy (MBBS PhD)
email: davidD at qimr.edu.au  ph: INT+61+7+3362-0217  fax: -0101
Epidemiology Unit, Queensland Institute of Medical Research
300 Herston Rd, Brisbane, Queensland 4029, Australia
GPG 4D0B994A