
proportional weights

10 messages · Marco, Göran Broström, Peter Dalgaard +1 more

#
Hello all, can anyone help clarify something?

According to R's lm() documentation, "Non-NULL weights can be used to
indicate that different observations have different variances (with the
values in weights being inversely proportional to the variances)."

Since the idea here is *proportion*, not equality, shouldn't the weight
vectors x and 2*x give the same result? And yet they don't; the
standard errors differ:

[1] 0.07108323
[1] 0.1005269

So what if I know a priori that observation A has a variance 2 times
bigger than observation B's? Both weights=c(1,2) and weights=c(2,4)
(and so on) represent this knowledge equally well, but we get different
regressions (since sigma differs).


Also, if we do the same thing with a glm() model, then we get a lot of
other differences, for example in the deviance.
#
On 05/02/14 22:40, Marco Inacio wrote:
The weights are in fact case weights, i.e., a weight of 2 is the same as 
including the corresponding item twice. I agree that the documentation 
is no wonder of clarity in this respect.

Btw, note that in your example (0.1005269 / 0.07108323)^2 = 2, your
constant weight factor.

Göran Broström
#
Dear Marco and Goran,

Perhaps the documentation could be clearer, but it is, after all, a brief help page. Supplying a weight of 2 to lm() is *not* equivalent to entering the observation twice. The weights are variance weights, not case weights.

You can see this by looking at the whole summary() output for the models, not just the residual standard errors:

------------- snip ---------
Call:
lm(formula = c(1, 2, 3, 1, 2, 3) ~ c(1, 2.1, 2.9, 1.1, 2, 3), 
    weights = rep(1, 6))

Residuals:
       1        2        3        4        5        6 
 0.06477 -0.08728  0.07487 -0.03996  0.01746 -0.02986 

Coefficients:
                          Estimate Std. Error t value Pr(>|t|)    
(Intercept)               -0.11208    0.08066   -1.39    0.237    
c(1, 2.1, 2.9, 1.1, 2, 3)  1.04731    0.03732   28.07 9.59e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.07108 on 4 degrees of freedom
Multiple R-squared:  0.9949,	Adjusted R-squared:  0.9937 
F-statistic: 787.6 on 1 and 4 DF,  p-value: 9.59e-06
Call:
lm(formula = c(1, 2, 3, 1, 2, 3) ~ c(1, 2.1, 2.9, 1.1, 2, 3), 
    weights = rep(2, 6))

Residuals:
       1        2        3        4        5        6 
 0.09160 -0.12343  0.10589 -0.05652  0.02469 -0.04223 

Coefficients:
                          Estimate Std. Error t value Pr(>|t|)    
(Intercept)               -0.11208    0.08066   -1.39    0.237    
c(1, 2.1, 2.9, 1.1, 2, 3)  1.04731    0.03732   28.07 9.59e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1005 on 4 degrees of freedom
Multiple R-squared:  0.9949,	Adjusted R-squared:  0.9937 
F-statistic: 787.6 on 1 and 4 DF,  p-value: 9.59e-06

------------- snip -------------

Notice that while the residual standard errors differ, the coefficients and their standard errors are identical. There are compensating changes in the residual variance and the weighted sum of squares and products matrix for X. 
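
A minimal sketch that verifies this directly, using the same data as in
the calls above:

------------- snip ---------
y <- c(1, 2, 3, 1, 2, 3)
x <- c(1, 2.1, 2.9, 1.1, 2, 3)
m1 <- lm(y ~ x, weights = rep(1, 6))
m2 <- lm(y ~ x, weights = rep(2, 6))
# identical coefficient tables: estimates, SEs, t and p values
all.equal(coef(summary(m1)), coef(summary(m2)))  # TRUE
# only the residual standard error changes, by sqrt(2)
summary(m2)$sigma / summary(m1)$sigma            # about 1.414
------------- snip -------------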

In contrast, literally entering each observation twice reduces the coefficient standard errors by a factor of sqrt((6 - 2)/(12 - 2)), i.e., the square root of the relative residual df of the models:

------------- snip --------
Call:
lm(formula = rep(c(1, 2, 3, 1, 2, 3), 2) ~ rep(c(1, 2.1, 2.9, 
    1.1, 2, 3), 2))

Residuals:
      Min        1Q    Median        3Q       Max 
-0.087276 -0.039963 -0.006201  0.064768  0.074874 

Coefficients:
                                   Estimate Std. Error t value Pr(>|t|)    
(Intercept)                       -0.11208    0.05101  -2.197   0.0527 .  
rep(c(1, 2.1, 2.9, 1.1, 2, 3), 2)  1.04731    0.02360  44.374 8.12e-13 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.06358 on 10 degrees of freedom
Multiple R-squared:  0.9949,	Adjusted R-squared:  0.9944 
F-statistic:  1969 on 1 and 10 DF,  p-value: 8.122e-13

---------- snip -------------
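
The same ratio can be computed directly; a minimal sketch:

------------- snip ---------
y <- c(1, 2, 3, 1, 2, 3)
x <- c(1, 2.1, 2.9, 1.1, 2, 3)
m1 <- lm(y ~ x)
m3 <- lm(rep(y, 2) ~ rep(x, 2))  # every observation entered twice
ses <- function(m) coef(summary(m))[, "Std. Error"]
ses(m3) / ses(m1)         # both about 0.632
sqrt((6 - 2) / (12 - 2))  # 0.6324555, the relative residual df
------------- snip -------------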

I hope this helps,

John

------------------------------------------------
John Fox, Professor
McMaster University
Hamilton, Ontario, Canada
http://socserv.mcmaster.ca/jfox/
On Thu, 6 Feb 2014 09:27:22 +0100
Göran Broström <goran.brostrom at umu.se> wrote:
#
Dear John,

thanks for the clarification! The lesson to be learned is that weights
may mean different things in different functions, and sometimes even
different things within the same function (glm)!
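
A small sketch of this with made-up data: in a binomial glm() the
weights are numbers of trials, so doubling them really is like doubling
the data, unlike the variance weights of lm():

------------- snip ---------
prop <- c(0.2, 0.5, 0.9, 0.4)      # observed proportions (made up)
z    <- c(0.1, 0.6, 1.2, 0.3)
g1 <- glm(prop ~ z, family = binomial, weights = rep(10, 4))
g2 <- glm(prop ~ z, family = binomial, weights = rep(20, 4))
# doubling the trial counts shrinks the SEs by sqrt(2)
coef(summary(g1))[, "Std. Error"] / coef(summary(g2))[, "Std. Error"]
# both about 1.414
------------- snip -------------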

Göran
#
Thanks for the answers.

According to your post here:
   http://tolstoy.newcastle.edu.au/R/e2/help/07/05/16311.html
there are 3 possible kinds of weights.

The person in this one:
   http://tolstoy.newcastle.edu.au/R/e2/help/07/06/18743.html
adds 2 others, drawing a distinction between weights inversely
proportional to the variance and weights equal to the inverse variance.

(Looking at other posts in those threads shows that other people also
get confused on this matter.)

So R's lm(), glm(), etc. weights **are** the inverse of the variance of
the observations, right? They're not merely **proportional** to the
inverse of the variance, because if that were true, then weight and
2*weight would achieve the same results, right?

I need a method to use proportional weights on observations, since I
know only the ratios of their variances. It doesn't have to be an R
function; an explanation of how to construct the likelihood would be
fine. If anybody knows an article on the subject, that would be a great
help too.
#
Dear Marco,

What I said in the 2007 r-help posting to which you refer is, "The weights
used by lm() are (inverse-)'variance weights,' reflecting the variances of
the errors, with observations that have low-variance errors therefore being
accorded greater weight in the resulting WLS regression." ?lm says,
"Non-NULL weights can be used to indicate that different observations have
different variances (with the values in weights being inversely proportional
to the variances)."

If I understand your situation correctly, you know the error variances up to
a constant of proportionality, in which case you can set the weights
argument to lm() to the inverses of these values. As I showed you in the
example I just posted, weight and 2*weight *do* produce the same coefficient
estimates and standard errors, with the difference between the two absorbed
by the residual standard error, which is the square-root of the estimated
constant of proportionality.
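
For instance, with weights in the 1:2 pattern of your question, a
minimal sketch on my toy data:

------------- snip ---------
y <- c(1, 2, 3, 1, 2, 3)
x <- c(1, 2.1, 2.9, 1.1, 2, 3)
w <- c(1, 2, 1, 2, 1, 2)    # inversely proportional to the variances
f1 <- lm(y ~ x, weights = w)
f2 <- lm(y ~ x, weights = 2 * w)
all.equal(coef(summary(f1)), coef(summary(f2)))  # TRUE
# the factor 2 is absorbed by the estimated proportionality constant
summary(f2)$sigma^2 / summary(f1)$sigma^2        # 2
------------- snip -------------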

If this is insufficiently clear, I'm afraid that I'll have to defer to
someone with greater powers of explanation.

Best,
 John
#
I think we can blame Tim Hesterberg for the confusion:

He writes

"
I'll add: 
* inverse-variance weights, where var(y for observation) = 1/weight   (as opposed to just being inversely proportional to the weight) *
"

And, although I'm not a native English speaker, I think there's a spurious comma in there. The intention was clearly to present this as a 4th type of weight, a special case of inverse-variance weights, not as an elaboration on the definition of inverse-variance weights.

I.e., it is the difference between

Motorists who are reckless drivers...

and

Motorists, who are reckless drivers...

-pd
On 06 Feb 2014, at 16:04, John Fox <jfox at mcmaster.ca> wrote:

#
In fact, that wasn't what caused my confusion, as I understood what he
meant despite the problem with the comma.

But I get the idea now: R uses weighted least squares, and:

"Var[\epsilon | X] = \Omega" (equality, not proportionality)
(https://en.wikipedia.org/wiki/Generalized_least_squares) (since WLS is
just a special case of GLS)

"The weights should, ideally, be equal to the reciprocal of the variance 
of the measurement."
(https://en.wikipedia.org/wiki/Linear_least_squares_(mathematics)#Weighted_linear_least_squares)
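
Indeed, lm()'s weighted fit matches the GLS formula with Omega
proportional to diag(1/w); a minimal check using the toy data from
earlier in the thread:

------------- snip ---------
y <- c(1, 2, 3, 1, 2, 3)
x <- c(1, 2.1, 2.9, 1.1, 2, 3)
w <- c(1, 2, 1, 2, 1, 2)
X <- cbind(1, x)
# beta-hat = (X' W X)^{-1} X' W y with W = diag(w)
b <- solve(t(X) %*% diag(w) %*% X, t(X) %*% (w * y))
all.equal(drop(b), coef(lm(y ~ x, weights = w)),
          check.attributes = FALSE)  # TRUE
------------- snip -------------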

I guess I need to find another strategy to use proportional weights
(weights known only up to a constant, as John says).

So, thank you very much to you all, and sorry for the inconvenience I
caused.
#
Dear Marco,

No, you are perfectly fine using WLS. The constant of proportionality is the
estimated error variance, i.e., the square of the residual standard error
(as I think I said earlier).
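
In likelihood terms, a minimal sketch (with a hypothetical negloglik
function): assume y_i ~ N(x_i' beta, sigma^2 / w_i) with the w_i known
only up to a constant. Maximizing reproduces the WLS estimates, and any
rescaling of the w_i is absorbed into the estimate of sigma^2:

------------- snip ---------
negloglik <- function(par, y, X, w) {
  beta  <- par[1:ncol(X)]
  sigma <- exp(par[ncol(X) + 1])   # log scale keeps sigma positive
  -sum(dnorm(y, mean = X %*% beta, sd = sigma / sqrt(w), log = TRUE))
}
y <- c(1, 2, 3, 1, 2, 3)
X <- cbind(1, c(1, 2.1, 2.9, 1.1, 2, 3))
w <- c(1, 2, 1, 2, 1, 2)
fit <- optim(c(0, 1, 0), negloglik, y = y, X = X, w = w)
fit$par[1:2]  # close to the WLS coefficients from lm(..., weights = w)
------------- snip -------------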

John
#
You're right. That was a little hard for me to grasp. Thanks for your
patience.