
SVD/Eigenvector confusion

7 messages · Philip Warner, Douglas Bates, Spencer Graves +1 more

#
My understanding of SVD is that, for A an mxn matrix, m > n:

     A = UWV*

where W is the diagonal matrix of the square roots of the eigenvalues of 
A*A, extended with zero rows, and U and V are the left & right eigenvectors 
of A. But this does not seem to be strictly true; it seems to require 
specific eigenvectors, and I am not at all sure how these are computed.

Since W should have a zero row at the bottom, which when multiplied by U 
will just cancel the contribution of the last column of U, I have simply 
omitted the last column of U from the outset:

eg, in R:

     a <- matrix(c(c(1,2,3),c(5,14,11)),3,2)
     u <- eigen(a %*% t(a))$vectors[,1:2]
     v <- eigen(t(a) %*% a)$vectors
     w <- sqrt(diag(eigen(t(a) %*% a)$values))
     u %*% w %*% t(v)

gives:
            [,1]       [,2]
[1,] -0.9390078  -5.011812
[2,] -3.3713773 -13.734403
[3,] -1.3236615 -11.324660

which seems a little off the mark. The value for v is:

           [,1]       [,2]
[1,] 0.1901389  0.9817572
[2,] 0.9817572 -0.1901389

Whereas svd(a)$v is:

            [,1]       [,2]
[1,] -0.1901389  0.9817572
[2,] -0.9817572 -0.1901389

If I substitute this in the above, I get:

     u %*% w %*% t(svd(a)$v)

which returns:

      [,1] [,2]
[1,]    1    5
[2,]    2   14
[3,]    3   11

which is what the SVD should do. I assume there is some rule about setting 
the signs on eigenvectors for SVD, and would appreciate any help.
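[One way to get consistent signs, sketched here for reference (this code is 
not from the original message): fix V from the eigendecomposition of t(A) %*% A, 
then recover each column of U as A v_i / w_i, which forces the signs of U to 
match those of V.]

```r
# Sketch: recover U from V so the signs are consistent (u_i = A v_i / w_i).
a  <- matrix(c(1, 2, 3, 5, 14, 11), 3, 2)
ev <- eigen(t(a) %*% a)        # eigendecomposition of A'A
v  <- ev$vectors               # right singular vectors (each up to sign)
w  <- sqrt(ev$values)          # singular values
u  <- a %*% v %*% diag(1 / w)  # left singular vectors with matching signs
u %*% diag(w) %*% t(v)         # reconstructs a, whatever signs eigen() chose
```

Because u is derived from v, flipping the sign of a column of v flips the 
corresponding column of u as well, so the product U W V* is unaffected.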



----------------------------------------------------------------
Philip Warner                    |     __---_____
Albatross Consulting Pty. Ltd.   |----/       -  \
(A.B.N. 75 008 659 498)          |          /(@)   ______---_
Tel: (+61) 0500 83 82 81         |                 _________  \
Fax: (+61) 03 5330 3172          |                 ___________ |
Http://www.rhyme.com.au          |                /           \|
                                  |    --________--
PGP key available upon request,  |  /
and from pgp.mit.edu:11371       |/
#
Philip Warner <pjw at rhyme.com.au> writes:
Eigenvectors are only known up to changes in sign.  If you want to be
more precise you can say that you determine a one-dimensional
eigenspace.  Generally we normalize the eigenvectors of a symmetric
matrix to have unit length but that still leaves you with two choices.

Is there a reason you are doing the SVD in such a complicated way?
Why not use the svd function directly?
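[For comparison, the direct route on the same test matrix looks like this; 
svd() returns u, d, and v with mutually consistent signs:]

```r
a <- matrix(c(1, 2, 3, 5, 14, 11), 3, 2)
s <- svd(a)                         # list with d (singular values), u, v
s$u %*% diag(s$d) %*% t(s$v)        # reconstructs a exactly (up to rounding)
```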
#
At 01:01 AM 29/02/2004, Douglas Bates wrote:
I am using it to debug other code that is designed to compute SVDs, so I 
actually want to understand the intermediate steps in constructing the SVD.

Nothing I have seen in the various books & net sources I have read seems to 
indicate that the eigenvectors for U & V have any special requirements 
other than being of unit length, but the experimentation in R seems to 
indicate otherwise, hence my confusion.
#
On Sun, 29 Feb 2004, Philip Warner wrote:

(A %*% t(A) is required, BTW.)  That is not the definition of the SVD.  
It is true that U are eigenvectors of A %*% t(A) and V of t(A) %*% A, but
that does not make them left/right eigenvectors of A (unless that is your
private definition).  Since eigenvectors are not unique, it does mean that
you cannot reverse the process, as you seem to be trying to do.

Eigenvectors are only defined up to a sign (and more if there are 
duplicate eigenvalues) and singular vectors are only defined up to a sign 
(changing both U and V).  You will find both vary by sign depending on the 
exact version of R used (including which BLAS and which compiler 
optimization level).  Singular vectors have unit length, but eigenvectors 
do not have to (although they do in the code you have used).
It is not expected to work.
There is no rule: the SVD is computed by a different algorithm.
#
The documentation ?svd says U and V are orthogonal, i.e., that the 
transpose is the inverse.  Hope this helps.  Spencer Graves
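[That orthogonality is easy to check numerically, e.g. on the thread's test 
matrix; crossprod(x) computes t(x) %*% x:]

```r
a <- matrix(c(1, 2, 3, 5, 14, 11), 3, 2)
s <- svd(a)
crossprod(s$u)   # t(u) %*% u: the 2x2 identity, so t(u) inverts u on its range
crossprod(s$v)   # t(v) %*% v: also the 2x2 identity
```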
Philip Warner wrote:

#
At 01:17 AM 29/02/2004, Prof Brian Ripley wrote:
Sorry, that should have read 'left & right singular vectors', and I'm 
beginning to suspect that they are only the starting point for deriving the 
singular vectors (based on 
http://www.cs.utk.edu/~dongarra/etemplates/node191.html)
...cut...
Maybe not by you... 8-}
So I assume my approach will not give me the singular vectors, and I need a 
different way of deriving them, is that right?


Thanks for your help, it is much appreciated.




#
On Sun, 29 Feb 2004, Philip Warner wrote:

I think there are ways to derive the correct signs, but your approach is a 
poor way to do the calculations as it squares the condition number of A.
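[The condition-number point can be seen directly; kappa(..., exact = TRUE) 
computes the 2-norm condition number via the singular values:]

```r
a <- matrix(c(1, 2, 3, 5, 14, 11), 3, 2)
kappa(a, exact = TRUE)            # condition number of A
kappa(t(a) %*% a, exact = TRUE)   # the square of the above
```

Squaring the condition number roughly halves the number of accurate digits 
available in floating point, which is why forming t(A) %*% A is avoided.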

There are standard algorithms for computing the SVD from A alone.