
how to invert a matrix with quite small eigenvalues

Maybe I should state more clearly that I define b in order to get the
orthogonal matrix bb$vectors.

We can also set diag(b) <- diag(b) + 100, which makes the eigenvalues
of b much larger, to make sure the orthogonal matrix is reliable.
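A minimal sketch of that diagonal shift (b here is an arbitrary synthetic symmetric matrix, not the actual data): adding a constant to the diagonal shifts every eigenvalue up by exactly that constant, moving them away from zero.

```r
# b is synthetic test data; the point is only the spectrum shift.
set.seed(1)
m <- matrix(rnorm(16), 4, 4)
b <- crossprod(m)                    # symmetric, possibly near-singular
before <- eigen(b, symmetric = TRUE)$values
diag(b) <- diag(b) + 100             # shift the whole spectrum by +100
after <- eigen(b, symmetric = TRUE)$values
range(after - before)                # each eigenvalue grows by 100
```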

My intention is to invert the covariance matrix in order to carry out
an algorithm that is common in estimating-equation methods such as GEE.

I am having difficulty inverting the covariance matrix. There are two possibilities here:

1. Rounding error in defining the covariance matrix makes the
eigenvalues too small.

2. The solve() function in R cannot cope with a matrix with such small
eigenvalues.

For the first possibility, I think nothing can be done unless we can
define numbers more precise than double precision. So I ask about the
possibility of coping with the second.

I cannot find out which method solve() uses by default to invert a matrix.
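One thing worth checking (a sketch with a synthetic near-singular matrix, not the actual aa): solve() stops with "system is computationally singular" when the estimated reciprocal condition number falls below its `tol` argument, which defaults to roughly machine epsilon; lowering tol forces it to attempt the inversion anyway.

```r
A <- diag(c(1, 1e-17))            # reciprocal condition number ~1e-17
try(solve(A))                     # fails with the default tolerance
Ainv <- solve(A, tol = 0)         # disable the rcond check and invert
max(abs(A %*% Ainv - diag(2)))    # residual from the identity
```

Whether the result of such a forced inversion is trustworthy is a separate question; for a genuinely ill-conditioned matrix the residual can still be large.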

For a symmetric matrix, I wonder whether there is some algorithm which
naturally makes the inverse symmetric and ensures it really is the
inverse, in the sense that the product with the original is an identity
matrix. I know there are many decompositions which can be used to find
the inverse of a matrix: QR, SVD, Cholesky, and maybe some iterative
methods. I wonder whether anyone can suggest which algorithm might be good.
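A sketch of two decomposition-based inverses for a symmetric positive definite matrix (S below is synthetic test data, not the actual covariance matrix):

```r
set.seed(2)
m <- matrix(rnorm(25), 5, 5)
S <- crossprod(m) + diag(5)          # symmetric positive definite

# Cholesky route: chol2inv() works on one triangle and mirrors it,
# so the returned inverse is symmetric.
S_inv_chol <- chol2inv(chol(S))

# Eigendecomposition route: V diag(1/lambda) V' is symmetric up to
# rounding, and lets you inspect (or truncate) small eigenvalues directly.
e <- eigen(S, symmetric = TRUE)
S_inv_eig <- e$vectors %*% diag(1 / e$values) %*% t(e$vectors)

max(abs(S_inv_chol - t(S_inv_chol)))   # symmetry of the Cholesky inverse
max(abs(S %*% S_inv_chol - diag(5)))   # residual from the identity
```

For a genuinely ill-conditioned matrix, the eigendecomposition route has the advantage that eigenvalues below some threshold can be dropped (a pseudoinverse) rather than amplified.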

Another strange point is that my friend used an LU decomposition in
Fortran to compute the inverse of aa for me. He used double precision,
of course; otherwise there was no inverse in Fortran either. He showed
that the product of the two matrices is almost the identity (the
biggest off-diagonal element is about 1e-8). I copied his inverse
matrix (with 31 digits!) into R and read in the aa I had sent him (16
digits). The product is still not an identity matrix. That is fairly
strange! Any suggestions?
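One possible explanation, sketched on a synthetic stand-in for aa: once the matrix and its inverse are both rounded to double precision (~16 digits), the residual of the product A %*% Ainv grows roughly with the condition number of A, so for an ill-conditioned matrix even a mathematically exact inverse need not reproduce the identity.

```r
set.seed(3)
n <- 4
V <- qr.Q(qr(matrix(rnorm(n * n), n)))   # random orthogonal matrix
lam <- c(1, 1e-4, 1e-8, 1e-12)           # condition number ~1e12
aa <- V %*% diag(lam) %*% t(V)
aa_inv <- V %*% diag(1 / lam) %*% t(V)   # mathematically exact inverse
max(abs(aa %*% aa_inv - diag(n)))        # far above machine epsilon
```

On this reading, the 31-digit Fortran inverse was never the problem: truncating it to double precision when reading it into R is enough to blow up the product by a factor of the condition number.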

Regards,

Huang
On 5/30/05, Ted Harding <Ted.Harding at nessie.mcc.ac.uk> wrote: