I think the bottom line can be summarized as follows:
1. Give up on Cholesky factors unless you have a matrix
you know must be symmetric and strictly positive definite. (I seem to
recall having had problems with chol() even with matrices that were
theoretically positive or nonnegative definite but were not because of
round-off error. However, I can't produce an example right now, so I'm
not sure of that.)
2. If you must test whether a matrix is symmetric, try
all.equal(A, t(A)). From the discussion, I had the impression that
this might not always do what you want, but it should be better than
all(A==t(A)). It is better still to decide from theory whether the
matrix should be symmetric.
3. Work with Ae <- eigen(A, symmetric=TRUE) or
Ae <- eigen((A+t(A))/2, symmetric=TRUE). From here, Ae$values <-
pmax(Ae$values, 0) ensures that the matrix reconstructed from Ae will
be positive semidefinite (aka nonnegative definite). If it must be
positive definite, use Ae$values <- pmax(Ae$values, eps), with eps > 0
chosen to make it as positive definite as you want.
4. To the maximum extent feasible, work with Ae, not A.
Prof. Ripley noted, "You can then work with [this] factorization to
ensure that (for example) variances are always non-negative because
they are always computed as sums of squares. This sort of thing is
done in many of the multivariate analysis calculations in R (e.g.
cmdscale) and in well-designed packages."
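Putting steps 2 and 3 together, a minimal sketch (the matrix A and the variable names below are mine, for illustration only):

```r
A <- matrix(c(1.0, 0.0, 0.5,
              0.0, 1.0, 0.1,
              0.5, 0.1, 1.0), nrow = 3)   # a symmetric example matrix

# Step 2: test symmetry with a tolerance, not exact equality
isTRUE(all.equal(A, t(A)))

# Step 3: symmetrize, eigendecompose, and clamp the eigenvalues
Ae <- eigen((A + t(A))/2, symmetric = TRUE)
Ae$values <- pmax(Ae$values, 0)           # forces positive semidefiniteness

# Reconstruct the repaired matrix from the factorization if needed:
A2 <- Ae$vectors %*% diag(Ae$values) %*% t(Ae$vectors)
```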
Hope this helps.
Spencer
On 1/30/2011 3:02 AM, Alex Smith wrote:
Thank you for all your input but I'm afraid I don't know what the final
conclusion is. I will have to check whether any of the eigenvalues are
negative. Why would setting them to zero make a difference? Sorry to
drag this out.
Thanks
On Sat, Jan 29, 2011 at 9:00 PM, Prof Brian Ripley <ripley at stats.ox.ac.uk> wrote:
On Sat, 29 Jan 2011, David Winsemius wrote:
On Jan 29, 2011, at 12:17 PM, Prof Brian Ripley wrote:
On Sat, 29 Jan 2011, David Winsemius wrote:
On Jan 29, 2011, at 10:11 AM, David Winsemius wrote:
On Jan 29, 2011, at 9:59 AM, John Fox wrote:
Dear David and Alex,
I'd be a little careful about testing exact equality as in M == t(M),
and careful as well about a test such as all(eigen(M)$values > 0),
since real arithmetic on a computer can't be counted on to be exact.
Which was why I pointed to that thread from 2005 and the functions
that had been put into packages. If you substitute all.equal for ==,
there might be fewer numerical false alarms, but I would expect there
could be other potential problems that might deserve warnings.
In addition to the two "is." functions cited earlier, there is the
"posdefify" function by Maechler in the sfsmisc package: "From a
matrix m, construct a "close" positive definite one." But again, that
is not usually what you want. There is no guarantee that the result is
positive-definite enough that the Cholesky factorization will succeed.
I don't see a Cholesky decomposition method being used in that function.
It appears to my reading to be following what would be called an
eigendecomposition.
Correct, but my point is that one does not usually want a "close"
positive definite one, but a 'square root'.
--
David.
Give up on Cholesky factors unless you have a matrix you know is
symmetric and strictly positive definite, and use the eigendecomposition
instead (setting negative eigenvalues to zero). You can then work with
this factorization to ensure that (for example) variances are always
non-negative because they are always computed as sums of squares.
This sort of thing is done in many of the multivariate analysis
calculations in R (e.g. cmdscale) and in well-designed packages.
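As a sketch of that idea (the matrix A and names here are mine, not code from the thread): a 'square root' factor built from the clamped eigendecomposition keeps quadratic forms non-negative by construction.

```r
A  <- matrix(c(1.0, 0.9,
               0.9, 1.0), nrow = 2)          # a symmetric example
Ae <- eigen((A + t(A))/2, symmetric = TRUE)
L  <- Ae$vectors %*% diag(sqrt(pmax(Ae$values, 0)), nrow(A))

# L %*% t(L) reconstructs A (after clamping), and any quadratic form
# t(x) %*% A %*% x equals sum((t(L) %*% x)^2), a sum of squares,
# so computed variances can never come out negative.
```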
On Jan 29, 2011, at 7:58 AM, David Winsemius wrote:
On Jan 29, 2011, at 7:22 AM, Alex Smith wrote:
Hello, I am trying to determine whether a given matrix is symmetric
and positive definite. The matrix has real-valued elements.
I have been reading about the Cholesky method and another method that
uses the eigenvalues. I can't understand how to implement either of
the two. Can someone point me in the right direction? I have looked at
?chol to see the help, but if the matrix is not positive definite it
comes up as an error. I know how to get the eigenvalues, but how can I
then put this into a program to check them, as they just print with
$values.
Is checking that the eigenvalues are positive enough to determine
whether the matrix is positive definite?
That is a fairly simple linear algebra fact that googling or checking
a standard reference should have confirmed.
Just to be clear (since on the basis of some off-line communications
it did not seem to be clear): A real, symmetric matrix is Hermitian
(and therefore all of its eigenvalues are real). Further, it is
positive-definite if and only if its eigenvalues are all positive.
qwe <- c(2, -1, 0, -1, 2, -1, 0, 1, 2)
q <- matrix(qwe, nrow = 3)
isPosDef <- function(M) {
  if (all(M == t(M))) {             # first test symmetry
    all(eigen(M)$values > 0)        # then test that all eigenvalues are positive
  } else {
    FALSE                           # not symmetric
  }
}
m
     [,1] [,2] [,3] [,4] [,5]
[1,]  1.0  0.0  0.5 -0.3  0.2
[2,]  0.0  1.0  0.1  0.0  0.0
[3,]  0.5  0.1  1.0  0.3  0.7
[4,] -0.3  0.0  0.3  1.0  0.4
[5,]  0.2  0.0  0.7  0.4  1.0
isPosDef(m)
[1] TRUE
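Given the caution above about exact floating-point comparisons, a tolerance-aware variant might look like this (isPosDef2 and its tol argument are my names, not from the thread; it uses base R's isSymmetric(), which compares via all.equal):

```r
isPosDef2 <- function(M, tol = 1e-8) {
  if (!isSymmetric(M)) return(FALSE)   # tolerance-based symmetry test
  ev <- eigen(M, symmetric = TRUE, only.values = TRUE)$values
  all(ev > tol * max(abs(ev)))         # strictly positive, up to tol
}
```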
You might want to look at prior postings by people more knowledgeable
than me:
Or look at what are probably better solutions in available packages:
--
David.
This is the matrix that I know is positive definite.
eigen(m)
$values
[1] 2.0654025 1.3391291 1.0027378 0.3956079 0.1971228
$vectors
[,1] [,2] [,3] [,4] [,5]
[1,] -0.32843233 0.69840166 0.080549876 0.44379474 0.44824689
[2,] -0.06080335 0.03564769 -0.993062427 -0.01474690 0.09296096
[3,] -0.64780034 0.12089168 -0.027187620 0.08912912 -0.74636235
[4,] -0.31765040 -0.68827876 0.007856812 0.60775962 0.23651023
[5,] -0.60653780 -0.15040584 0.080856897 -0.65231358 0.42123526
and these are the eigenvalues and eigenvectors.
I thought of using
eigen(m, only.values = TRUE)
$values
[1] 2.0654025 1.3391291 1.0027378 0.3956079 0.1971228
$vectors
NULL
m<- matrix(scan(textConnection("
1.0 0.0 0.5 -0.3 0.2
0.0 1.0 0.1 0.0 0.0
0.5 0.1 1.0 0.3 0.7
-0.3 0.0 0.3 1.0 0.4
0.2 0.0 0.7 0.4 1.0
")), 5, byrow=TRUE)
#Read 25 items
[,1] [,2] [,3] [,4] [,5]
[1,] 1.0 0.0 0.5 -0.3 0.2
[2,] 0.0 1.0 0.1 0.0 0.0
[3,] 0.5 0.1 1.0 0.3 0.7
[4,] -0.3 0.0 0.3 1.0 0.4
[5,] 0.2 0.0 0.7 0.4 1.0
all( eigen(m)$values>0 )
#[1] TRUE
Then I thought of using a logical expression to determine if there are
negative eigenvalues, but it wouldn't work. I don't know what the
problem is:
b <- (a < 0)
Error: (list) object cannot be coerced to type 'double'
??? where did "a" and "b" come from?
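A likely explanation (guessing, since the code that created `a` was not shown): `a` probably held the entire list returned by eigen(), and a list cannot be compared to a number. Comparing the numeric $values component instead would work:

```r
a <- eigen(m)        # m as constructed earlier in the thread;
                     # 'a' is a list with components $values and $vectors
# b <- (a < 0)       # fails: a list cannot be coerced to type 'double'
b <- (a$values < 0)  # compare the numeric eigenvalues instead
any(b)               # TRUE if any eigenvalue is negative
```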