Positive Definite Matrix
19 messages · David Winsemius, John Fox, Alex Smith +5 more
On Jan 29, 2011, at 7:22 AM, Alex Smith wrote:
Hello, I am trying to determine whether a given matrix is symmetric and positive definite. The matrix has real-valued elements. I have been reading about the Cholesky method, and another method is to find the eigenvalues. I can't understand how to implement either of the two. Can someone point me in the right direction? I have used ?chol to see the help, but if the matrix is not positive definite it comes up as an error. I know how to get the eigenvalues, but how can I then put this into a program to check them, as they just come up as $values? Is checking that the eigenvalues are positive enough to determine whether the matrix is positive definite?
That is a fairly simple linear algebra fact that googling or pulling out a standard reference should have confirmed.
m
[,1] [,2] [,3] [,4] [,5]
[1,] 1.0 0.0 0.5 -0.3 0.2
[2,] 0.0 1.0 0.1 0.0 0.0
[3,] 0.5 0.1 1.0 0.3 0.7
[4,] -0.3 0.0 0.3 1.0 0.4
[5,] 0.2 0.0 0.7 0.4 1.0
This is the matrix that I know is positive definite.
eigen(m)
$values
[1] 2.0654025 1.3391291 1.0027378 0.3956079 0.1971228
$vectors
[,1] [,2] [,3] [,4] [,5]
[1,] -0.32843233 0.69840166 0.080549876 0.44379474 0.44824689
[2,] -0.06080335 0.03564769 -0.993062427 -0.01474690 0.09296096
[3,] -0.64780034 0.12089168 -0.027187620 0.08912912 -0.74636235
[4,] -0.31765040 -0.68827876 0.007856812 0.60775962 0.23651023
[5,] -0.60653780 -0.15040584 0.080856897 -0.65231358 0.42123526
and these are the eigenvalues and eigenvectors.
I thought of using
eigen(m,only.values=T)
$values
[1] 2.0654025 1.3391291 1.0027378 0.3956079 0.1971228
$vectors
NULL
> m <- matrix(scan(textConnection("
1.0 0.0 0.5 -0.3 0.2
0.0 1.0 0.1 0.0 0.0
0.5 0.1 1.0 0.3 0.7
-0.3 0.0 0.3 1.0 0.4
0.2 0.0 0.7 0.4 1.0
")), 5, byrow=TRUE)
#Read 25 items
> m
[,1] [,2] [,3] [,4] [,5]
[1,] 1.0 0.0 0.5 -0.3 0.2
[2,] 0.0 1.0 0.1 0.0 0.0
[3,] 0.5 0.1 1.0 0.3 0.7
[4,] -0.3 0.0 0.3 1.0 0.4
[5,] 0.2 0.0 0.7 0.4 1.0
all( eigen(m)$values >0 )
#[1] TRUE
Then I thought of using a logical expression to determine if there are negative eigenvalues, but I couldn't get it to work. I don't know what this error means:

b <- (a < 0)
Error: (list) object cannot be coerced to type 'double'
??? where did "a" and "b" come from?
David Winsemius, MD West Hartford, CT
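For reference: that error is consistent with a holding the entire eigen() result, which is a list, rather than its numeric $values component. A minimal sketch of the intended check (the names a and b are just those from the question):

a <- eigen(m, only.values = TRUE)$values  # numeric vector of eigenvalues
b <- a < 0                                # logical vector, one entry per eigenvalue
any(b)                                    # FALSE here: no negative eigenvalues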
On Jan 29, 2011, at 7:58 AM, David Winsemius wrote:
Just to be clear (since on the basis of some off-line communications it did not seem to be clear): A real, symmetric matrix is Hermitian (and therefore all of its eigenvalues are real). Further, it is positive-definite if and only if its eigenvalues are all positive.
qwe <- c(2, -1, 0, -1, 2, -1, 0, 1, 2)
q <- matrix(qwe, nrow = 3)

isPosDef <- function(M) {
  if (all(M == t(M))) {                      # first test symmetry
    if (all(eigen(M)$values > 0)) TRUE else FALSE
  } else FALSE                               # not symmetric
}

> isPosDef(q)
[1] FALSE
m
     [,1] [,2] [,3] [,4] [,5]
[1,]  1.0  0.0  0.5 -0.3  0.2
[2,]  0.0  1.0  0.1  0.0  0.0
[3,]  0.5  0.1  1.0  0.3  0.7
[4,] -0.3  0.0  0.3  1.0  0.4
[5,]  0.2  0.0  0.7  0.4  1.0

> isPosDef(m)
[1] TRUE

You might want to look at prior postings by people more knowledgeable than me:
http://finzi.psych.upenn.edu/R/Rhelp02/archive/57794.html

Or look at what are probably better solutions in available packages:
http://finzi.psych.upenn.edu/R/library/corpcor/html/rank.condition.html
http://finzi.psych.upenn.edu/R/library/matrixcalc/html/is.positive.definite.html
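For reference, a hedged sketch of how those package functions might be called (this assumes the matrixcalc and corpcor packages are installed; check their help pages for the exact arguments):

library(matrixcalc)
is.positive.definite(m)   # TRUE; an eigenvalue test with a numerical tolerance

library(corpcor)
rank.condition(m)         # rank, condition number and tolerance of m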
David.
David Winsemius, MD
West Hartford, CT
Dear David and Alex,

I'd be a little careful about testing exact equality as in all(M == t(M)), and careful as well about a test such as all(eigen(M)$values > 0), since real arithmetic on a computer can't be counted on to be exact.

Best,
John

--------------------------------
John Fox
Senator William McMaster Professor of Social Statistics
Department of Sociology
McMaster University
Hamilton, Ontario, Canada
http://socserv.mcmaster.ca/jfox
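A tolerance-based variant along those lines (a minimal sketch; the function name isPosDefTol and the default tolerance are choices made here, not something from the thread):

isPosDefTol <- function(M, tol = 1e-8) {
  # isSymmetric() compares M with t(M) up to a numerical tolerance
  if (!isSymmetric(M, tol = tol)) return(FALSE)
  ev <- eigen(M, symmetric = TRUE, only.values = TRUE)$values
  # require the eigenvalues to be positive relative to the largest one
  all(ev > tol * max(abs(ev)))
}

isPosDefTol(m)   # TRUE for the matrix above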
On Jan 29, 2011, at 9:59 AM, John Fox wrote:
Dear David and Alex, I'd be a little careful about testing exact equality as in all(M == t(M)), and careful as well about a test such as all(eigen(M)$values > 0), since real arithmetic on a computer can't be counted on to be exact.
Which was why I pointed to that thread from 2005 and the existing work that had been put into packages. If you want to substitute all.equal for all, there might be fewer numerical false alarms, but I would think there could be other potential problems that might deserve warnings.
David Winsemius, MD West Hartford, CT
On Jan 29, 2011, at 10:11 AM, David Winsemius wrote:
In addition to the two "is." functions cited earlier there is also a "posdefify" function by Maechler in the sfsmisc package: " Description : From a matrix m, construct a "close" positive definite one."
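A usage sketch (the example matrix A is made up here; see ?posdefify in sfsmisc for the exact arguments):

library(sfsmisc)

A <- matrix(c(1.0, 0.9, 0.8,
              0.9, 1.0, 0.9,
              0.8, 0.9, 0.5), nrow = 3)   # symmetric but not positive definite
eigen(A, only.values = TRUE)$values        # one eigenvalue is negative

A2 <- posdefify(A)                         # a "close" positive definite matrix
eigen(A2, only.values = TRUE)$values       # the negative eigenvalue is gone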
David.
On Sat, 29 Jan 2011, David Winsemius wrote:
In addition to the two "is." functions cited earlier there is also a "posdefify" function by Maechler in the sfsmisc package: " Description : From a matrix m, construct a "close" positive definite one."
But again, that is not usually what you want. There is no guarantee that the result is positive-definite enough that the Cholesky decomposition will work. Give up on Cholesky factors unless you have a matrix you know must be symmetric and strictly positive definite, and use the eigendecomposition instead (setting negative eigenvalues to zero). You can then work with the factorization to ensure that (for example) variances are always non-negative because they are always computed as sums of squares. This sort of thing is done in many of the multivariate analysis calculations in R (e.g. cmdscale) and in well-designed packages.
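A minimal sketch of that approach (the helper name sqrtPSD is made up here):

sqrtPSD <- function(A) {
  # symmetric 'square root' via the eigendecomposition,
  # clipping any (round-off) negative eigenvalues to zero
  e <- eigen((A + t(A)) / 2, symmetric = TRUE)
  e$vectors %*% diag(sqrt(pmax(e$values, 0))) %*% t(e$vectors)
}

R <- sqrtPSD(m)
max(abs(R %*% R - m))   # essentially zero: R %*% R reproduces m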
Brian D. Ripley, ripley at stats.ox.ac.uk Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/ University of Oxford, Tel: +44 1865 272861 (self) 1 South Parks Road, +44 1865 272866 (PA) Oxford OX1 3TG, UK Fax: +44 1865 272595
On Jan 29, 2011, at 12:17 PM, Prof Brian Ripley wrote:
But again, that is not usually what you want. There is no guarantee that the result is positive-definite enough that the Cholesky decomposition will work.
I don't see a Cholesky decomposition method being used in that function. It appears to my reading to be following what would be called an eigendecomposition.
David.
David Winsemius, MD
West Hartford, CT
On Sat, 29 Jan 2011, David Winsemius wrote:
I don't see a Cholesky decomposition method being used in that function. It appears to my reading to be following what would be called an eigendecomposition.
Correct, but my point is that one does not usually want a "close" positive definite one but a 'square root'.
Brian D. Ripley, ripley at stats.ox.ac.uk Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/ University of Oxford, Tel: +44 1865 272861 (self) 1 South Parks Road, +44 1865 272866 (PA) Oxford OX1 3TG, UK Fax: +44 1865 272595
Prof Brian Ripley <ripley at stats.ox.ac.uk>
on Sat, 29 Jan 2011 21:00:19 +0000 (GMT) writes:
Correct, but my point is that one does not usually want a "close" positive definite one but a 'square root'.
Probably, but the more "unusual" case is still not at all uncommon, and the help page of posdefify, which David mentioned, has some relatively recent literature references.
Further, it has had the following note for a while:

Note: As we found out, there are more sophisticated algorithms to solve this and related problems. See the references and the 'nearPD()' function in the 'Matrix' package.
and when you look at the nearPD() function in Matrix and
notably its help page, you see that it's based on relatively
recent published research and algorithms from the NA community.
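A small usage sketch of nearPD() (this assumes the Matrix package; nearPD() returns a list whose 'mat' component holds the adjusted matrix; the example matrix A is made up here):

library(Matrix)

A <- matrix(c(1.0, 0.9, 0.8,
              0.9, 1.0, 0.9,
              0.8, 0.9, 0.5), nrow = 3)   # symmetric, one negative eigenvalue

fit <- nearPD(A)
A.pd <- as.matrix(fit$mat)                 # nearest positive definite matrix
eigen(A.pd, only.values = TRUE)$values     # no longer negative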
Martin
I think the bottom line can be summarized as follows:
1. Give up on Cholesky factors unless you have a matrix you know must be symmetric and strictly positive definite. (I seem to recall having had problems with chol even with matrices that were theoretically positive or nonnegative definite but were not because of round-off. However, I can't produce an example right now, so I'm not sure of that.)

2. If you must test whether a matrix is symmetric, try all.equal(A, t(A)). From the discussion, I had the impression that this might not always do what you want, but it should be better than all(A == t(A)). It is better still to decide from theory whether the matrix should be symmetric.

3. Work with Ae = eigen(A, symmetric=TRUE) or eigen((A+t(A))/2, symmetric=TRUE). From here, Ae$values <- pmax(Ae$values, 0) ensures that A will be positive semidefinite (aka nonnegative definite). If it must be positive definite, use Ae$values <- pmax(Ae$values, eps), with eps > 0 chosen to make it as positive definite as you want. (A sketch of this appears after the list.)

4. To the maximum extent feasible, work with Ae, not A. Prof. Ripley noted, "You can then work with [this] factorization to ensure that (for example) variances are always non-negative because they are always computed as sums of squares. This sort of thing is done in many of the multivariate analysis calculations in R (e.g. cmdscale) and in well-designed packages."
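A minimal sketch of steps 3 and 4 (the helper name fixPD is made up here; eps = 0 gives the positive semidefinite version):

fixPD <- function(A, eps = 0) {
  Ae <- eigen((A + t(A)) / 2, symmetric = TRUE)
  Ae$values <- pmax(Ae$values, eps)                  # clip the eigenvalues from below
  Ae$vectors %*% diag(Ae$values) %*% t(Ae$vectors)   # reconstruct from the factorization
}

fixPD(m)               # unchanged (up to rounding): m is already positive definite
fixPD(m, eps = 1e-8)   # "as positive definite as you want"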
Hope this helps.
Spencer
On 1/30/2011 3:02 AM, Alex Smith wrote:
Thank you for all your input, but I'm afraid I don't know what the final conclusion is. I will have to check the eigenvalues to see if any are negative. Why would setting them to zero make a difference? Sorry to drag this on.

Thanks
On Jan 30, 2011, at 6:02 AM, Alex Smith wrote:
Thank you for all your input, but I'm afraid I don't know what the final conclusion is. I will have to check the eigenvalues to see if any are negative. Why would setting them to zero make a difference? Sorry to drag this on.
The discussion is proceeding on the assumption that the "true" matrix is PD and that a negative eigenvalue has been reported only because of numerical imprecision. You would only decide to set the negative eigenvalues to zero if you had prior knowledge that the matrix _should_ be PD and you needed to do something further with the matrix on that basis. Usually the matrices in question are the result of many calculations that may have introduced enough numerical round-off error to distort the result.
David.
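To illustrate the round-off point, a constructed example (not from the thread): a rank-deficient cross-product matrix is positive semidefinite in exact arithmetic, yet eigen() may report tiny negative values.

set.seed(1)
X <- matrix(rnorm(40), nrow = 4)   # 4 observations on 10 variables
S <- crossprod(X)                  # 10 x 10, rank at most 4, so PSD in exact arithmetic
ev <- eigen(S, symmetric = TRUE, only.values = TRUE)$values
min(ev)                            # may be a tiny negative number, on the order of -1e-15

pmax(ev, 0)                        # clip the round-off negatives before further use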
David Winsemius, MD
West Hartford, CT
Hi,

You should know about symmetry from the way in which your matrix is generated. Assuming that it is symmetric, the easiest thing would be to always use the `nearPD' function in the "Matrix" package when you suspect that your matrix could be indefinite.

An important thing to keep in mind is how to define lack of positive definiteness. In `nearPD' it is the magnitude of the parameter `eig.tol', which is the absolute value of the ratio of the smallest to largest eigenvalue. The default is 1.e-06, which I think is a bit conservative, given that the machine epsilon is around 1.e-16. I would probably use 1.e-08, which is the square-root of machine epsilon.

You could still use `nearPD' if your matrix is "almost" symmetric, since nearPD will work with the symmetric part of the matrix, (A + t(A))/2.

Best,
Ravi.

____________________________________________________________________
Ravi Varadhan, Ph.D.
Assistant Professor, Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University
Ph. (410) 502-2619
email: rvaradhan at jhmi.edu

----- Original Message -----
From: Spencer Graves <spencer.graves at structuremonitoring.com>
Date: Sunday, January 30, 2011 9:22 am
Subject: Re: [R] Positive Definite Matrix
To: Alex Smith <alex.smit4 at gmail.com>
Cc: r-help at r-project.org, John Fox <jfox at mcmaster.ca>, Prof Brian Ripley <ripley at stats.ox.ac.uk>
I think the bottom line can be summarized as follows:
1. Give up on Cholesky factors unless you have a matrix
you know must be symmetric and strictly positive definite. (I seem to
recall having had problems with chol even with matrices that were
theoretically positive or nonnegative definite but were not because of
round off. However, I can't produce an example right now, so I'm not
sure of that.)
2. If you must test whether a matrix is symmetric, try
all.equal(A, t(A)). From the discussion, I had the impression that
this might not always do what you want, but it should be better than
all(A==t(A)). It is better still to decide from theory whether the
matrix should be symmetric.
3. Work with Ae = eigen(A, symmetric=TRUE) or
eigen((A+t(A))/2, symmetric=TRUE). From there, Ae$values <-
pmax(Ae$values, 0) ensures that the matrix reconstructed from Ae
will be positive semidefinite (aka nonnegative definite). If it must
be positive definite, use Ae$values <- pmax(Ae$values, eps), with
eps>0 chosen to make it as positive definite as you want (see the
short sketch after this message).
4. To the maximum extent feasible, work with Ae, not A.
Prof. Ripley noted, "You can then work with [this] factorization to
ensure that (for example) variances are always non-negative because
they are always computed as sums of squares. This sort of thing is
done in many of the multivariate analysis calculations in R (e.g.
cmdscale) and in well-designed packages."
Hope this helps.
Spencer
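A short sketch pulling points 2-4 together (A stands for whatever matrix is in question; the names are only illustrative):

isTRUE(all.equal(A, t(A)))                       # point 2: symmetry test with a numerical tolerance
Ae <- eigen((A + t(A))/2, symmetric = TRUE)      # point 3: symmetrize, then eigendecompose
Ae$values <- pmax(Ae$values, 0)                  # clip round-off negatives at zero
# point 4: work with the factorization Ae, e.g. compute a quadratic form as a sum of squares
w <- rnorm(nrow(A))
sum((sqrt(Ae$values) * crossprod(Ae$vectors, w))^2)   # w' A w for the repaired A, never negative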
On Sun, 30 Jan 2011, David Winsemius wrote:
On Jan 30, 2011, at 6:02 AM, Alex Smith wrote:
Thank you for all your input but I'm afraid I don't know what the final conclusion is. I will have to check whether any of the eigenvalues are negative. Why would setting them to zero make a difference? Sorry to drag this on.
The discussion is proceeding on the assumption that the "true" matrix is PD and that a negative eigenvalue has been reported only because of numerical imprecision. You would only decide to set the negative eigenvalues to zero if you had prior knowledge that the matrix _should_ be PD and you needed to do something further with the matrix on that basis. Usually the matrices in question are the result of many calculations that may have introduced enough numerical round-off error to distort the result.
In one common scenario you have computed variances and covariances individually, then constructed a var-covar matrix from them. When the true var-covar matrix is nearly singular, a matrix estimated in this way can have negative eigenvalues because of different patterns of missing values for different pairs of variables. All true var-covar matrices are non-negative definite: they may be singular (having at least one zero eigenvalue), but they cannot have a negative eigenvalue. Mike
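A hedged sketch of how one might check for the scenario Mike describes (the data frame dat and its missingness pattern are purely hypothetical; whether a negative eigenvalue actually appears depends on the data):

V <- cov(dat, use = "pairwise.complete.obs")                   # var-covar matrix from pairwise-complete pairs
min(eigen(V, symmetric = TRUE, only.values = TRUE)$values)     # negative => not a valid var-covar matrix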
> I think the bottom line can be summarized as
> follows:
> 1. Give up on Cholesky factors unless you have a
> matrix you know must be symmetric and strictly positive
> definite. (I seem to recall having had problems with chol
> even with matrices that were theoretically positive or
> nonnegative definite but were not because of round off.
> However, I can't produce an example right now, so I'm not
> sure of that.)
(other respondents to this thread mentioned such scenarios;
they are not at all uncommon)
> 2. If you must test whether a matrix is
> symmetric, try all.equal(A, t(A)). From the discussion, I
> had the impression that this might not always do what you
> want, but it should be better than all(A==t(A)). It is
> better still to decide from theory whether the matrix
> should be symmetric.
Hmm, yes: Exactly for this reason, R has had a *generic* function
isSymmetric()
-------------
for quite a while:
In "base R" it uses all.equal() {but with tightened default tolerance},
but e.g., in the Matrix package,
it "decides from theory" --- the Matrix package containing
quite a few Matrix classes that are symmetric "by definition".
So my recommendation really is to use isSymmetric().
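For example, with the 5 x 5 matrix m from earlier in the thread (the tiny perturbation is only meant to mimic round-off):

isSymmetric(m)              # TRUE
m2 <- m
m2[1, 2] <- 1e-15           # break exact symmetry by an amount comparable to machine precision
all(m2 == t(m2))            # FALSE: exact comparison flags it
isSymmetric(m2)             # TRUE: still symmetric within the default numerical tolerance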
> 3. Work with the Ae = eigen(A,
> symmetric=TRUE) or eigen((A+t(A))/2, symmetric=TRUE).
> From here, Ae$values <- pmax(Ae$values, 0) ensures that A
> will be positive semidefinite (aka nonnegative definite).
> If it must be positive definite, use Ae$values <-
> pmax(Ae$values, eps), with eps>0 chosen to make it as
> positive definite as you want.
hmm, almost: The above trick has been the origin and basic
building block of posdefify() from the sfsmisc package,
mentioned earlier in this thread.
But I have mentioned that there are much better algorithms
nowadays, and Matrix::nearPD() uses one of them, with
variations on the theme available as optional arguments.
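A side-by-side sketch of the two (A is some indefinite symmetric matrix; eig.tol is the knob Ravi Varadhan discussed earlier in the thread, not a recommendation of the package itself):

library(sfsmisc)
library(Matrix)
A.pd1 <- posdefify(A)                                # eigenvalue-clipping construction
A.pd2 <- as.matrix(nearPD(A, eig.tol = 1e-8)$mat)    # the newer algorithm behind nearPD()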
> 4. To the maximum extent feasible, work with
> Ae, not A. Prof. Ripley noted, "You can then work with
> [this] factorization to ensure that (for example)
> variances are always non-negative because they are always
> computed as sums of squares. This sort of thing is done
> in many of the multivariate analysis calculations in R
> (e.g. cmdscale) and in well-designed packages."
yes, or---as mentioned by Prof Ripley as well---compute a
square root of the matrix {e.g. via the eigen() decomposition
with modified eigenvalues} and work with that.
Unfortunately, in quite a few situations you just need a
pos.def. matrix to be passed to another R function as
cov / cor matrix, and there, nearPD() comes in very handy.
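A sketch of that 'square root' construction (again with illustrative names; assumes A is symmetric):

Ae <- eigen(A, symmetric = TRUE)
R <- diag(sqrt(pmax(Ae$values, 0))) %*% t(Ae$vectors)   # t(R) %*% R, i.e. crossprod(R), reproduces the repaired A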
> Hope this helps. Spencer
It did, thank you,
Martin
Hi, Martin:
Thank you! (not only for your responses in this email thread but also for helping create R generally and many of these functions in particular.)
Spencer
Spencer Graves, PE, PhD
President and Chief Operating Officer
Structure Inspection and Monitoring, Inc.
751 Emerson Ct.
San José, CA 95126
ph: 408-655-4567