Message-ID: <728085ca71a2.4d45583f@johnshopkins.edu>
Date: 2011-01-30T17:23:27Z
From: Ravi Varadhan
Subject: Positive Definite Matrix
In-Reply-To: <4D4573A2.809@structuremonitoring.com>
Hi,
You should know about symmetry from the way in which your matrix is generated. Assuming that it is symmetric, the easiest thing would be to always use the `nearPD' function in the "Matrix" package when you suspect that your matrix could be indefinite. An important thing to keep in mind is how to define lack of positive definiteness. In `nearPD' this is governed by the parameter `eig.tol', the absolute value of the ratio of the smallest to the largest eigenvalue below which the matrix is considered indefinite. The default is 1.e-06, which I think is a bit conservative, given that the machine epsilon is around 1.e-16. I would probably use 1.e-08, which is the square root of machine epsilon.
You could still use `nearPD' if your matrix is "almost" symmetric, since nearPD will work with the symmetric part of the matrix, (A + t(A))/2.
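A minimal sketch of that workflow (the matrix here is invented for illustration; requires the Matrix package, which ships with R):

```r
library(Matrix)

# An invented symmetric matrix with one negative eigenvalue (indefinite)
A <- matrix(c( 1.0, 0.9, -0.9,
               0.9, 1.0,  0.9,
              -0.9, 0.9,  1.0), nrow = 3)

# Replace it by a nearby positive definite matrix, using the
# square-root-of-machine-epsilon tolerance suggested above
Apd <- as.matrix(nearPD(A, eig.tol = 1e-08)$mat)

# The smallest eigenvalue should now be non-negative (up to round-off)
min(eigen(Apd, symmetric = TRUE, only.values = TRUE)$values)
```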
Best,
Ravi.
____________________________________________________________________
Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University
Ph. (410) 502-2619
email: rvaradhan at jhmi.edu
----- Original Message -----
From: Spencer Graves <spencer.graves at structuremonitoring.com>
Date: Sunday, January 30, 2011 9:22 am
Subject: Re: [R] Positive Definite Matrix
To: Alex Smith <alex.smit4 at gmail.com>
Cc: r-help at r-project.org, John Fox <jfox at mcmaster.ca>, Prof Brian Ripley <ripley at stats.ox.ac.uk>
> I think the bottom line can be summarized as follows:
>
>
> 1. Give up on Cholesky factors unless you have a matrix
> you know must be symmetric and strictly positive definite. (I seem to
> recall having had problems with chol even with matrices that were
> theoretically positive or nonnegative definite but were not because of
> round off. However, I can't produce an example right now, so I'm not
> sure of that.)
>
>
> 2. If you must test whether a matrix is symmetric, try
> all.equal(A, t(A)). From the discussion, I had the impression that
> this might not always do what you want, but it should be better than
> all(A == t(A)). It is better still to decide from theory whether the
> matrix should be symmetric.
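> A small illustration of the difference (invented 2 x 2 example):

```r
# A[1,2] and A[2,1] agree in exact arithmetic but differ by round-off,
# because 0.1 + 0.2 is not exactly 0.3 in floating point
A <- matrix(c(1, 0.3, 0.1 + 0.2, 1), nrow = 2)

all(A == t(A))              # FALSE: exact comparison sees the round-off
isTRUE(all.equal(A, t(A)))  # TRUE: equal up to numerical tolerance
```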
>
>
> 3. Work with Ae = eigen(A, symmetric=TRUE) or
> Ae = eigen((A+t(A))/2, symmetric=TRUE). From there, Ae$values <-
> pmax(Ae$values, 0) ensures that the reassembled A will be positive
> semidefinite (aka nonnegative definite). If it must be positive
> definite, use Ae$values <- pmax(Ae$values, eps), with eps > 0 chosen
> to make it as positive definite as you want.
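> Point 3 in code (A here is an invented symmetric but indefinite matrix):

```r
A <- matrix(c( 1.0, 0.9, -0.9,
               0.9, 1.0,  0.9,
              -0.9, 0.9,  1.0), nrow = 3)

Ae <- eigen((A + t(A)) / 2, symmetric = TRUE)
Ae$values <- pmax(Ae$values, 0)   # clamp any negative eigenvalues to zero

# Reassemble a positive semidefinite matrix from the modified decomposition
Apsd <- Ae$vectors %*% diag(Ae$values) %*% t(Ae$vectors)
min(eigen(Apsd, symmetric = TRUE, only.values = TRUE)$values)  # ~0, up to round-off
```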
>
>
> 4. To the maximum extent feasible, work with Ae, not A.
> Prof. Ripley noted, "You can then work with [this] factorization to
> ensure that (for example) variances are always non-negative because
> they are always computed as sums of squares. This sort of thing is
> done in many of the multivariate analysis calculations in R (e.g.
> cmdscale) and in well-designed packages."
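> As a sketch of point 4, the clamped decomposition Ae from point 3 yields
> a 'square root' factor directly (same invented indefinite A as might be
> used above):

```r
A <- matrix(c( 1.0, 0.9, -0.9,
               0.9, 1.0,  0.9,
              -0.9, 0.9,  1.0), nrow = 3)
Ae <- eigen((A + t(A)) / 2, symmetric = TRUE)
Ae$values <- pmax(Ae$values, 0)

# L is a 'square root': L %*% t(L) reproduces the repaired matrix, so any
# quadratic form x' L L' x is an explicit sum of squares, hence never negative
L <- Ae$vectors %*% diag(sqrt(Ae$values))
quad <- function(x) sum((t(L) %*% x)^2)  # "variance" computed as a sum of squares
quad(c(1, -2, 3))  # non-negative by construction
```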
>
>
> Hope this helps.
> Spencer
>
>
> On 1/30/2011 3:02 AM, Alex Smith wrote:
> >Thank you for all your input but I'm afraid I don't know what the final
> >conclusion is. I will have to check whether any of the eigenvalues are
> >negative. Why would setting them to zero make a difference? Sorry to
> >drag this on.
> >
> >Thanks
> >
> >On Sat, Jan 29, 2011 at 9:00 PM, Prof Brian Ripley <ripley at stats.ox.ac.uk> wrote:
> >
> >>On Sat, 29 Jan 2011, David Winsemius wrote:
> >>
> >>
> >>>On Jan 29, 2011, at 12:17 PM, Prof Brian Ripley wrote:
> >>>
> >>> On Sat, 29 Jan 2011, David Winsemius wrote:
> >>>>
> >>>>>On Jan 29, 2011, at 10:11 AM, David Winsemius wrote:
> >>>>>
> >>>>> On Jan 29, 2011, at 9:59 AM, John Fox wrote:
> >>>>>>>Dear David and Alex,
> >>>>>>>I'd be a little careful about testing exact equality as in
> >>>>>>>all(M == t(M)), and careful as well about a test such as
> >>>>>>>all(eigen(M)$values > 0), since real arithmetic on a computer
> >>>>>>>can't be counted on to be exact.
> >>>>>>>
> >>>>>>Which was why I pointed to that thread from 2005 and the existing
> >>>>>>work that had been put into packages. If you want to substitute
> >>>>>>all.equal for all, there might be fewer numerical false alarms, but
> >>>>>>I would think there could be other potential problems that might
> >>>>>>deserve warnings.
> >>>>>>
> >>>>>In addition to the two "is." functions cited earlier there is also a
> >>>>>"posdefify" function by Maechler in the sfsmisc package: "Description:
> >>>>>From a matrix m, construct a "close" positive definite one."
> >>>>>
> >>>>But again, that is not usually what you want. There is no guarantee
> >>>>that the result is positive-definite enough that the Cholesky
> >>>>decomposition will work.
> >>>>
> >>>I don't see a Cholesky decomposition method being used in that function.
> >>>It appears to my reading to be following what would be called an
> >>>eigendecomposition.
> >>>
> >>Correct, but my point is that one does not usually want a
> >>
> >>"close" positive definite one
> >>
> >>but a 'square root'.
> >>
> >>
> >>
> >>>--
> >>>David.
> >>>
> >>>
> >>>>Give up on Cholesky factors unless you have a matrix you know must
> >>>>be symmetric and strictly positive definite, and use the
> >>>>eigendecomposition instead (setting negative eigenvalues to zero).
> >>>>You can then work with the factorization to ensure that (for example)
> >>>>variances are always non-negative because they are always computed as
> >>>>sums of squares.
> >>>>
> >>>>This sort of thing is done in many of the multivariate analysis
> >>>>calculations in R (e.g. cmdscale) and in well-designed packages.
> >>>>
> >>>>
> >>>>>--
> >>>>>David.
> >>>>>
> >>>>>>On Jan 29, 2011, at 7:58 AM, David Winsemius wrote:
> >>>>>>>>>On Jan 29, 2011, at 7:22 AM, Alex Smith wrote:
> >>>>>>>>>
> >>>>>>>>>>Hello, I am trying to determine whether a given matrix is
> >>>>>>>>>>symmetric and positive definite. The matrix has real-valued
> >>>>>>>>>>elements. I have been reading about the Cholesky method, and
> >>>>>>>>>>another method is to find the eigenvalues. I can't understand
> >>>>>>>>>>how to implement either of the two. Can someone point me in the
> >>>>>>>>>>right direction? I have used ?chol to see the help, but if the
> >>>>>>>>>>matrix is not positive definite it raises an error. I know how
> >>>>>>>>>>to get the eigenvalues, but how can I then put this into a
> >>>>>>>>>>program to check them, as they just come up as $values.
> >>>>>>>>>>Is checking that the eigenvalues are positive enough to determine
> >>>>>>>>>>whether the matrix is positive definite?
> >>>>>>>>>>
> >>>>>>>>>That is a fairly simple linear algebra fact that googling or
> >>>>>>>>>pulling out a standard reference should have confirmed.
> >>>>>>>>>
> >>>>>>>>Just to be clear (since on the basis of some off-line
> >>>>>>>>communications it did not seem to be clear): a real, symmetric
> >>>>>>>>matrix is Hermitian (and therefore all of its eigenvalues are
> >>>>>>>>real). Further, it is positive-definite if and only if its
> >>>>>>>>eigenvalues are all positive.
> >>>>>>>>qwe <- c(2, -1, 0, -1, 2, -1, 0, 1, 2)
> >>>>>>>>q <- matrix(qwe, nrow = 3)
> >>>>>>>>isPosDef <- function(M) {
> >>>>>>>>  if (all(M == t(M))) {        # first test symmetry
> >>>>>>>>    all(eigen(M)$values > 0)   # then test that all eigenvalues are positive
> >>>>>>>>  } else {
> >>>>>>>>    FALSE                      # not symmetric
> >>>>>>>>  }
> >>>>>>>>}
> >>>>>>>>
> >>>>>>>>>isPosDef(q)
> >>>>>>>>>
> >>>>>>>>[1] FALSE
> >>>>>>>>
> >>>>>>>>>m
> >>>>>>>>>>[,1] [,2] [,3] [,4] [,5]
> >>>>>>>>>>[1,] 1.0 0.0 0.5 -0.3 0.2
> >>>>>>>>>>[2,] 0.0 1.0 0.1 0.0 0.0
> >>>>>>>>>>[3,] 0.5 0.1 1.0 0.3 0.7
> >>>>>>>>>>[4,] -0.3 0.0 0.3 1.0 0.4
> >>>>>>>>>>[5,] 0.2 0.0 0.7 0.4 1.0
> >>>>>>>>>>
> >>>>>>>>>isPosDef(m)
> >>>>>>>>>
> >>>>>>>>[1] TRUE
> >>>>>>>>You might want to look at prior postings by people more
> >>>>>>>>knowledgeable than me:
> >>>>>>>>
> >>>>>>>>Or look at what are probably better solutions in available packages:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>--
> >>>>>>>>David.
> >>>>>>>>
> >>>>>>>>>this is the matrix that I know is positive definite.
> >>>>>>>>>>eigen(m)
> >>>>>>>>>>$values
> >>>>>>>>>>[1] 2.0654025 1.3391291 1.0027378 0.3956079 0.1971228
> >>>>>>>>>>$vectors
> >>>>>>>>>> [,1] [,2] [,3] [,4] [,5]
> >>>>>>>>>>[1,] -0.32843233 0.69840166 0.080549876 0.44379474 0.44824689
> >>>>>>>>>>[2,] -0.06080335 0.03564769 -0.993062427 -0.01474690 0.09296096
> >>>>>>>>>>[3,] -0.64780034 0.12089168 -0.027187620 0.08912912 -0.74636235
> >>>>>>>>>>[4,] -0.31765040 -0.68827876 0.007856812 0.60775962 0.23651023
> >>>>>>>>>>[5,] -0.60653780 -0.15040584 0.080856897 -0.65231358 0.42123526
> >>>>>>>>>>and these are the eigenvalues and eigenvectors.
> >>>>>>>>>>I thought of using
> >>>>>>>>>>eigen(m,only.values=T)
> >>>>>>>>>>$values
> >>>>>>>>>>[1] 2.0654025 1.3391291 1.0027378 0.3956079 0.1971228
> >>>>>>>>>>$vectors
> >>>>>>>>>>NULL
> >>>>>>>>>>m<- matrix(scan(textConnection("
> >>>>>>>>>>
> >>>>>>>>>1.0 0.0 0.5 -0.3 0.2
> >>>>>>>>>0.0 1.0 0.1 0.0 0.0
> >>>>>>>>>0.5 0.1 1.0 0.3 0.7
> >>>>>>>>>-0.3 0.0 0.3 1.0 0.4
> >>>>>>>>>0.2 0.0 0.7 0.4 1.0
> >>>>>>>>>")), 5, byrow=TRUE)
> >>>>>>>>>#Read 25 items
> >>>>>>>>>
> >>>>>>>>>>m
> >>>>>>>>>>
> >>>>>>>>>[,1] [,2] [,3] [,4] [,5]
> >>>>>>>>>[1,] 1.0 0.0 0.5 -0.3 0.2
> >>>>>>>>>[2,] 0.0 1.0 0.1 0.0 0.0
> >>>>>>>>>[3,] 0.5 0.1 1.0 0.3 0.7
> >>>>>>>>>[4,] -0.3 0.0 0.3 1.0 0.4
> >>>>>>>>>[5,] 0.2 0.0 0.7 0.4 1.0
> >>>>>>>>>all( eigen(m)$values>0 )
> >>>>>>>>>#[1] TRUE
> >>>>>>>>>
> >>>>>>>>>>Then I thought of using a logical expression to determine if
> >>>>>>>>>>there are negative eigenvalues, but it wouldn't work. I don't
> >>>>>>>>>>know what this error is:
> >>>>>>>>>>b <- (a < 0)
> >>>>>>>>>>Error: (list) object cannot be coerced to type 'double'
> >>>>>>>>>>
> >>>>>>>>>??? where did "a" and "b" come from?
> >>>>>>>>>
> >>>>>______________________________________________
> >>>>>R-help at r-project.org mailing list
> >>>>>
> >>>>>PLEASE do read the posting guide
> >>>>>
> >>>>>and provide commented, minimal, self-contained, reproducible code.
> >>>>>
> >>>>--
> >>>>Brian D. Ripley, ripley at stats.ox.ac.uk
> >>>>Professor of Applied Statistics,
> >>>>University of Oxford, Tel: +44 1865 272861 (self)
> >>>>1 South Parks Road, +44 1865 272866 (PA)
> >>>>Oxford OX1 3TG, UK Fax: +44 1865 272595
> >>>>
> >>>David Winsemius, MD
> >>>West Hartford, CT
> >>>
> >>