optimization problem
Dear Hans,

I agree with your comments. My intuition was that the quadratic form would be better behaved than the radical form (less nonlinear!?). So I was "hoping" to see a change in behavior when the cost function was altered from a radical (i.e. sqrt) form to a quadratic form, but I was still surprised to actually see it. I don't have a good explanation for this.

I came up with the idea of solving Klaus' problem as an LSAP problem. I did not know that the LSAP approach to this and similar problems had already been considered by Nick Higham. I asked Nick last night about this problem, thinking that he might know of more direct solutions (e.g. some kind of SVD or related factorization). He said that I should take a look at the PhD thesis of one of his students: see Section 3.5 of the thesis "Parallel Solution of SVD-Related Problems, With Applications" at

http://www.maths.manchester.ac.uk/~higham/misc/past-students.php

This thesis proposes algorithms for the current problem and different versions of it under the heading of "Procrustes-type" problems. I have to read this and get a better handle on it. However, I will not be able to get to it for another two weeks. If you have any insights from it in the meanwhile, do share them with us.

Best regards,
Ravi.

____________________________________________________________________
Ravi Varadhan, Ph.D.
Assistant Professor, Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University
Ph. (410) 502-2619
email: rvaradhan at jhmi.edu

----- Original Message -----
From: "Hans W. Borchers" <hwborchers at googlemail.com>
Date: Sunday, January 17, 2010 3:54 am
Subject: Re: [R] optimization problem
To: r-help at stat.math.ethz.ch
Ravi Varadhan <rvaradhan <at> jhmi.edu> writes:
Interesting! Now, if I change the "cost matrix", D, in the LSAP formulation slightly so that it is quadratic, it finds the best solution to your example:
Dear Ravi,
I think your solution is ingenious, but after the discussion with
Erwin Kalvelagen I found two things quite puzzling:
(1) Why is solve_LSAP(D) different from solve_LSAP(D^2) in Erwin's
example? I had believed that just squaring the distance matrix would
make no difference to solving the LSAP -- except perhaps for some
numerical instability, which does not seem to be the case here.
(2) If you change rows and columns in the definition of D, that is
D[j, i] <- sqrt(sum((B[, j] - A[, i])^2))
then the solution to Erwin's example comes out right even while
keeping the square root.
Do you have explanations for these 'phenomena'? Otherwise, I think,
there will remain some doubts about this approach.
Very best
Hans Werner
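[A possible explanation for (1), added for reference: solve_LSAP minimizes the sum of the selected cost entries. With D it therefore minimizes the sum of the row distances, while with D^2 it minimizes the sum of the squared row distances, and only the latter equals the squared Frobenius norm ||PA - B||^2_F. These are genuinely different objectives and can have different minimizers. A hand-constructed 2x2 cost matrix (illustrative values only, independent of Erwin's data) where the optimal assignments differ:]

```r
# Hand-picked 2x2 cost matrix: the identity assignment wins under
# linear costs, while the swap assignment wins under squared costs.
D <- matrix(c(0, 3,
              3, 5), nrow = 2, byrow = TRUE)

lin.id   <- D[1, 1] + D[2, 2]      # identity: 0 + 5  = 5
lin.swap <- D[1, 2] + D[2, 1]      # swap:     3 + 3  = 6
sq.id    <- D[1, 1]^2 + D[2, 2]^2  # identity: 0 + 25 = 25
sq.swap  <- D[1, 2]^2 + D[2, 1]^2  # swap:     9 + 9  = 18

lin.id < lin.swap   # TRUE: identity is optimal for the linear costs
sq.swap < sq.id     # TRUE: swap is optimal for the squared costs
```

So squaring the cost matrix is not a monotone transformation of the *total* assignment cost, even though it is monotone entrywise; this is why solve_LSAP(D) and solve_LSAP(D^2) can disagree.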
pMatrix.min <- function(A, B) {
    # Finds the permutation P of A such that ||PA - B|| is minimum
    # in the Frobenius norm.
    # Uses the linear-sum assignment problem (LSAP) solver
    # in the "clue" package; call library(clue) first.
    # Returns P %*% A and the permutation vector `pvec' such that
    # A[pvec, ] is the permutation of A closest to B.
    n <- nrow(A)
    D <- matrix(NA, n, n)
    for (i in 1:n) {
        for (j in 1:n) {
            # D[j, i] <- sqrt(sum((B[j, ] - A[i, ])^2))
            D[j, i] <- sum((B[j, ] - A[i, ])^2)  # this is better
        }
    }
    vec <- c(solve_LSAP(D))
    list(A = A[vec, ], pvec = vec)
}
> X <- pMatrix.min(A, B)
> X$pvec
[1] 6 1 3 2 4 5
> sqrt(sum((X$A - B)^2))  # Frobenius distance ||PA - B||_F
[1] 10.50172
This should be fine. Any counter-examples to this?!

Best,
Ravi.
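[One way to probe for counter-examples, sketched here under the assumption that n is small enough to enumerate all n! permutations: compare the LSAP answer against a brute-force search. The matrices A and B below are random placeholders, and `perms` is a small helper written for this check.]

```r
library(clue)  # for solve_LSAP

set.seed(1)
n <- 4
A <- matrix(rnorm(n * n), n, n)
B <- matrix(rnorm(n * n), n, n)

# LSAP solution with squared costs, as in pMatrix.min
D <- matrix(NA, n, n)
for (i in 1:n)
    for (j in 1:n)
        D[j, i] <- sum((B[j, ] - A[i, ])^2)
lsap.vec <- c(solve_LSAP(D))

# Brute force over all n! row permutations of A
perms <- function(n) {
    if (n == 1) return(matrix(1, 1, 1))
    p <- perms(n - 1)
    do.call(rbind, lapply(1:n, function(k) cbind(k, p + (p >= k))))
}
P <- perms(n)
d <- apply(P, 1, function(v) sqrt(sum((A[v, ] - B)^2)))

all.equal(sqrt(sum((A[lsap.vec, ] - B)^2)), min(d))  # TRUE: LSAP attains the minimum
```

Since the squared-cost total equals ||PA - B||^2_F and the LSAP solver is exact, the two distances must agree; any mismatch here would be a genuine counter-example.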