Solving the Multi-Way Matching Problem by Permutation Synchronization


Deepti Pachauri†   Risi Kondor§   Vikas Singh†
†University of Wisconsin Madison   §The University of Chicago
[email protected]   [email protected]   [email protected]
http://pages.cs.wisc.edu/~pachauri/perm-sync

Abstract

The problem of matching not just two, but $m$ different sets of objects to each other arises in many contexts, including finding the correspondence between feature points across multiple images in computer vision. At present it is usually solved by matching the sets pairwise, in series. In contrast, we propose a new method, Permutation Synchronization, which finds all the matchings jointly, in one shot, via a relaxation to eigenvector decomposition. The resulting algorithm is both computationally efficient and, as we demonstrate with theoretical arguments as well as experimental results, much more stable to noise than previous methods.

1 Introduction

Finding the correct bijection between two sets of objects $X = \{x_1, x_2, \dots, x_n\}$ and $X' = \{x'_1, x'_2, \dots, x'_n\}$ is a fundamental problem in computer science, arising in a wide range of contexts [1].
In this paper, we consider its generalization to matching not just two, but $m$ different sets $X_1, X_2, \dots, X_m$. Our primary motivation and running example is the classic problem of matching landmarks (feature points) across many images of the same object in computer vision, which is a key ingredient of image registration [2], recognition [3, 4], stereo [5], shape matching [6, 7], and structure from motion (SFM) [8, 9]. However, our approach is fully general and equally applicable to problems such as matching multiple graphs [10, 11].

Presently, multi-matching is usually solved sequentially, by first finding a putative permutation $\tau_{12}$ matching $X_1$ to $X_2$, then a permutation $\tau_{23}$ matching $X_2$ to $X_3$, and so on, up to $\tau_{m-1,m}$. While one can conceive of various strategies for optimizing this process, the fact remains that when the data are noisy, a single error in the sequence will typically create a large number of erroneous pairwise matches [12, 13, 14]. In contrast, in this paper we describe a new method, Permutation Synchronization, that estimates the entire matrix $(\tau_{ji})_{i,j=1}^m$ of assignments jointly, in a single shot, and is therefore much more robust to noise.

For consistency, the recovered matchings must satisfy $\tau_{kj}\tau_{ji} = \tau_{ki}$.
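The fragility of sequential matching is easy to see in a small simulation (our own illustration, not an experiment from the paper). Permutations are stored as index arrays; a single corrupted link $\tau_{k,k-1}$ in the chain misassigns the same number of objects in every downstream matching, while the sets before the error remain correct:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 6

# Hypothetical ground truth: sigma_i realigns a reference ordering to set i,
# so the true pairwise matching is tau_ji = sigma_j o sigma_i^{-1}.
sigmas = [rng.permutation(n) for _ in range(m)]

def compose(p, q):
    # (p o q)(x) = p(q(x)), with permutations stored as index arrays
    return p[q]

def inverse(p):
    inv = np.empty_like(p)
    inv[p] = np.arange(len(p))
    return inv

def tau(j, i):
    return compose(sigmas[j], inverse(sigmas[i]))

# Sequential multi-matching chains pairwise estimates:
# set k is matched to set 0 via tau_{k,k-1} o ... o tau_{1,0}.
chain = [np.arange(n)]
for k in range(1, m):
    step = tau(k, k - 1)
    if k == 2:                      # corrupt this one pairwise matching
        step = step[rng.permutation(n)]
    chain.append(compose(step, chain[-1]))

# Count how many objects each chained matching assigns incorrectly.
errors = [int(np.sum(chain[k] != tau(k, 0))) for k in range(m)]
```

Because the corruption is a fixed permutation error conjugated down the chain, every matching after the bad link is wrong on exactly the same number of objects, while `errors[0]` and `errors[1]` stay zero.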
While finding an optimal matrix of permutations satisfying these relations is, in general, combinatorially hard, we show that for the most natural choice of loss function the problem has a natural relaxation to just finding the $n$ leading eigenvectors of the cost matrix. In addition to vastly reducing the computational cost, using recent results from random matrix theory, we show that the eigenvectors are very effective at aggregating information from all $\binom{m}{2}$ pairwise matches, and therefore make the algorithm surprisingly robust to noise. Our experiments show that in landmark matching problems Permutation Synchronization can recover the correct correspondence between landmarks across a large number of images with small error, even when a significant fraction of the pairwise matches are incorrect.

The term “synchronization” is inspired by the recent celebrated work of Singer et al. on a similar problem involving finding the right rotations (rather than matchings) between electron microscopic images [15][16][17]. To the best of our knowledge, the present work is the first in which the synchronization framework is applied to a discrete group.

2 Synchronizing permutations

Consider a collection of $m$ sets $X_1, X_2, \dots, X_m$ of $n$ objects each, $X_i = \{x_1^i, x_2^i, \dots, x_n^i\}$, such that for each pair $(X_i, X_j)$, each $x_p^i$ in $X_i$ has a natural counterpart $x_q^j$ in $X_j$. For example, in computer vision, given $m$ images of the same scene taken from different viewpoints, $x_1^i, x_2^i, \dots, x_n^i$ might be $n$ visual landmarks detected in image $i$, while $x_1^j, x_2^j, \dots, x_n^j$ are $n$ landmarks detected in image $j$, in which case $x_p^i \sim x_q^j$ signifies that $x_p^i$ and $x_q^j$ correspond to the same physical feature.

Since the correspondence between $X_i$ and $X_j$ is a bijection, one can write it as $x_p^i \sim x_{\tau_{ji}(p)}^j$ for some permutation $\tau_{ji}\colon \{1, 2, \dots, n\} \to \{1, 2, \dots, n\}$.
Key to our approach to solving multi-matching is that with respect to the natural definition of multiplication, $(\tau'\tau)(i) := \tau'(\tau(i))$, the $n!$ possible permutations of $\{1, 2, \dots, n\}$ form a group, called the symmetric group of degree $n$, denoted $S_n$.

We say that the system of correspondences between $X_1, X_2, \dots, X_m$ is consistent if $x_p^i \sim x_q^j$ and $x_q^j \sim x_r^k$ together imply that $x_p^i \sim x_r^k$. In terms of permutations this is equivalent to requiring that the array $(\tau_{ij})_{i,j=1}^m$ satisfy

$$\tau_{kj}\tau_{ji} = \tau_{ki} \qquad \forall i, j, k. \tag{1}$$

Alternatively, given some reference ordering of $x_1, x_2, \dots, x_n$, we can think of each $X_i$ as realizing its own permutation $\sigma_i$ (in the sense of $x_\ell \sim x_{\sigma_i(\ell)}^i$), and then $\tau_{ji}$ becomes

$$\tau_{ji} = \sigma_j \sigma_i^{-1}. \tag{2}$$

The existence of permutations $\sigma_1, \sigma_2, \dots, \sigma_m$ satisfying (2) is equivalent to requiring that $(\tau_{ji})_{i,j=1}^m$ satisfy (1). Thus, assuming consistency, solving the multi-matching problem reduces to finding just $m$ different permutations, rather than $O(m^2)$. However, the $\sigma_i$'s are of course not directly observable. Rather, in a typical application we have some tentative (noisy) matchings $\tilde\tau_{ji}$ which we must synchronize into the form (2) by finding the underlying $\sigma_1, \dots, \sigma_m$.

Given $(\tilde\tau_{ji})_{i,j=1}^m$ and some appropriate distance metric $d$ between permutations, we formalize Permutation Synchronization as the combinatorial optimization problem

$$\underset{\sigma_1, \sigma_2, \dots, \sigma_m \in S_n}{\text{minimize}} \;\; \sum_{i,j=1}^{m} d(\sigma_j\sigma_i^{-1}, \tilde\tau_{ji}). \tag{3}$$

The computational cost of solving (3) depends critically on the form of the distance metric $d$. In this paper we limit ourselves to the simplest choice

$$d(\sigma, \tau) = n - \langle P(\sigma), P(\tau)\rangle, \tag{4}$$

where $P(\sigma) \in \mathbb{R}^{n \times n}$ are the usual permutation matrices

$$[P(\sigma)]_{q,p} := \begin{cases} 1 & \text{if } \sigma(p) = q\\ 0 & \text{otherwise,} \end{cases}$$

and $\langle A, B\rangle$ is the matrix inner product $\langle A, B\rangle := \operatorname{tr}(A^\top B) = \sum_{p,q=1}^{n} A_{p,q}\, B_{p,q}$.

The distance (4) simply counts the number of objects assigned differently by $\sigma$ and $\tau$. Furthermore, it allows us to rewrite (3) as $\text{maximize}_{\sigma_1, \sigma_2, \dots, \sigma_m} \sum_{i,j=1}^{m} \langle P(\sigma_j\sigma_i^{-1}), P(\tilde\tau_{ji})\rangle$, suggesting the generalization

$$\underset{\sigma_1, \sigma_2, \dots, \sigma_m}{\text{maximize}} \;\; \sum_{i,j=1}^{m} \big\langle P(\sigma_j\sigma_i^{-1}),\, T_{ji} \big\rangle, \tag{5}$$

where the $T_{ji}$'s can now be any matrices, subject to $T_{ji}^\top = T_{ij}$. Intuitively, each $T_{ji}$ is an objective matrix, the $(q,p)$ element of which captures the utility of matching $x_p^i$ in $X_i$ to $x_q^j$ in $X_j$. This generalization is very useful when the assignments of the different $x_p^i$'s have different confidences. For example, in the landmark matching case, if, due to occlusion or for some other reason, the counterpart of $x_p^i$ is not present in $X_j$, then we can simply set $[T_{ji}]_{q,p} = 0$ for all $q$.

2.1 Representations and eigenvectors

The generalized Permutation Synchronization problem (5) can also be written as

$$\underset{\sigma_1, \sigma_2, \dots, \sigma_m}{\text{maximize}} \;\; \langle \mathcal{P}, \mathcal{T}\rangle, \tag{6}$$

where

$$\mathcal{P} = \begin{pmatrix} P(\sigma_1\sigma_1^{-1}) & \dots & P(\sigma_1\sigma_m^{-1})\\ \vdots & \ddots & \vdots\\ P(\sigma_m\sigma_1^{-1}) & \dots & P(\sigma_m\sigma_m^{-1}) \end{pmatrix} \qquad\text{and}\qquad \mathcal{T} = \begin{pmatrix} T_{11} & \dots & T_{1m}\\ \vdots & \ddots & \vdots\\ T_{m1} & \dots & T_{mm} \end{pmatrix}. \tag{7}$$

A matrix valued function $\rho\colon S_n \to \mathbb{C}^{d \times d}$ is said to be a representation of the symmetric group if $\rho(\sigma_2)\rho(\sigma_1) = \rho(\sigma_2\sigma_1)$ for any pair of permutations $\sigma_1, \sigma_2 \in S_n$. Clearly, $P$ is a representation of $S_n$ (actually, the so-called defining representation), since $P(\sigma_2\sigma_1) = P(\sigma_2)P(\sigma_1)$. Moreover, $P$ is a so-called orthogonal representation, because each $P(\sigma)$ is real and $P(\sigma^{-1}) = P(\sigma)^\top$. Our fundamental observation is that this implies that $\mathcal{P}$ has a very special form.
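Before stating the consequence formally, the two ingredients just introduced can be checked numerically in a short sketch (the helpers `perm_matrix` and `dist` are our own, not from the paper): the metric (4) counts exactly the objects on which two permutations disagree, and $P$ really is a representation, i.e., $P(\sigma_2\sigma_1) = P(\sigma_2)P(\sigma_1)$.

```python
import numpy as np

def perm_matrix(sigma):
    # [P(sigma)]_{q,p} = 1 iff sigma(p) = q, with sigma stored as an index array
    n = len(sigma)
    P = np.zeros((n, n))
    P[sigma, np.arange(n)] = 1.0
    return P

def dist(sigma, tau):
    # d(sigma, tau) = n - <P(sigma), P(tau)>, as in equation (4)
    n = len(sigma)
    return n - np.trace(perm_matrix(sigma).T @ perm_matrix(tau))

sigma = np.array([1, 0, 2, 3])   # transposes the first two objects
tau = np.arange(4)               # identity permutation
assert dist(sigma, tau) == np.sum(sigma != tau) == 2

# P is the defining representation: P(s2 o s1) = P(s2) P(s1),
# where composition of index arrays is s2[s1].
s1, s2 = np.array([2, 0, 1, 3]), np.array([1, 2, 3, 0])
assert np.array_equal(perm_matrix(s2[s1]), perm_matrix(s2) @ perm_matrix(s1))
```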
Proposition 1. The synchronization matrix $\mathcal{P}$ is of rank $n$ and is of the form $\mathcal{P} = U \cdot U^\top$, where

$$U = \begin{pmatrix} P(\sigma_1)\\ \vdots\\ P(\sigma_m) \end{pmatrix}.$$

Proof. From $P$ being a representation of $S_n$,

$$\mathcal{P} = \begin{pmatrix} P(\sigma_1)P(\sigma_1)^\top & \dots & P(\sigma_1)P(\sigma_m)^\top\\ \vdots & \ddots & \vdots\\ P(\sigma_m)P(\sigma_1)^\top & \dots & P(\sigma_m)P(\sigma_m)^\top \end{pmatrix}, \tag{8}$$

implying $\mathcal{P} = U \cdot U^\top$. Since $U$ has $n$ columns, $\operatorname{rank}(\mathcal{P})$ is at most $n$. This rank is achieved because $P(\sigma_1)$ is an orthogonal matrix, therefore it has linearly independent columns, and consequently the columns of $U$ cannot be linearly dependent. ∎
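Proposition 1 is easy to verify numerically. In the sketch below (our own check; `perm_matrix` is an assumed helper), $\mathcal{P} = UU^\top$ is built from random permutations and turns out to have rank $n$ with every nonzero eigenvalue equal to $m$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 5

def perm_matrix(sigma):
    P = np.zeros((n, n))
    P[sigma, np.arange(n)] = 1.0
    return P

# Stack the P(sigma_i) blocks into U and form the synchronization matrix.
U = np.vstack([perm_matrix(rng.permutation(n)) for _ in range(m)])
P_sync = U @ U.T                                   # mn x mn

eigvals = np.sort(np.linalg.eigvalsh(P_sync))[::-1]
assert np.linalg.matrix_rank(P_sync) == n
assert np.allclose(eigvals[:n], m)                 # n eigenvalues equal to m
assert np.allclose(eigvals[n:], 0.0)               # the remaining ones vanish
```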
Corollary 1. Letting $[P(\sigma_i)]_\ell$ denote the $\ell$'th column of $P(\sigma_i)$, the normalized columns of $U$,

$$u_\ell = \frac{1}{\sqrt{m}} \begin{pmatrix} [P(\sigma_1)]_\ell\\ \vdots\\ [P(\sigma_m)]_\ell \end{pmatrix}, \qquad \ell = 1, \dots, n, \tag{9}$$

are mutually orthogonal unit eigenvectors of $\mathcal{P}$ with the same eigenvalue $m$, and together span the row/column space of $\mathcal{P}$.

Proof. The columns of $U$ are orthogonal because the columns of each constituent $P(\sigma_i)$ are orthogonal. The normalization follows from each column of $P(\sigma_i)$ having norm 1. The rest follows by Proposition 1. ∎

2.2 An easy relaxation

Solving (6) is computationally difficult, because it involves searching the combinatorial space of a combination of $m$ permutations. However, Proposition 1 and its corollary suggest relaxing it to

$$\underset{\mathcal{P} \in \mathbb{M}_m^n}{\text{maximize}} \;\; \langle \mathcal{P}, \mathcal{T}\rangle, \tag{10}$$

where $\mathbb{M}_m^n$ is the set of $mn$-dimensional rank $n$ symmetric matrices whose non-zero eigenvalues are $m$. This is now just a generalized Rayleigh problem, the solution of which is simply

$$\mathcal{P} = m \sum_{\ell=1}^{n} v_\ell\, v_\ell^\top, \tag{11}$$

where $v_1, v_2, \dots, v_n$ are the $n$ leading normalized eigenvectors of $\mathcal{T}$. Equivalently, $\mathcal{P} = U \cdot U^\top$, where

$$U = \sqrt{m}\, \begin{pmatrix} | & | & & |\\ v_1 & v_2 & \dots & v_n\\ | & | & & | \end{pmatrix}. \tag{12}$$

Thus, in contrast to the original combinatorial problem, (10) can be solved by just finding the $n$ leading eigenvectors of $\mathcal{T}$.

Of course, from $\mathcal{P}$ we must still recover the individual permutations $\sigma_1, \sigma_2, \dots, \sigma_m$. However, as long as $\mathcal{P}$ is relatively close in form to (7), this is quite a simple and stable process. One way to do it is to let each $\sigma_i$ be the permutation that best matches the $(i,1)$ block of $\mathcal{P}$ in the linear assignment sense,

$$\sigma_i = \underset{\sigma \in S_n}{\arg\max}\; \langle P(\sigma), [\mathcal{P}]_{i,1}\rangle,$$

which is solved in $O(n^3)$ time by the Kuhn–Munkres algorithm [18], and then set $\tau_{ji} = \sigma_j\sigma_i^{-1}$, which will then satisfy the consistency relations. The pseudocode of the full algorithm is given in Algorithm 1.

Algorithm 1 Permutation Synchronization
  Input: the objective matrix $\mathcal{T}$
  Compute the $n$ leading eigenvectors $(v_1, v_2, \dots, v_n)$ of $\mathcal{T}$ and set $U = \sqrt{m}\,[v_1, v_2, \dots, v_n]$
  for $i = 1$ to $m$ do
    $P_{i1} = U((i{-}1)n{+}1 : in,\; 1 : n)\; U(1 : n,\; 1 : n)^\top$
    $\sigma_i = \arg\max_{\sigma \in S_n} \langle P_{i1}, \sigma\rangle$   [Kuhn–Munkres]
  end for
  for each $(i, j)$ do
    $\tau_{ji} = \sigma_j\sigma_i^{-1}$
  end for
  Output: the matrix $(\tau_{ji})_{i,j=1}^m$ of globally consistent matchings

3 Analysis of the relaxed algorithm

Let us now investigate under what conditions we can expect the relaxation (10) to work well, in particular, in what cases we can expect the recovered matchings to be exact.
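As a concrete reference point for this analysis, here is a minimal NumPy/SciPy sketch of Algorithm 1 (the function name `permutation_synchronization` and the helpers are ours; `scipy.optimize.linear_sum_assignment` plays the role of Kuhn–Munkres). On a noise-free objective matrix built from a consistent ground truth it recovers every matching exactly:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def perm_matrix(sigma):
    n = len(sigma)
    P = np.zeros((n, n))
    P[sigma, np.arange(n)] = 1.0   # [P(sigma)]_{q,p} = 1 iff sigma(p) = q
    return P

def inverse(sigma):
    inv = np.empty_like(sigma)
    inv[sigma] = np.arange(len(sigma))
    return inv

def permutation_synchronization(T, m, n):
    # n leading eigenvectors of the symmetric mn x mn objective matrix T
    vals, vecs = np.linalg.eigh(T)
    U = np.sqrt(m) * vecs[:, np.argsort(vals)[::-1][:n]]
    sigmas = []
    for i in range(m):
        Pi1 = U[i*n:(i+1)*n] @ U[:n].T           # approximates P(sigma_i sigma_1^{-1})
        # Kuhn-Munkres step: maximize <Pi1, P(sigma)> = sum_p Pi1[sigma(p), p]
        _, col = linear_sum_assignment(-Pi1.T)
        sigmas.append(col)
    # tau_ji = sigma_j o sigma_i^{-1} satisfies the consistency relations by construction
    return {(j, i): sigmas[j][inverse(sigmas[i])]
            for j in range(m) for i in range(m)}

# Noise-free sanity check against a consistent ground truth.
rng = np.random.default_rng(2)
n, m = 5, 4
gt = [rng.permutation(n) for _ in range(m)]
true_tau = lambda j, i: gt[j][inverse(gt[i])]
T = np.block([[perm_matrix(true_tau(j, i)) for i in range(m)] for j in range(m)])
taus = permutation_synchronization(T, m, n)
assert all(np.array_equal(taus[(j, i)], true_tau(j, i))
           for j in range(m) for i in range(m))
```

Exact recovery here is expected: in the noise-free case the $n$-dimensional leading eigenspace of $\mathcal{T}$ coincides with that of the true $\mathcal{P}$, so each block $P_{i1}$ is (up to numerical error) an exact permutation matrix.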
</p><p>In the absence of noise, i.e., when T<sub style="top: 0.1245em;">ji </sub>= P(τ˜ )&nbsp;for some array (τ˜ )<sub style="top: 0.1245em;">j,i </sub>of hidden permutations satisfying the consistency relations (1), T will <sup style="top: -0.7888em;">j</sup>h<sup style="top: -0.7888em;">i</sup>ave precisely the sa<sup style="top: -0.7888em;">j</sup>m<sup style="top: -0.7888em;">i </sup>e structure as described by Proposition 1 for P. In particular, it will have n mutually orthogonal eigenvectors </p><p></p><ul style="display: flex;"><li style="flex:1"></li><li style="flex:1"></li></ul><p></p><p>[P(σ˜<sub style="top: 0.1245em;">1</sub>)]<sub style="top: 0.1245em;">` </sub><br>1</p>
