Results 1-10 of 120
A Graduated Assignment Algorithm for Graph Matching
, 1996
"... A graduated assignment algorithm for graph matching is presented which is fast and accurate even in the presence of high noise. By combining graduated nonconvexity, twoway (assignment) constraints, and sparsity, large improvements in accuracy and speed are achieved. Its low order computational comp ..."
Abstract

Cited by 285 (15 self)
 Add to MetaCart
A graduated assignment algorithm for graph matching is presented which is fast and accurate even in the presence of high noise. By combining graduated nonconvexity, two-way (assignment) constraints, and sparsity, large improvements in accuracy and speed are achieved. Its low-order computational complexity [O(lm), where l and m are the number of links in the two graphs] and robustness in the presence of noise offer advantages over traditional combinatorial approaches. The algorithm, not restricted to any special class of graph, is applied to subgraph isomorphism, weighted graph matching, and attributed relational graph matching. To illustrate the performance of the algorithm, attributed relational graphs derived from objects are matched. Then, results from twenty-five thousand experiments conducted on 100-node random graphs of varying types (graphs with only zero-one links, weighted graphs, and graphs with node attributes and multiple link types) are reported. No comparable results have...
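The interplay of graduated nonconvexity and two-way constraints described in this abstract can be sketched as a softassign loop: exponentiate the compatibilities at an annealed control parameter and restore the row/column constraints by alternating normalization. This is an illustrative sketch for a fixed compatibility matrix Q, not the paper's full graph-matching implementation; all parameter names and schedule values below are assumptions.

```python
import numpy as np

def graduated_assignment(Q, beta0=0.5, beta_rate=1.075, beta_max=10.0,
                         sinkhorn_iters=30):
    """Softassign sketch over a compatibility matrix Q (n1 x n2),
    annealing the control parameter beta (illustrative schedule)."""
    n1, n2 = Q.shape
    M = np.ones((n1, n2)) / n2               # soft match matrix
    beta = beta0
    while beta < beta_max:
        M = np.exp(beta * Q)                 # softmax-like update
        for _ in range(sinkhorn_iters):      # two-way (assignment) constraints
            M /= M.sum(axis=1, keepdims=True)   # row normalization
            M /= M.sum(axis=0, keepdims=True)   # column normalization
        beta *= beta_rate                    # graduated nonconvexity: raise beta
    return M
```

As beta grows, the doubly stochastic match matrix hardens toward a permutation, which is the annealing behaviour the abstract attributes to graduated nonconvexity.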
A New Point Matching Algorithm for Non-Rigid Registration
, 2002
"... Featurebased methods for nonrigid registration frequently encounter the correspondence problem. Regardless of whether points, lines, curves or surface parameterizations are used, featurebased nonrigid matching requires us to automatically solve for correspondences between two sets of features. I ..."
Abstract

Cited by 235 (2 self)
 Add to MetaCart
Feature-based methods for non-rigid registration frequently encounter the correspondence problem. Regardless of whether points, lines, curves or surface parameterizations are used, feature-based non-rigid matching requires us to automatically solve for correspondences between two sets of features. In addition, there could be many features in either set that have no counterparts in the other. This outlier rejection problem further complicates an already difficult correspondence problem. We formulate feature-based non-rigid registration as a non-rigid point matching problem. After a careful review of the problem and an in-depth examination of two types of methods previously designed for rigid robust point matching (RPM), we propose a new general framework for non-rigid point matching. We consider it a general framework because it does not depend on any particular form of spatial mapping. We have also developed an algorithm, the TPS-RPM algorithm, with the thin-plate spline (TPS) as the parameterization of the non-rigid spatial mapping and the softassign for the correspondence. The performance of the TPS-RPM algorithm is demonstrated and validated in a series of carefully designed synthetic experiments. In each of these experiments, an empirical comparison with the popular iterated closest point (ICP) algorithm is also provided. Finally, we apply the algorithm to the problem of non-rigid registration of cortical anatomical structures, which is required in brain mapping. While these results are somewhat preliminary, they clearly demonstrate the applicability of our approach to real-world tasks involving feature-based non-rigid registration.
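The "general framework" claim, that the method does not depend on any particular spatial mapping, can be illustrated with a toy alternation in which a simple affine map stands in for the thin-plate spline: soft correspondences at an annealed temperature, then a least-squares map update. The annealing schedule and parameter values below are assumptions, not the paper's.

```python
import numpy as np

def soft_point_matching(X, Y, T0=1.0, anneal=0.9, n_iter=50):
    """Toy RPM-style alternation: deterministic-annealing soft
    correspondences plus a spatial-map update (affine stand-in for TPS)."""
    A = np.eye(X.shape[1])
    t = np.zeros(X.shape[1])
    T = T0
    for _ in range(n_iter):
        Xm = X @ A.T + t
        D = ((Xm[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        M = np.exp(-D / T)
        M /= M.sum(axis=1, keepdims=True)        # soft correspondences
        V = M @ Y                                # per-point weighted targets
        Xh = np.hstack([X, np.ones((len(X), 1))])
        W, *_ = np.linalg.lstsq(Xh, V, rcond=None)  # least-squares map update
        A, t = W[:-1].T, W[-1]
        T *= anneal                              # annealing schedule
    return A, t, M
```

Swapping the affine solve for a regularized TPS solve recovers a TPS-RPM-style algorithm; the alternation itself is unchanged, which is the point of the framework.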
A Double-Loop Algorithm to Minimize the Bethe and Kikuchi Free Energies
 NEURAL COMPUTATION
, 2001
"... Recent work (Yedidia, Freeman, Weiss [22]) has shown that stable points of belief propagation (BP) algorithms [12] for graphs with loops correspond to extrema of the Bethe free energy [3]. These BP algorithms have been used to obtain good solutions to problems for which alternative algorithms fail t ..."
Abstract

Cited by 108 (4 self)
 Add to MetaCart
Recent work (Yedidia, Freeman, Weiss [22]) has shown that stable points of belief propagation (BP) algorithms [12] for graphs with loops correspond to extrema of the Bethe free energy [3]. These BP algorithms have been used to obtain good solutions to problems for which alternative algorithms fail to work [4], [5], [10], [11]. In this paper we first obtain the dual energy of the Bethe free energy, which throws light on the BP algorithm. Next we introduce a discrete iterative algorithm which we prove is guaranteed to converge to a minimum of the Bethe free energy. We call this the double-loop algorithm because it contains an inner and an outer loop. It extends a class of mean field theory algorithms developed by [7], [8] and, in particular, [13]. Moreover, the double-loop algorithm is formally very similar to BP, which may help us understand when BP converges. Finally, we extend all our results to the Kikuchi approximation, which includes the Bethe free energy as a special case [3]. (Yedidia et al. [22] showed that a "generalized belief propagation" algorithm also has its fixed points at extrema of the Kikuchi free energy.) We are able not only to obtain a dual formulation for the Kikuchi free energy but also to obtain a double-loop discrete iterative algorithm that is guaranteed to converge to a minimum of the Kikuchi free energy. It is anticipated that these double-loop algorithms will be useful for solving optimization problems in computer vision and other applications.
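A minimal sum-product BP sketch makes the object under discussion concrete: its stable points are the Bethe free-energy extrema the abstract refers to. The data layout (unary potentials `phi`, pairwise potentials `psi`) is an assumption for illustration; on a tree the iteration reduces to exact marginalization.

```python
import numpy as np

def loopy_bp(phi, psi, n_iter=200):
    """Sum-product belief propagation on a pairwise model.
    phi: list of unary potential arrays; psi: dict {(i, j): 2-D array}
    with i < j. Returns normalized beliefs (approximate marginals)."""
    msgs = {}
    for (i, j) in psi:
        msgs[(i, j)] = np.ones(len(phi[j]))
        msgs[(j, i)] = np.ones(len(phi[i]))
    for _ in range(n_iter):
        new = {}
        for (s, t) in msgs:
            pot = psi[(s, t)] if (s, t) in psi else psi[(t, s)].T
            prod = phi[s].copy()
            for (k, l) in msgs:              # product of incoming messages
                if l == s and k != t:        # excluding the one from t
                    prod = prod * msgs[(k, l)]
            m = prod @ pot                   # marginalize over x_s
            new[(s, t)] = m / m.sum()
        msgs = new
    beliefs = []
    for i, p in enumerate(phi):
        b = p.copy()
        for (k, l) in msgs:
            if l == i:
                b = b * msgs[(k, l)]
        beliefs.append(b / b.sum())
    return beliefs
```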
New Algorithms for 2D and 3D Point Matching: Pose Estimation and Correspondence
"... A fundamental open problem in computer visiondetermining pose and correspondence between two sets of points in spaceis solved with a novel, fast [O(nm)], robust and easily implementable algorithm. The technique works on noisy 2D or 3D point sets that may be of unequal sizes and may differ by n ..."
Abstract

Cited by 85 (19 self)
 Add to MetaCart
A fundamental open problem in computer vision, determining pose and correspondence between two sets of points in space, is solved with a novel, fast [O(nm)], robust and easily implementable algorithm. The technique works on noisy 2D or 3D point sets that may be of unequal sizes and may differ by non-rigid transformations. Using a combination of optimization techniques such as deterministic annealing and the softassign, which have recently emerged out of the recurrent neural network/statistical physics framework, analog objective functions describing the problems are minimized. Over thirty thousand experiments, on randomly generated point sets with varying amounts of noise and missing and spurious points, and on handwritten character sets, demonstrate the robustness of the algorithm. Keywords: point matching, pose estimation, correspondence, neural networks, optimization, softassign, deterministic annealing, affine. 1 Introduction. Matching the representations of two images has long...
Symmetry-based Indexing of Image Databases
 J. VISUAL COMMUNICATION AND IMAGE REPRESENTATION
, 1998
"... The use of shape as a cue for indexing into pictorial databases has been traditionally based on global invariant statistics and deformable templates, on the one hand, and local edge correlation on the other. This paper proposes an intermediate approach based on a characterization of the symmetry in ..."
Abstract

Cited by 76 (5 self)
 Add to MetaCart
The use of shape as a cue for indexing into pictorial databases has traditionally been based on global invariant statistics and deformable templates, on the one hand, and local edge correlation on the other. This paper proposes an intermediate approach based on a characterization of the symmetry in edge maps. The use of symmetry matching as a joint correlation measure between pairs of edge elements further constrains the comparison of edge maps. In addition, a natural organization of groups of symmetry into a hierarchy leads to a graph-based representation of the relational structure of components of shape that allows for deformations by changing attributes of this relational graph. A graduated assignment graph matching algorithm is used to match symmetry structure in images to stored prototypes or sketches. The results of matching sketches and greyscale images against a small database consisting of a variety of fish, planes, tools, etc., are depicted.
A Deterministic Strongly Polynomial Algorithm for Matrix Scaling and Approximate Permanents
"... We present a deterministic strongly polynomial algorithm that computes the permanent of a nonnegative n x n matrix to within a multiplicative factor of e^n. To this end ..."
Abstract

Cited by 63 (8 self)
 Add to MetaCart
We present a deterministic strongly polynomial algorithm that computes the permanent of a nonnegative n x n matrix to within a multiplicative factor of e^n. To this end
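The standard matrix-scaling primitive behind such permanent approximations is Sinkhorn iteration: alternately rescale rows and columns of a nonnegative matrix until D1 A D2 is doubly stochastic, then bound per(A) via the scaled matrix (for doubly stochastic B, per(B) >= n!/n^n > e^{-n}). The sketch below is the plain iteration, not the paper's deterministic strongly polynomial variant.

```python
import numpy as np

def sinkhorn_scale(A, n_iter=500):
    """Find diagonal scalings r, c so that diag(r) @ A @ diag(c) is
    (approximately) doubly stochastic. Plain Sinkhorn iteration;
    assumes A is a square matrix with positive entries."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    r = np.ones(n)
    c = np.ones(n)
    for _ in range(n_iter):
        r = 1.0 / (A @ c)        # make row sums of diag(r) A diag(c) equal 1
        c = 1.0 / (A.T @ r)      # make column sums equal 1
    return np.diag(r) @ A @ np.diag(c), r, c
```

Since per(diag(r) A diag(c)) = per(A) * prod(r) * prod(c), the bound on the doubly stochastic matrix transfers to a multiplicative bound on per(A).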
The Softassign Procrustes Matching Algorithm
 Information Processing in Medical Imaging
, 1997
"... . The problem of matching shapes parameterized as a set of points is frequently encountered in medical imaging tasks. When the pointsets are derived from landmarks, there is usually no problem of determining the correspondences or homologies between the two sets of landmarks. However, when the poin ..."
Abstract

Cited by 60 (4 self)
 Add to MetaCart
The problem of matching shapes parameterized as a set of points is frequently encountered in medical imaging tasks. When the point sets are derived from landmarks, there is usually no problem of determining the correspondences or homologies between the two sets of landmarks. However, when the point sets are automatically derived from images, the difficult problem of establishing correspondence and rejecting non-homologies as outliers remains. The Procrustes method is a well-known method of shape comparison and can always be pressed into service when homologies between point sets are known in advance. This paper presents a powerful extension of the Procrustes method to point sets of differing point counts with correspondences unknown. The result is the softassign Procrustes matching algorithm, which iteratively establishes correspondence, rejects non-homologies as outliers, and determines the Procrustes rescaling and the spatial mapping between the point sets. 1 Introduction. One of the mos...
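The classical, known-correspondence case that the paper extends has a closed-form solution via the SVD. A sketch, with the similarity transform written as s * X @ R + t (generic data assumed, so the optimal orthogonal factor is a proper rotation):

```python
import numpy as np

def procrustes(X, Y):
    """Classical Procrustes alignment for point sets with known row-wise
    correspondences: scale s, rotation R, translation t minimizing
    ||s * X @ R + t - Y||."""
    mx, my = X.mean(0), Y.mean(0)
    Xc, Yc = X - mx, Y - my                 # center both point sets
    U, S, Vt = np.linalg.svd(Xc.T @ Yc)
    R = U @ Vt                              # optimal orthogonal factor
    s = S.sum() / (Xc ** 2).sum()           # optimal scale
    t = my - s * mx @ R
    return s, R, t
```

The softassign extension wraps a solve like this inside an alternation that also estimates the (soft) correspondence matrix.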
A Unifying Theorem for Spectral Embedding and Clustering
, 2003
"... Spectral methods use selected eigenvectors of a data affinity matrix to obtain a data representation that can be trivially clustered or embedded in a lowdimensional space. We present a theorem that explains, for broad classes of affinity matrices and eigenbases, why this works: For successive ..."
Abstract

Cited by 55 (0 self)
 Add to MetaCart
Spectral methods use selected eigenvectors of a data affinity matrix to obtain a data representation that can be trivially clustered or embedded in a low-dimensional space. We present a theorem that explains, for broad classes of affinity matrices and eigenbases, why this works: For successively smaller eigenbases (i.e., using fewer and fewer of the affinity matrix's dominant eigenvalues and eigenvectors), the angles between "similar" vectors in the new representation shrink while the angles between "dissimilar" vectors grow. Specifically, the sum of the squared cosines of the angles is strictly increasing as the dimensionality of the representation decreases. Thus spectral methods work because the truncated eigenbasis amplifies structure in the data so that any heuristic post-processing is more likely to succeed. We use this result to construct a nonlinear dimensionality reduction (NLDR) algorithm for data sampled from manifolds whose intrinsic coordinate system has linear and cyclic axes, and a novel clustering-by-projections algorithm that requires no post-processing and gives superior performance on "challenge problems" from the recent literature.
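A minimal sketch of the embedding step the theorem analyzes: represent each point by its row in the matrix of the affinity matrix's dominant k eigenvectors. In a block-structured affinity, within-cluster rows become nearly parallel while cross-cluster rows become nearly orthogonal, which is the angle behaviour the abstract describes. Normalization variants (e.g. Laplacian-based) are omitted here.

```python
import numpy as np

def spectral_embed(A, k):
    """Embed points as rows of the matrix of the top-k eigenvectors of
    the (symmetric) affinity matrix A."""
    w, V = np.linalg.eigh(A)
    return V[:, np.argsort(w)[::-1][:k]]   # dominant k eigenvectors
```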
Comprehensive Colour Image Normalization
, 1998
"... . The same scene viewed under two different illuminants induces two different colour images. If the two illuminants are the same colour but are placed at different positions then corresponding rgb pixels are related by simple scale factors. In contrast if the lighting geometry is held fixed but the ..."
Abstract

Cited by 46 (5 self)
 Add to MetaCart
The same scene viewed under two different illuminants induces two different colour images. If the two illuminants are the same colour but are placed at different positions, then corresponding rgb pixels are related by simple scale factors. In contrast, if the lighting geometry is held fixed but the colour of the light changes, then it is the individual colour channels (e.g. all the red pixel values or all the green pixels) that are a scaling apart. It is well known that the image dependencies due to lighting geometry and illuminant colour can be respectively removed by normalizing the magnitude of the rgb pixel triplets (e.g. by calculating chromaticities) and by normalizing the lengths of each colour channel (by running the 'grey-world' colour constancy algorithm). However, neither normalization suffices to account for changes in both the lighting geometry and illuminant colour. In this paper we present a new comprehensive image normalization which removes image dependency on lighting...
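The two normalizations the abstract contrasts, and the idea of iterating them jointly, can be sketched as an alternating (Sinkhorn-like) loop over pixel triplets and colour channels. The scale conventions below are illustrative assumptions, not the paper's exact definition; the key property, invariance to both per-pixel and per-channel rescalings, still holds at the fixed point.

```python
import numpy as np

def comprehensive_normalize(I, n_iter=500):
    """Alternately normalize each pixel's rgb triplet (removing lighting
    geometry) and each colour channel (removing illuminant colour) until
    a fixed point. I: array of shape (n_pixels, 3) with positive entries."""
    I = np.asarray(I, dtype=float)
    n = I.shape[0]
    for _ in range(n_iter):
        I = I / I.sum(axis=1, keepdims=True)              # pixel normalization
        I = I / I.sum(axis=0, keepdims=True) * (n / 3.0)  # channel normalization
    return I
```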
The concave-convex procedure (CCCP)
, 2003
"... The ConcaveConvex procedure (CCCP) is a way to construct discrete time iterative dynamical systems which are guaranteed to monotonically decrease global optimization/energy functions. This procedure can be applied to almost any optimization problem and many existing algorithms can be interpreted ..."
Abstract

Cited by 46 (5 self)
 Add to MetaCart
The Concave-Convex Procedure (CCCP) is a way to construct discrete-time iterative dynamical systems which are guaranteed to monotonically decrease global optimization/energy functions. This procedure can be applied to almost any optimization problem, and many existing algorithms can be interpreted in terms of it. In particular, we prove that all EM algorithms and classes of Legendre minimization and variational bounding algorithms can be re-expressed in terms of CCCP. We show that many existing neural network and mean field theory algorithms are also examples of CCCP. The Generalized Iterative Scaling (GIS) algorithm and Sinkhorn's algorithm can also be expressed as CCCP by changing variables. CCCP can be used both as a new way to understand, and prove convergence of, existing optimization algorithms and as a procedure for generating new algorithms.
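A toy instance of the procedure: split f(x) = x^4/4 - x^2 into a convex part x^4/4 and a concave part -x^2, and repeatedly solve grad f_vex(x_new) = -grad f_cave(x_old), i.e. x_new^3 = 2 x_old. Each update has a closed form here and the energy decreases monotonically, which is the guarantee the abstract states; the example is illustrative, not drawn from the paper.

```python
import numpy as np

def cccp_minimize(x0, n_iter=100):
    """CCCP on f(x) = x**4 / 4 - x**2, split as convex x**4/4 plus
    concave -x**2. Update: x_new = cbrt(2 * x_old)."""
    f = lambda x: x ** 4 / 4 - x ** 2
    x = x0
    energies = [f(x)]
    for _ in range(n_iter):
        x = np.cbrt(2 * x)       # closed-form solve of the convex subproblem
        energies.append(f(x))
    return x, energies
```

Starting from x0 = 1, the iterates converge to the minimizer sqrt(2) with the energy sequence non-increasing at every step.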