Results 1–10 of 26
A Graduated Assignment Algorithm for Graph Matching
, 1996
"... A graduated assignment algorithm for graph matching is presented which is fast and accurate even in the presence of high noise. By combining graduated nonconvexity, twoway (assignment) constraints, and sparsity, large improvements in accuracy and speed are achieved. Its low order computational comp ..."
Cited by 285 (15 self)
Abstract
A graduated assignment algorithm for graph matching is presented which is fast and accurate even in the presence of high noise. By combining graduated nonconvexity, two-way (assignment) constraints, and sparsity, large improvements in accuracy and speed are achieved. Its low order computational complexity [O(lm), where l and m are the number of links in the two graphs] and robustness in the presence of noise offer advantages over traditional combinatorial approaches. The algorithm, not restricted to any special class of graph, is applied to subgraph isomorphism, weighted graph matching, and attributed relational graph matching. To illustrate the performance of the algorithm, attributed relational graphs derived from objects are matched. Then, results from twenty-five thousand experiments conducted on 100-node random graphs of varying types (graphs with only zero-one links, weighted graphs, and graphs with node attributes and multiple link types) are reported. No comparable results have...
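The core loop the abstract describes can be sketched as follows: exponentiate a match-compatibility gradient, balance rows and columns toward a doubly stochastic matrix (Sinkhorn iteration), and anneal the temperature parameter upward. This is a minimal sketch assuming equal-sized graphs with symmetric 0/1 adjacency matrices and no outlier (slack) row or column; the schedule constants are illustrative, not taken from the paper.

```python
import numpy as np

def sinkhorn(M, iters=60):
    # Alternate row/column normalization toward a doubly stochastic matrix.
    for _ in range(iters):
        M = M / M.sum(axis=1, keepdims=True)
        M = M / M.sum(axis=0, keepdims=True)
    return M

def graduated_assignment(A, B, beta=0.5, beta_max=200.0, rate=1.5, inner=25):
    # A, B: symmetric 0/1 adjacency matrices of the two graphs (equal size).
    # M[i, a] is the soft degree of match between node i of A and node a of B.
    n, m = A.shape[0], B.shape[0]
    M = np.full((n, m), 1.0 / m)
    while beta < beta_max:
        for _ in range(inner):
            Q = A @ M @ B                # gradient of the match energy
            # Row-stabilized exponentiation (row scaling does not change the
            # Sinkhorn limit); tiny offset keeps every column nonzero.
            E = np.exp(beta * (Q - Q.max(axis=1, keepdims=True)))
            M = sinkhorn(E + 1e-12)
        beta *= rate                     # anneal: sharpen toward a permutation
    return M
```

At low beta the match matrix stays nearly uniform; as beta grows, the doubly stochastic constraint forces it toward a hard permutation.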
Simulated annealing: Practice versus theory
 Mathl. Comput. Modelling
, 1993
"... this paper "ergodic" is used in a very weak sense, as it is not proposed, theoretically or practically, that all states of the system are actually to be visited ..."
Cited by 156 (20 self)
Abstract
this paper "ergodic" is used in a very weak sense, as it is not proposed, theoretically or practically, that all states of the system are actually to be visited
Structural matching by discrete relaxation
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1997
"... Abstract—This paper describes a Bayesian framework for performing relational graph matching by discrete relaxation. Our basic aim is to draw on this framework to provide a comparative evaluation of a number of contrasting approaches to relational matching. Broadly speaking there are two main aspects ..."
Cited by 106 (29 self)
Abstract
This paper describes a Bayesian framework for performing relational graph matching by discrete relaxation. Our basic aim is to draw on this framework to provide a comparative evaluation of a number of contrasting approaches to relational matching. Broadly speaking, there are two main aspects to this study. First, we focus on the issue of how relational inexactness may be quantified. We illustrate that several popular relational distance measures can be recovered as specific limiting cases of the Bayesian consistency measure. The second aspect of our comparison concerns the way in which structural inexactness is controlled. We investigate three different realizations of the matching process which draw on contrasting control models. The main conclusion of our study is that the active process of graph-editing outperforms the alternatives in terms of its ability to effectively control a large population of contaminating clutter.
New Algorithms for 2D and 3D Point Matching: Pose Estimation and Correspondence
"... A fundamental open problem in computer visiondetermining pose and correspondence between two sets of points in spaceis solved with a novel, fast [O(nm)], robust and easily implementable algorithm. The technique works on noisy 2D or 3D point sets that may be of unequal sizes and may differ by n ..."
Cited by 85 (19 self)
Abstract
A fundamental open problem in computer vision, determining pose and correspondence between two sets of points in space, is solved with a novel, fast [O(nm)], robust and easily implementable algorithm. The technique works on noisy 2D or 3D point sets that may be of unequal sizes and may differ by nonrigid transformations. Using a combination of optimization techniques such as deterministic annealing and the softassign, which have recently emerged out of the recurrent neural network/statistical physics framework, analog objective functions describing the problems are minimized. Over thirty thousand experiments, on randomly generated point sets with varying amounts of noise and missing and spurious points, and on handwritten character sets, demonstrate the robustness of the algorithm. Keywords: point matching, pose estimation, correspondence, neural networks, optimization, softassign, deterministic annealing, affine.
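The alternation at the heart of such pose-and-correspondence methods can be illustrated with a deliberately simplified hard-assignment variant: plain nearest-neighbour ICP with an affine least-squares fit. The paper's algorithm instead keeps correspondences soft (softassign under deterministic annealing), which is what buys robustness to outliers and unequal set sizes; this sketch assumes clean, equal-sized sets and a small transform.

```python
import numpy as np

def icp_affine(X, Y, iters=20):
    # Alternate nearest-neighbour correspondence with a least-squares affine fit.
    Xh = np.hstack([X, np.ones((len(X), 1))])                 # homogeneous coords
    P = np.vstack([np.eye(X.shape[1]), np.zeros(X.shape[1])])  # start at identity
    for _ in range(iters):
        Z = Xh @ P                                   # current transform of X
        D = ((Z[:, None] - Y[None, :]) ** 2).sum(-1)
        nn = D.argmin(1)                             # hard correspondence i -> nn[i]
        P, *_ = np.linalg.lstsq(Xh, Y[nn], rcond=None)
    return Xh @ P, nn
```

With a small noiseless transform the correct correspondences are found on the first pass and the affine fit is then exact; the softassign version degrades much more gracefully when that is not the case.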
Symmetry-based Indexing of Image Databases
 J. Visual Communication and Image Representation
, 1998
"... The use of shape as a cue for indexing into pictorial databases has been traditionally based on global invariant statistics and deformable templates, on the one hand, and local edge correlation on the other. This paper proposes an intermediate approach based on a characterization of the symmetry in ..."
Cited by 76 (5 self)
Abstract
The use of shape as a cue for indexing into pictorial databases has traditionally been based on global invariant statistics and deformable templates, on the one hand, and local edge correlation on the other. This paper proposes an intermediate approach based on a characterization of the symmetry in edge maps. The use of symmetry matching as a joint correlation measure between pairs of edge elements further constrains the comparison of edge maps. In addition, a natural organization of groups of symmetry into a hierarchy leads to a graph-based representation of the relational structure of components of shape that allows for deformations by changing attributes of this relational graph. A graduated assignment graph matching algorithm is used to match symmetry structure in images to stored prototypes or sketches. The results of matching sketches and greyscale images against a small database consisting of a variety of fish, planes, tools, etc., are depicted.
Structural graph matching using the EM algorithm and singular value decomposition
 IEEE Trans. PAMI
, 2001
"... AbstractÐThis paper describes an efficient algorithm for inexact graph matching. The method is purely structural, that is to say, it uses only the edge or connectivity structure of the graph and does not draw on node or edge attributes. We make two contributions. Commencing from a probability distri ..."
Cited by 66 (8 self)
Abstract
This paper describes an efficient algorithm for inexact graph matching. The method is purely structural, that is to say, it uses only the edge or connectivity structure of the graph and does not draw on node or edge attributes. We make two contributions. Commencing from a probability distribution for matching errors, we show how the problem of graph matching can be posed as maximum-likelihood estimation using the apparatus of the EM algorithm. Our second contribution is to cast the recovery of correspondence matches between the graph nodes in a matrix framework. This allows us to efficiently recover correspondence matches using singular value decomposition. We experiment with the method on both real-world and synthetic data. Here, we demonstrate that the method offers comparable performance to more computationally demanding methods. Index Terms: Inexact graph matching, EM algorithm, matrix factorization, mixture models, Delaunay triangulations.
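The matrix step, recovering correspondences by replacing the singular values of a node-compatibility matrix with ones, can be sketched on its own, in the spirit of Scott and Longuet-Higgins' association method (the paper wraps such a step inside an EM loop). The Gaussian compatibility used in the usage example is an illustrative choice, not the paper's.

```python
import numpy as np

def svd_correspondence(C):
    # Replace the singular values of the compatibility matrix C with ones:
    # P = U V' is the orthogonal matrix closest to C. Entries of P that are
    # the maximum of both their row and their column are kept as matches.
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    P = U @ Vt
    matches = []
    for i in range(P.shape[0]):
        j = int(P[i].argmax())
        if int(P[:, j].argmax()) == i:
            matches.append((i, j))
    return matches
```

When C is a symmetric positive-definite kernel composed with a permutation, the orthogonal polar factor U V' is exactly that permutation matrix, so the row/column maxima recover the correspondence outright.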
Vector Quantization with Complexity Costs
, 1993
"... Vector quantization is a data compression method where a set of data points is encoded by a reduced set of reference vectors, the codebook. We discuss a vector quantization strategy which jointly optimizes distortion errors and the codebook complexity, thereby, determining the size of the codebook. ..."
Cited by 54 (18 self)
Abstract
Vector quantization is a data compression method where a set of data points is encoded by a reduced set of reference vectors, the codebook. We discuss a vector quantization strategy which jointly optimizes distortion errors and the codebook complexity, thereby determining the size of the codebook. A maximum entropy estimation of the cost function yields an optimal number of reference vectors, their positions and their assignment probabilities. The dependence of the codebook density on the data density for different complexity functions is investigated in the limit of asymptotic quantization levels. How different complexity measures influence the efficiency of vector quantizers is studied for the task of image compression, i.e., we quantize the wavelet coefficients of gray level images and measure the reconstruction error. Our approach establishes a unifying framework for different quantization methods like K-means clustering and its fuzzy version, entropy-constrained vector quantization...
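A crude stand-in for the distortion/complexity trade-off the abstract describes: run K-means for each candidate codebook size and pick the size minimizing distortion plus a weighted entropy (code-length) term. This is only an illustrative sketch, not the paper's maximum entropy estimation; the deterministic farthest-point initialization and the weight `lam` are arbitrary choices made here for reproducibility.

```python
import numpy as np

def kmeans(X, k, iters=50):
    # Deterministic farthest-point initialization, then Lloyd iterations.
    C = [X[0]]
    for _ in range(k - 1):
        D = ((X[:, None] - np.array(C)[None]) ** 2).sum(-1).min(1)
        C.append(X[D.argmax()])
    C = np.array(C)
    for _ in range(iters):
        a = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (a == j).any():
                C[j] = X[a == j].mean(0)
    d = ((X - C[a]) ** 2).sum(-1).mean()    # mean squared quantization error
    return C, a, d

def penalized_codebook(X, ks, lam):
    # Choose the codebook size minimizing distortion + lam * code length,
    # where code length is the entropy of the assignment frequencies (bits).
    best = None
    for k in ks:
        C, a, d = kmeans(X, k)
        p = np.bincount(a, minlength=k) / len(X)
        H = -(p[p > 0] * np.log2(p[p > 0])).sum()
        cost = d + lam * H
        if best is None or cost < best[0]:
            best = (cost, k, C)
    return best[1], best[2]
```

Distortion alone always decreases with codebook size; the entropy term makes the cost turn back up past the natural number of clusters.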
Replicator Equations, Maximal Cliques, and Graph Isomorphism
, 1999
"... We present a new energyminimization framework for the graph isomorphism problem that is based on an equivalent maximum clique formulation. The approach is centered around a fundamental result proved by Motzkin and Straus in the mid1960s, and recently expanded in various ways, which allows us to fo ..."
Cited by 53 (11 self)
Abstract
We present a new energy-minimization framework for the graph isomorphism problem that is based on an equivalent maximum clique formulation. The approach is centered around a fundamental result proved by Motzkin and Straus in the mid-1960s, and recently expanded in various ways, which allows us to formulate the maximum clique problem in terms of a standard quadratic program. The attractive feature of this formulation is that a clear one-to-one correspondence exists between the solutions of the quadratic program and those in the original, combinatorial problem. To solve the program we use the so-called replicator equations, a class of straightforward continuous- and discrete-time dynamical systems developed in various branches of theoretical biology. We show how, despite their inherent inability to escape from local solutions, they nevertheless provide experimental results that are competitive with those obtained using more elaborate mean-field annealing heuristics.
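The replicator iteration itself is tiny. The sketch below applies the discrete-time dynamics to a regularized Motzkin-Straus program (the 1/2 added to the diagonal is Bomze's regularization, under which strict local maxima correspond exactly to maximal cliques); the starting point and iteration count are illustrative, and the paper's experiments go well beyond this bare loop.

```python
import numpy as np

def replicator_clique(A, iters=1000):
    # Discrete-time replicator dynamics for max x' (A + I/2) x on the simplex.
    # The support of the limit point indicates a maximal clique of the graph
    # with 0/1 adjacency matrix A.
    n = len(A)
    W = A + 0.5 * np.eye(n)
    x = np.full(n, 1.0 / n)          # start at the barycenter of the simplex
    for _ in range(iters):
        g = W @ x                     # payoffs
        x = x * g / (x @ g)           # grow components with above-average payoff
    return np.where(x > 1e-6)[0]
```

The dynamics monotonically increase the quadratic objective but can only find a local solution, i.e. a maximal (not necessarily maximum) clique, which is exactly the limitation the abstract acknowledges.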
A Novel Optimizing Network Architecture with Applications
 Neural Computation
, 1996
"... We present a novel optimizing network architecture with applications in vision, learning, pattern recognition and combinatorial optimization. This architecture is constructed by combining the following techniques: (i) deterministic annealing, (ii) selfamplification, (iii) algebraic transformations, ..."
Cited by 35 (16 self)
Abstract
We present a novel optimizing network architecture with applications in vision, learning, pattern recognition and combinatorial optimization. This architecture is constructed by combining the following techniques: (i) deterministic annealing, (ii) self-amplification, (iii) algebraic transformations, (iv) clocked objectives and (v) softassign. Deterministic annealing in conjunction with self-amplification avoids poor local minima and ensures that a vertex of the hypercube is reached. Algebraic transformations and clocked objectives help partition the relaxation into distinct phases. The problems considered have doubly stochastic matrix constraints or minor variations thereof. We introduce a new technique, softassign, which is used to satisfy this constraint. Experimental results on different problems are presented and discussed.
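The softassign operation in isolation: exponentiating a benefit matrix and then alternately normalizing rows and columns drives it toward the doubly stochastic polytope, and toward a permutation matrix as the inverse-temperature beta grows. A minimal sketch; the iteration count is an illustrative choice, and the full architecture embeds this inside the annealed, clocked relaxation described above.

```python
import numpy as np

def softassign(Q, beta, iters=100):
    # Exponentiate the benefit matrix (stabilized by subtracting the max),
    # then Sinkhorn-balance rows and columns toward double stochasticity.
    M = np.exp(beta * (Q - Q.max()))
    for _ in range(iters):
        M = M / M.sum(axis=1, keepdims=True)
        M = M / M.sum(axis=0, keepdims=True)
    return M
```

Unlike a plain row-wise softmax, the two-sided normalization enforces the assignment constraint in both directions, which is why it can satisfy doubly stochastic matrix constraints.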
A Unifying Objective Function for Topographic Mappings
, 1997
"... Many different algorithms and objective functions for topographic mappings have been proposed. We show that several of these approaches can be seen as particular cases of a more general objective function. Consideration of a very simple mapping problem reveals large differences in the form of the ma ..."
Cited by 30 (4 self)
Abstract
Many different algorithms and objective functions for topographic mappings have been proposed. We show that several of these approaches can be seen as particular cases of a more general objective function. Consideration of a very simple mapping problem reveals large differences in the form of the map that each particular case favors. These differences have important consequences for the practical application of topographic mapping methods.