A Graduated Assignment Algorithm for Graph Matching
, 1996
Cited by 291 (15 self)

Abstract:
A graduated assignment algorithm for graph matching is presented which is fast and accurate even in the presence of high noise. By combining graduated nonconvexity, two-way (assignment) constraints, and sparsity, large improvements in accuracy and speed are achieved. Its low-order computational complexity [O(lm), where l and m are the number of links in the two graphs] and robustness in the presence of noise offer advantages over traditional combinatorial approaches. The algorithm, not restricted to any special class of graph, is applied to subgraph isomorphism, weighted graph matching, and attributed relational graph matching. To illustrate the performance of the algorithm, attributed relational graphs derived from objects are matched. Then, results from twenty-five thousand experiments conducted on 100-node random graphs of varying types (graphs with only zero-one links, weighted graphs, and graphs with node attributes and multiple link types) are reported. No comparable results have...
Learning with Labeled and Unlabeled Data
, 2001
Cited by 170 (3 self)

Abstract:
In this paper, on the one hand, we aim to review the literature dealing with the problem of supervised learning aided by additional unlabeled data. On the other hand, being part of the author's first-year PhD report, the paper serves as a frame to bundle related work by the author as well as numerous suggestions for potential future work. Therefore, this work contains more speculative and partly subjective material than the reader might expect from a literature review. We give a rigorous definition of the problem and relate it to supervised and unsupervised learning. The crucial role of prior knowledge is put forward, and we discuss the important notion of input-dependent regularization. We postulate a number of baseline methods, being algorithms or algorithmic schemes which can more or less straightforwardly be applied to the problem, without the need for genuinely new concepts. However, some of them might serve as a basis for a genuine method. In the literature revi...
Self-Organizing Maps, Vector Quantization, and Mixture Modeling
 IEEE Transactions on Neural Networks
, 2001
Cited by 25 (0 self)

Abstract:
Self-organizing maps are popular algorithms for unsupervised learning and data visualization. Exploiting the link between vector quantization and mixture modeling, we derive EM algorithms for self-organizing maps with and without missing values. We compare self-organizing maps with the elastic-net approach and explain why the former is better suited for the visualization of high-dimensional data. Several extensions and improvements are discussed. As an illustration we apply a self-organizing map based on a multinomial distribution to market basket analysis.

I. Introduction

Self-organizing maps are popular tools for clustering and visualization of high-dimensional data [1], [2]. The well-known Kohonen learning algorithm can be interpreted as a variant of vector quantization with additional lateral interactions [3], [4]. The addition of lateral interaction between units introduces a sense of topology, such that neighboring units represent inputs that are close together in input space [...
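The vector-quantization view above lends itself to a compact batch formulation: an E-step assigns each point to its best-matching unit, and an M-step replaces each codebook vector by a neighborhood-weighted mean over the map. A minimal sketch of plain batch SOM (grid size, cooling schedule, and Gaussian kernel are assumptions, not the paper's EM derivation):

```python
import numpy as np

def batch_som(X, grid_w=4, grid_h=4, iters=30, sigma0=2.0, sigma_min=0.2, seed=0):
    """Batch self-organizing map: vector quantization with lateral interactions."""
    rng = np.random.default_rng(seed)
    k = grid_w * grid_h
    # coordinates of the units on the 2-D map (this defines the topology)
    grid = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)], float)
    d2 = ((grid[:, None, :] - grid[None, :, :]) ** 2).sum(-1)
    W = X[rng.choice(len(X), k, replace=False)]   # init codebook from the data
    for t in range(iters):
        sigma = sigma0 * (sigma_min / sigma0) ** (t / max(iters - 1, 1))
        # E-step: best-matching unit for every input point
        bmu = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        H = np.exp(-d2 / (2 * sigma ** 2))        # neighborhood kernel on the map
        R = H[:, bmu]                             # influence of unit u on point n
        # M-step: each unit becomes a neighborhood-weighted mean of the data
        W = (R @ X) / (R.sum(axis=1, keepdims=True) + 1e-12)
    return W, bmu
```

Shrinking the neighborhood width over iterations is what moves the algorithm from topology-preserving smoothing toward ordinary vector quantization.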
A Global Optimization Technique for Statistical Classifier Design
 IEEE Transactions on Signal Processing
Cited by 25 (9 self)

Abstract:
A global optimization method is introduced for the design of statistical classifiers that minimize the rate of misclassification. We first derive the theoretical basis for the method, based on which we develop a novel design algorithm and demonstrate its effectiveness and superior performance in the design of practical classifiers for some of the most popular structures currently in use. The method, grounded in ideas from statistical physics and information theory, extends the deterministic annealing approach for optimization, both to incorporate structural constraints on data assignments to classes and to minimize the probability of error as the cost objective. During the design, data are assigned to classes in probability, so as to minimize the expected classification error given a specified level of randomness, as measured by Shannon's entropy. The constrained optimization is equivalent to a free energy minimization, motivating a deterministic annealing approach in which the entropy...
A Unified Nonrigid Feature Registration Method For Brain Mapping
, 2002
Cited by 23 (3 self)

Abstract:
This paper describes the design, implementation and results of a unified nonrigid feature registration method for the purpose of anatomical MRI brain registration. An important characteristic of the method is its ability to take into account the spatial interrelationships of different types of features. We demonstrate the application of the method using two different types of features: the outer cortical surface and major sulcal ribbons. Points subsampled from each type of feature are fused into a common 3D pointset representation. Nonrigid registration of the features is then performed using a new robust nonrigid point matching algorithm. The point matching algorithm implements an iterative joint clustering and matching (JCM) strategy which effectively reduces the computational complexity without sacrificing accuracy. We have conducted carefully designed synthetic experiments to gauge the effect of using different types of features either separately or together. A validation study examining the accuracy of nonrigid alignment of many brain structures is also presented. Finally, we present anecdotal results on the alignment of two subjects' MRI brain data.
On Fitting Mixture Models
, 1999
Cited by 22 (4 self)

Abstract:
Consider the problem of fitting a finite Gaussian mixture, with an unknown number of components, to observed data. This paper proposes a new minimum description length (MDL) type criterion, termed MMDL (for mixture MDL), to select the number of components of the model. MMDL is based on the identification of an "equivalent sample size", for each component, which does not coincide with the full sample size. We also introduce an algorithm based on the standard expectation-maximization (EM) approach together with a new agglomerative step, called agglomerative EM (AEM). The reported experiments show that MMDL outperforms existing criteria of comparable computational cost. The good behavior of AEM, namely its robustness with respect to initialization, is also illustrated experimentally.
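For reference, the standard EM baseline on which model-selection criteria such as MMDL and the agglomerative variant operate can be sketched compactly; the model-selection and agglomeration steps themselves are not shown, and the spherical-covariance model, names, and defaults below are assumptions:

```python
import numpy as np

def em_gmm(X, k, iters=100, seed=0):
    """EM for a spherical Gaussian mixture with k components (illustrative)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, k, replace=False)]   # init means from the data
    var = np.full(k, X.var())                 # one spherical variance per component
    pi = np.full(k, 1.0 / k)                  # mixing proportions
    for _ in range(iters):
        # E-step: responsibilities under spherical Gaussians, in the log domain
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        logp = np.log(pi) - 0.5 * d * np.log(2 * np.pi * var) - d2 / (2 * var)
        logp -= logp.max(axis=1, keepdims=True)
        R = np.exp(logp)
        R /= R.sum(axis=1, keepdims=True)
        # M-step: weighted re-estimation of the parameters
        Nk = R.sum(axis=0)
        pi = Nk / n
        mu = (R.T @ X) / Nk[:, None]
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        var = (R * d2).sum(axis=0) / (d * Nk) + 1e-9
    return pi, mu, var
```

The per-component weights Nk computed in the M-step are exactly the "equivalent sample sizes" that an MMDL-style criterion would penalize instead of the full sample size.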
FREM: Fast and Robust EM Clustering for Large Data Sets
 In ACM CIKM Conference
, 2002
Cited by 16 (2 self)

Abstract:
Clustering is a fundamental Data Mining technique. This article presents an improved EM algorithm to cluster large data sets with high dimensionality, noise, and zero-variance problems. The algorithm incorporates improvements to increase the quality of solutions and speed. In general the algorithm can find a good clustering solution in three scans over the data set. Alternatively, it can be run until it converges. The algorithm has a few parameters that are easy to set and have defaults for most cases. The proposed algorithm is compared against the standard EM algorithm and the On-Line EM algorithm.
A Bayesian Joint Mixture Framework for the Integration of Anatomical Information in Functional Image Reconstruction
 Journal of Mathematical Imaging and Vision
, 1998
Cited by 9 (3 self)

Abstract:
We present a Bayesian joint mixture framework for integrating anatomical image intensity and region segmentation information into emission tomographic reconstruction in medical imaging. The joint mixture framework is particularly well suited for this problem and allows us to integrate additional available information such as anatomical region segmentation information into the Bayesian model. Since this information is independently available, as opposed to being estimated, it acts as a good constraint on the joint mixture model. After specifying the joint mixture model, we combine it with the standard emission tomographic likelihood. The Bayesian posterior is a combination of this likelihood and the joint mixture prior. Since well-known EM algorithms separately exist for both the emission tomography (ET) likelihood and the joint mixture prior, we have designed a novel EM² algorithm that comprises two EM algorithms: one for the likelihood and one for the prior. Despite being dovetail...
Learning in Compositional Hierarchies: Inducing the Structure of Objects from Data
 In Advances in Neural Information Processing Systems 6
, 1994
Cited by 6 (0 self)

Abstract:
I propose an algorithm for learning hierarchical models for object recognition. The model architecture is a compositional hierarchy that represents part-whole relationships: parts are described in the local context of substructures of the object. The focus of this report is learning hierarchical models from data, i.e. inducing the structure of model prototypes from observed exemplars of an object. At each node in the hierarchy, a probability distribution governing its parameters must be learned. The connections between nodes reflect the structure of the object. The formulation of substructures is encouraged such that their parts become conditionally independent. The resulting model can be interpreted as a Bayesian belief network and is also in many respects similar to the stochastic visual grammar described by Mjolsness.

1 INTRODUCTION

Model-based object recognition solves the problem of invariant recognition by relying on stored prototypes at unit scale positioned at the ori...