A Graduated Assignment Algorithm for Graph Matching, 1996
Cited by 285 (15 self)
Abstract:
A graduated assignment algorithm for graph matching is presented which is fast and accurate even in the presence of high noise. By combining graduated nonconvexity, two-way (assignment) constraints, and sparsity, large improvements in accuracy and speed are achieved. Its low-order computational complexity [O(lm), where l and m are the number of links in the two graphs] and robustness in the presence of noise offer advantages over traditional combinatorial approaches. The algorithm, not restricted to any special class of graph, is applied to subgraph isomorphism, weighted graph matching, and attributed relational graph matching. To illustrate the performance of the algorithm, attributed relational graphs derived from objects are matched. Then, results from twenty-five thousand experiments conducted on 100-node random graphs of varying types (graphs with only zero-one links, weighted graphs, and graphs with node attributes and multiple link types) are reported. No comparable results have...
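The loop the abstract describes (graduated nonconvexity over a temperature parameter, with the two-way assignment constraints enforced by alternating row and column normalization) can be sketched compactly. The following is a minimal illustration only, assuming equal-sized graphs given as weighted adjacency matrices; the paper's actual algorithm adds a slack row and column to handle unequal sizes and outliers, and all parameter values here are hypothetical.

```python
import numpy as np

def graduated_assignment(G, g, beta0=0.5, beta_max=10.0, rate=1.075,
                         n_sinkhorn=30, n_inner=4):
    """Soft matching between two weighted graphs of equal size.

    G, g: (n, n) weighted adjacency matrices. Returns a soft (n, n)
    match matrix; harden it afterwards (e.g. greedy row-wise argmax).
    """
    n = G.shape[0]
    M = np.full((n, n), 1.0 / n)               # near-uniform start
    beta = beta0
    while beta < beta_max:                     # graduated nonconvexity: raise beta
        for _ in range(n_inner):
            Q = G @ M @ g.T                    # gradient of the compatibility
            M = np.exp(beta * (Q - Q.max()))   # stabilized softmax reweighting
            for _ in range(n_sinkhorn):        # Sinkhorn loop: the two-way
                M /= M.sum(axis=1, keepdims=True)   # (assignment) constraints
                M /= M.sum(axis=0, keepdims=True)
        beta *= rate
    return M
```

Raising beta slowly keeps the objective nearly convex at first, which is what lets this style of method sidestep many of the local minima that trap purely combinatorial search.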

Learning with Labeled and Unlabeled Data, 2001
Cited by 165 (3 self)
Abstract:
In this paper, on the one hand, we aim to give a review of the literature dealing with the problem of supervised learning aided by additional unlabeled data. On the other hand, being part of the author's first-year PhD report, the paper serves as a frame to bundle related work by the author as well as numerous suggestions for potential future work. Therefore, this work contains more speculative and partly subjective material than the reader might expect from a literature review. We give a rigorous definition of the problem and relate it to supervised and unsupervised learning. The crucial role of prior knowledge is put forward, and we discuss the important notion of input-dependent regularization. We postulate a number of baseline methods, being algorithms or algorithmic schemes which can more or less straightforwardly be applied to the problem, without the need for genuinely new concepts. However, some of them might serve as a basis for a genuine method. In the literature review...

Latent Variable Models for Neural Data Analysis, 1999
Cited by 42 (5 self)
Abstract:
The brain is perhaps the most complex system to have ever been subjected to rigorous scientific investigation. The scale is staggering: over 10^11 neurons, each making an average of 10^3 synapses, with computation occurring on scales ranging from a single dendritic spine to an entire cortical area. Slowly, we are beginning to acquire experimental tools that can gather the massive amounts of data needed to characterize this system. However, to understand and interpret these data will also require substantial strides in inferential and statistical techniques. This dissertation attempts to meet this need, extending and applying the modern tools of latent variable modeling to problems in neural data analysis. It is divided...

Self-Organizing Maps, Vector Quantization, and Mixture Modeling, IEEE Transactions on Neural Networks, 2001
Cited by 25 (0 self)
Abstract:
Self-organizing maps are popular algorithms for unsupervised learning and data visualization. Exploiting the link between vector quantization and mixture modeling, we derive EM algorithms for self-organizing maps with and without missing values. We compare self-organizing maps with the elastic-net approach and explain why the former is better suited for the visualization of high-dimensional data. Several extensions and improvements are discussed. As an illustration we apply a self-organizing map based on a multinomial distribution to market basket analysis.
I. Introduction. Self-organizing maps are popular tools for clustering and visualization of high-dimensional data [1], [2]. The well-known Kohonen learning algorithm can be interpreted as a variant of vector quantization with additional lateral interactions [3], [4]. The addition of lateral interaction between units introduces a sense of topology, such that neighboring units represent inputs that are close together in input space [...]
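To make the vector quantization / mixture modeling link concrete, here is a hedged sketch of an EM-style batch update for a map of isotropic Gaussian units coupled by a fixed neighborhood function. The function name, the particular smoothing step, and all parameters are illustrative assumptions; the paper's derivation, including the missing-value case, differs in detail.

```python
import numpy as np

def som_em(X, grid, sigma=1.0, n_iter=50, seed=0):
    """EM-style batch training of a SOM viewed as a mixture of isotropic
    Gaussian units coupled through a fixed neighborhood function.

    X: (N, D) data; grid: (K, 2) map coordinates of the K units.
    Returns the (K, D) codebook; sigma sets the neighborhood width.
    """
    rng = np.random.default_rng(seed)
    N, K = X.shape[0], grid.shape[0]
    mu = X[rng.choice(N, K, replace=False)].copy()      # init codebook
    # H[k, l]: coupling between map units k and l (row-normalized Gaussian)
    d2 = ((grid[:, None, :] - grid[None, :, :]) ** 2).sum(-1)
    H = np.exp(-d2 / (2.0 * sigma ** 2))
    H /= H.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: unit responsibilities, smoothed across the map topology
        dist2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)     # (N, K)
        R = np.exp(-0.5 * (dist2 - dist2.min(axis=1, keepdims=True)))
        R = R @ H.T                          # lateral interaction on the map
        R /= R.sum(axis=1, keepdims=True)
        # M-step: each codebook vector is a responsibility-weighted mean
        mu = (R.T @ X) / (R.sum(axis=0)[:, None] + 1e-12)
    return mu
```

The neighborhood matrix H is what distinguishes this from plain mixture EM: it forces units that are close on the map to share responsibility for the same data, which is the topology-preserving property exploited for visualization.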

A Global Optimization Technique for Statistical Classifier Design, IEEE Transactions on Signal Processing
Cited by 25 (9 self)
Abstract:
A global optimization method is introduced for the design of statistical classifiers that minimize the rate of misclassification. We first derive the theoretical basis for the method, based on which we develop a novel design algorithm and demonstrate its effectiveness and superior performance in the design of practical classifiers for some of the most popular structures currently in use. The method, grounded in ideas from statistical physics and information theory, extends the deterministic annealing approach for optimization, both to incorporate structural constraints on data assignments to classes and to minimize the probability of error as the cost objective. During the design, data are assigned to classes in probability, so as to minimize the expected classification error given a specified level of randomness, as measured by Shannon's entropy. The constrained optimization is equivalent to a free energy minimization, motivating a deterministic annealing approach in which the entropy...
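The design loop the abstract outlines (probabilistic assignments at a controlled entropy level, followed by gradual cooling) can be illustrated for the simplest case of a prototype classifier. This is a sketch under stated assumptions, not the paper's method: prototypes carry fixed class labels, assignments are Gibbs distributions at temperature T, and plain gradient descent reduces the expected misclassification error before T is lowered.

```python
import numpy as np

def da_design(X, y, proto_labels, T0=5.0, T_min=0.05, cool=0.9,
              lr=0.5, inner=40, seed=0):
    """X: (N, D) data, y: (N,) labels, proto_labels: (J,) fixed class
    label of each prototype. Returns (J, D) prototype locations."""
    rng = np.random.default_rng(seed)
    N, J = X.shape[0], len(proto_labels)
    mu = X[rng.choice(N, J, replace=False)].copy()
    # err[n, j] = 1 if assigning point n to prototype j misclassifies it
    err = (np.asarray(proto_labels)[None, :] != np.asarray(y)[:, None]).astype(float)
    T = T0
    while T > T_min:
        for _ in range(inner):
            d = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)      # (N, J)
            p = np.exp(-(d - d.min(axis=1, keepdims=True)) / T)
            p /= p.sum(axis=1, keepdims=True)        # Gibbs assignments at T
            E_n = (p * err).sum(axis=1, keepdims=True)   # per-point expected error
            w = p * (err - E_n)                          # softmax gradient weights
            grad = (2.0 / (N * T)) * np.einsum('nj,njd->jd', w,
                                               X[:, None, :] - mu[None, :, :])
            mu -= lr * grad                  # descend on expected error at this T
        T *= cool                            # lower the temperature
    return mu
```

At high T the assignments are nearly uniform and the expected-error surface is smooth; as T approaches zero they harden into the nearest-prototype rule, which is the sense in which annealing pursues a global rather than local optimum.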

On Fitting Mixture Models, 1999
Cited by 22 (4 self)
Abstract:
Consider the problem of fitting a finite Gaussian mixture, with an unknown number of components, to observed data. This paper proposes a new minimum description length (MDL) type criterion, termed MMDL (for mixture MDL), to select the number of components of the model. MMDL is based on the identification of an "equivalent sample size" for each component, which does not coincide with the full sample size. We also introduce an algorithm based on the standard expectation-maximization (EM) approach together with a new agglomerative step, called agglomerative EM (AEM). The experiments reported here show that MMDL outperforms existing criteria of comparable computational cost. The good behavior of AEM, namely its robustness with respect to initialization, is also illustrated experimentally.
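A hedged sketch of the resulting selection loop: fit mixtures of increasing order and score each with an MDL-style criterion whose per-component penalty depends on the equivalent sample size n*alpha_m rather than on n. The penalty constants below are approximations, and plain EM (via scikit-learn) stands in for the agglomerative AEM procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mmdl_select(X, k_max=10):
    """Pick the number of Gaussian components with an MMDL-style score.
    Constants are illustrative approximations, not the paper's exact criterion."""
    n, d = X.shape
    n_params = d + d * (d + 1) / 2           # mean + full covariance per component
    best = (None, np.inf, None)              # (k, score, model)
    for k in range(1, k_max + 1):
        gmm = GaussianMixture(n_components=k, n_init=3, random_state=0).fit(X)
        ll = gmm.score(X) * n                # total log-likelihood
        alpha = gmm.weights_
        # per-component penalty uses the equivalent sample size n * alpha_m;
        # the second term is an approximate penalty for the k mixing weights
        penalty = 0.5 * n_params * np.log(n * alpha).sum() \
                  + 0.5 * (k - 1) * np.log(n)
        score = -ll + penalty
        if score < best[1]:
            best = (k, score, gmm)
    return best[0], best[2]
```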

FREM: Fast and Robust EM Clustering for Large Data Sets, in ACM CIKM Conference, 2002
Cited by 16 (2 self)
Abstract:
Clustering is a fundamental data mining technique. This article presents an improved EM algorithm to cluster large data sets that exhibit high dimensionality, noise, and zero-variance problems. The algorithm incorporates improvements to increase the quality of solutions and speed. In general, the algorithm can find a good clustering solution in three scans over the data set. Alternatively, it can be run until it converges. The algorithm has a few parameters that are easy to set and have defaults suitable for most cases. The proposed algorithm is compared against the standard EM algorithm and the On-Line EM algorithm.
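As a rough illustration of EM run for a handful of scans, the sketch below accumulates sufficient statistics (counts, sums, sums of squares) in each pass and refits a diagonal-covariance mixture from them, flooring variances to sidestep the zero-variance problem the abstract mentions. FREM's specific quality and speed improvements are not reproduced; every name and constant here is an assumption.

```python
import numpy as np

def em_diag_scan(X, k, n_scans=3, seed=0):
    """Diagonal-covariance mixture EM, one E/M iteration per data scan."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, k, replace=False)].copy()
    var = np.full((k, d), X.var(axis=0))
    w = np.full(k, 1.0 / k)
    for _ in range(n_scans):
        # E-step over the whole scan (log domain for numerical stability)
        logp = (-0.5 * ((((X[:, None, :] - mu) ** 2) / var)
                        + np.log(var)).sum(-1) + np.log(w))
        logp -= logp.max(axis=1, keepdims=True)
        R = np.exp(logp)
        R /= R.sum(axis=1, keepdims=True)
        Nj = R.sum(axis=0)                   # sufficient statistic: counts
        Sj = R.T @ X                         # sufficient statistic: sums
        Qj = R.T @ (X ** 2)                  # sufficient statistic: squares
        # M-step from the sufficient statistics alone
        w = Nj / n
        mu = Sj / Nj[:, None]
        var = np.maximum(Qj / Nj[:, None] - mu ** 2, 1e-6)  # floor variances
    return w, mu, var
```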

A Bayesian Joint Mixture Framework for the Integration of Anatomical Information in Functional Image Reconstruction, Journal of Mathematical Imaging and Vision, 1998
Cited by 9 (3 self)
Abstract:
We present a Bayesian joint mixture framework for integrating anatomical image intensity and region segmentation information into emission tomographic reconstruction in medical imaging. The joint mixture framework is particularly well suited for this problem and allows us to integrate additional available information, such as anatomical region segmentation information, into the Bayesian model. Since this information is independently available, as opposed to being estimated, it acts as a good constraint on the joint mixture model. After specifying the joint mixture model, we combine it with the standard emission tomographic likelihood. The Bayesian posterior is a combination of this likelihood and the joint mixture prior. Since well-known EM algorithms separately exist for both the emission tomography (ET) likelihood and the joint mixture prior, we have designed a novel EM^2 algorithm that comprises two EM algorithms: one for the likelihood and one for the prior. Despite being dovetail...

Non-Rigid Point Matching: Algorithms, Extensions and Applications, 2001
Cited by 6 (0 self)
Abstract:
A new algorithm has been developed in this thesis for the non-rigid point matching problem. Designed as an integrated framework, the algorithm jointly estimates a one-to-one correspondence and a non-rigid transformation between two sets of points. The resulting algorithm is called the “robust point matching” (RPM) algorithm because of its capability to tolerate noise and to reject possible outliers present within the data points. The algorithm is built upon the heuristic of “fuzzy correspondence”, which allows for multiple partial correspondences between points. With the help of the deterministic annealing technique, this new heuristic enables the algorithm to overcome many local minima that can be encountered in the matching process. Devised as a general point matching framework, the algorithm can be easily extended to accommodate different specific requirements in many registration applications. Firstly, the modular design of the transformation module enables convenient incorporation of different non-rigid splines. Secondly, the point matching algorithm can be easily extended into a symmetric joint clustering-matching framework. It will be shown that by introducing a super point-set, the joint cluster-matching extension can be applied to estimate an average shape point-set from multiple point shape sets. The algorithm is applied to the registration of 3D brain anatomical structures. We propose in this work a joint feature registration framework, which is mainly based on the joint clustering-matching extension of the robust...
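The alternation at the heart of RPM (update a fuzzy correspondence matrix under deterministic annealing, then refit the transformation against the implied targets) can be sketched with an affine map standing in for the thesis's non-rigid splines. The outlier row and column are omitted and all parameter values are assumptions; this is an illustration, not the thesis's algorithm.

```python
import numpy as np

def rpm_affine(X, Y, T0=1.0, T_final=0.01, cool=0.93, n_sinkhorn=20):
    """X: (N, D) moving points, Y: (M, D) target points.
    Returns (A, t) with X @ A.T + t approximately matching Y."""
    N, D = X.shape
    A, t = np.eye(D), np.zeros(D)
    T = T0
    while T > T_final:
        Xt = X @ A.T + t
        d2 = ((Xt[:, None, :] - Y[None, :, :]) ** 2).sum(-1)      # (N, M)
        C = np.exp(-d2 / T)                  # fuzzy correspondence at temp T
        for _ in range(n_sinkhorn):          # alternating row/column norms
            C /= C.sum(axis=1, keepdims=True) + 1e-12
            C /= C.sum(axis=0, keepdims=True) + 1e-12
        # each moving point's "virtual" target: correspondence-weighted mean
        V = (C @ Y) / (C.sum(axis=1, keepdims=True) + 1e-12)
        # least-squares affine update: minimize ||X A^T + t - V||^2
        Xh = np.hstack([X, np.ones((N, 1))])
        W, *_ = np.linalg.lstsq(Xh, V, rcond=None)
        A, t = W[:D].T, W[D]
        T *= cool                            # anneal: harden the correspondence
    return A, t
```

Early in the schedule the correspondence is deliberately fuzzy, so the transformation is fit to a smoothed version of the problem; as T falls, the correspondence sharpens toward one-to-one, which is how the method avoids committing to bad matches too soon.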

Learning in Compositional Hierarchies: Inducing the Structure of Objects from Data, in Advances in Neural Information Processing Systems 6, 1994
Cited by 6 (0 self)
Abstract:
I propose a learning algorithm for hierarchical models for object recognition. The model architecture is a compositional hierarchy that represents part-whole relationships: parts are described in the local context of substructures of the object. The focus of this report is learning hierarchical models from data, i.e. inducing the structure of model prototypes from observed exemplars of an object. At each node in the hierarchy, a probability distribution governing its parameters must be learned. The connections between nodes reflect the structure of the object. The formulation of substructures is encouraged such that their parts become conditionally independent. The resulting model can be interpreted as a Bayesian belief network and is in many respects similar to the stochastic visual grammar described by Mjolsness.
1 INTRODUCTION. Model-based object recognition solves the problem of invariant recognition by relying on stored prototypes at unit scale positioned at the ori...