Results 1–10 of 48
The Use of Active Shape Models For Locating Structures in Medical Images
, 1994
Cited by 292 (23 self)
This paper describes a technique for building compact models of the shape and appearance of flexible objects (such as organs) seen in 2D images. The models are derived from the statistics of sets of labelled images of examples of the objects.
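The statistical shape models the paper describes can be sketched as a point-distribution model: landmark coordinates from labelled examples are pooled, a mean shape and principal modes of variation are extracted by PCA, and new plausible shapes are generated as the mean plus a weighted sum of modes. The synthetic "ellipse" training shapes below are illustrative assumptions, not data from the paper.

```python
# Minimal point-distribution (statistical shape) model sketch:
# PCA over labelled landmark sets, then shape synthesis as mean + P b.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_shapes = 16, 50
t = np.linspace(0, 2 * np.pi, n_points, endpoint=False)

# Training set: ellipses with varying aspect ratio (stand-in for labelled organs).
shapes = np.stack([
    np.concatenate([np.cos(t), (1.0 + 0.3 * rng.standard_normal()) * np.sin(t)])
    for _ in range(n_shapes)
])  # each row is (x_1..x_n, y_1..y_n)

mean_shape = shapes.mean(axis=0)
cov = np.cov(shapes - mean_shape, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # largest variance first
P = eigvecs[:, order[:2]]                  # keep 2 modes of variation
b = np.array([1.0, 0.0])                   # mode weights
new_shape = mean_shape + P @ b             # a plausible new shape
print(new_shape.shape)
```

Varying `b` within limits learned from the training eigenvalues is what keeps generated shapes "legal", which is the compactness the abstract refers to.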
GTM: The generative topographic mapping
 Neural Computation
, 1998
Cited by 280 (5 self)
Latent variable models represent the probability density of data in a space of several dimensions in terms of a smaller number of latent, or hidden, variables. A familiar example is factor analysis, which is based on a linear transformation between the latent space and the data space. In this paper we introduce a form of nonlinear latent variable model called the Generative Topographic Mapping, for which the parameters of the model can be determined using the EM algorithm. GTM provides a principled alternative to the widely used Self-Organizing Map (SOM) of Kohonen (1982), and overcomes most of the significant limitations of the SOM. We demonstrate the performance of the GTM algorithm on a toy problem and on simulated data from flow diagnostics for a multiphase oil pipeline. Copyright © MIT Press (1998).
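The EM fitting the abstract mentions can be sketched on a toy problem: a grid of latent points is mapped through a fixed RBF basis into data space, and EM alternates between computing responsibilities (E-step) and a linear solve for the mapping weights (M-step). The grid sizes, fixed noise precision, and synthetic curve data are illustrative assumptions; the full GTM also re-estimates the noise parameter.

```python
# Toy GTM-style EM: latent grid -> RBF basis -> data space, with
# responsibility-weighted least squares for the mapping weights.
import numpy as np

rng = np.random.default_rng(1)
X = np.column_stack([np.linspace(-1, 1, 100),
                     np.sin(np.pi * np.linspace(-1, 1, 100))])
X += 0.05 * rng.standard_normal(X.shape)            # noisy 1-D curve in 2-D

Z = np.linspace(-1, 1, 20)[:, None]                 # latent grid (K x 1)
centres = np.linspace(-1, 1, 5)[:, None]            # RBF centres (M x 1)
Phi = np.exp(-((Z - centres.T) ** 2) / (2 * 0.3 ** 2))  # K x M basis
W = 0.1 * rng.standard_normal((Phi.shape[1], X.shape[1]))  # M x D weights
beta = 10.0                                          # inverse noise variance (held fixed here)

for _ in range(20):
    Y = Phi @ W                                      # K x D images of latent grid
    d2 = ((X[None] - Y[:, None]) ** 2).sum(-1)       # K x N squared distances
    R = np.exp(-0.5 * beta * d2)
    R /= R.sum(axis=0, keepdims=True)                # E-step: responsibilities
    G = np.diag(R.sum(axis=1))
    W = np.linalg.solve(Phi.T @ G @ Phi + 1e-6 * np.eye(Phi.shape[1]),
                        Phi.T @ (R @ X))             # M-step for W

print(R.sum(axis=0)[:3])                             # each column sums to 1
```

Because the latent points live on an ordered grid, neighbouring latent points map to nearby data-space points, which is what gives GTM its SOM-like topographic behaviour.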
A New Point Matching Algorithm for Non-Rigid Registration
, 2002
Cited by 237 (2 self)
Feature-based methods for non-rigid registration frequently encounter the correspondence problem. Regardless of whether points, lines, curves or surface parameterizations are used, feature-based non-rigid matching requires us to automatically solve for correspondences between two sets of features. In addition, there could be many features in either set that have no counterparts in the other. This outlier rejection problem further complicates an already difficult correspondence problem. We formulate feature-based non-rigid registration as a non-rigid point matching problem. After a careful review of the problem and an in-depth examination of two types of methods previously designed for rigid robust point matching (RPM), we propose a new general framework for non-rigid point matching. We consider it a general framework because it does not depend on any particular form of spatial mapping. We have also developed an algorithm, the TPS-RPM algorithm, with the thin-plate spline (TPS) as the parameterization of the non-rigid spatial mapping and the softassign for the correspondence. The performance of the TPS-RPM algorithm is demonstrated and validated in a series of carefully designed synthetic experiments. In each of these experiments, an empirical comparison with the popular iterated closest point (ICP) algorithm is also provided. Finally, we apply the algorithm to the problem of non-rigid registration of cortical anatomical structures, which is required in brain mapping. While these results are somewhat preliminary, they clearly demonstrate the applicability of our approach to real-world tasks involving feature-based non-rigid registration.
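The thin-plate-spline half of the TPS-RPM pairing can be sketched in isolation: given a fixed set of point correspondences, the TPS warp is the solution of a small linear system built from the radial kernel U(r) = r² log r plus an affine part. The point sets below are made up, and the softassign correspondence step is omitted entirely.

```python
# Fit a 2-D thin-plate spline warp from known correspondences.
import numpy as np

def tps_fit(src, dst, reg=1e-8):
    """Fit a TPS mapping src -> dst; returns a warp function."""
    n = len(src)
    d2 = ((src[:, None] - src[None]) ** 2).sum(-1)
    K = 0.5 * d2 * np.log(d2 + 1e-12)                # U(r) = r^2 log r
    P = np.hstack([np.ones((n, 1)), src])            # affine part
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K + reg * np.eye(n)                  # reg > 0 smooths the warp
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.vstack([dst, np.zeros((3, 2))])
    params = np.linalg.solve(A, b)                   # (n+3) x 2 coefficients
    w, a = params[:n], params[n:]

    def warp(pts):
        q2 = ((pts[:, None] - src[None]) ** 2).sum(-1)
        U = 0.5 * q2 * np.log(q2 + 1e-12)
        return U @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a
    return warp

src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
dst = src + np.array([0.1, 0.2])                     # a pure translation
warp = tps_fit(src, dst)
print(np.allclose(warp(src), dst, atol=1e-6))
```

In the full algorithm this solve alternates with a softassign update of the correspondences under an annealing schedule; here the correspondences are simply given.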
Independent Factor Analysis
 Neural Computation
, 1999
Cited by 221 (9 self)
We introduce the independent factor analysis (IFA) method for recovering independent hidden sources from their observed mixtures. IFA generalizes and unifies ordinary factor analysis (FA), principal component analysis (PCA), and independent component analysis (ICA), and can handle not only square noiseless mixing, but also the general case where the number of mixtures differs from the number of sources and the data are noisy. IFA is a two-step procedure. In the first step, the source densities, mixing matrix and noise covariance are estimated from the observed data by maximum likelihood. For this purpose we present an expectation-maximization (EM) algorithm, which performs unsupervised learning of an associated probabilistic model of the mixing situation. Each source in our model is described by a mixture of Gaussians, thus all the probabilistic calculations can be performed analytically. In the second step, the sources are reconstructed from the observed data by an optimal nonlinear ...
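The generative model underlying IFA can be sketched as follows: each hidden source is drawn from a mixture of Gaussians (giving a non-Gaussian marginal), mixed linearly by a matrix, and observed with additive Gaussian noise. The dimensions, mixture settings, and names below are illustrative assumptions; the EM estimation itself is not shown.

```python
# Sample from an IFA-style generative model: non-Gaussian sources,
# rectangular mixing matrix, additive observation noise.
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_sources, n_mixtures = 1000, 2, 3

# Each source: a 2-component mixture of Gaussians.
comp = rng.integers(0, 2, size=(n_samples, n_sources))
means = np.array([-2.0, 2.0])
S = means[comp] + 0.5 * rng.standard_normal((n_samples, n_sources))

H = rng.standard_normal((n_mixtures, n_sources))     # mixing matrix (rectangular)
noise = 0.1 * rng.standard_normal((n_samples, n_mixtures))
X = S @ H.T + noise                                  # observed mixtures

print(X.shape)                                       # more mixtures than sources
```

Because every density in this model is a (mixture of) Gaussian(s), the E-step posteriors over sources and mixture labels are available in closed form, which is what makes the analytic EM the abstract describes possible.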
Transformation Invariance in Pattern Recognition: Tangent Distance and Tangent Propagation
 Lecture Notes in Computer Science
, 1998
Cited by 126 (2 self)
In pattern recognition, statistical modeling, or regression, the amount of data is a critical factor affecting the performance. If the amount of data and computational resources are unlimited, even trivial algorithms will converge to the optimal solution. However, in the practical case, given limited data and other resources, satisfactory performance requires sophisticated methods to regularize the problem by introducing a priori knowledge. Invariance of the output with respect to certain transformations of the input is a typical example of such a priori knowledge. In this chapter, we introduce the concept of tangent vectors, which compactly represent the essence of these transformation invariances, and two classes of algorithms, "tangent distance" and "tangent propagation", which make use of these invariances to improve performance.
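A one-sided tangent distance can be sketched in a few lines: the tangent vector of horizontal translation is approximated by finite differences, and the distance from one pattern to another is minimized over small shifts along that tangent via a least-squares projection. The tiny 1-D "images" here are illustrative stand-ins for the pixel patterns in the chapter.

```python
# One-sided tangent distance under a translation invariance.
import numpy as np

x = np.exp(-0.5 * ((np.arange(32) - 15.0) / 3.0) ** 2)   # pattern
y = np.exp(-0.5 * ((np.arange(32) - 15.4) / 3.0) ** 2)   # slightly shifted copy

t = np.gradient(x)                 # tangent vector of translation (finite diff.)

# Minimize ||x + a*t - y||^2 over a  ->  a = t.(y - x) / (t.t)
a = t @ (y - x) / (t @ t)
tangent_dist = np.linalg.norm(x + a * t - y)
euclid_dist = np.linalg.norm(x - y)
print(tangent_dist < euclid_dist)
```

Allowing the sliver of translation along the tangent makes the shifted copy much closer than raw Euclidean distance suggests; the full method uses several tangent vectors (rotation, scaling, thickness, etc.) and a two-sided projection.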
Determining the Similarity of Deformable Shapes
 Vision Research
, 1995
Cited by 105 (7 self)
We study how to measure the degree of similarity between two image contours. We propose an approach for comparing contours that takes into account deformations in object shape, the articulation of parts, and variations in the shape and size of portions of objects. Our method uses dynamic programming to compute the minimum cost of bringing one shape into the other via local deformations. Using this as a starting point, we investigate the properties that such a cost function should have to model human performance and to perform usefully in a computer vision system. We suggest novel conditions on this cost function that help capture the part-based nature of objects without requiring any explicit decomposition of shapes into their parts. We then suggest several possible cost functions based on different physical models of contours, and describe experiments with these costs.
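The dynamic-programming core of such an approach can be sketched as an edit-distance-style recurrence: contours are represented as sequences of turning angles, and the minimum cost of deforming one into the other is the cheapest sequence of bend (match) and insert/delete operations. The specific costs and toy contours below are assumptions, not the paper's cost functions.

```python
# Edit-distance-style DP over turning-angle sequences of two contours.
import numpy as np

def contour_dp(a, b, gap=0.5):
    """Minimum deformation cost between angle sequences a and b."""
    n, m = len(a), len(b)
    D = np.zeros((n + 1, m + 1))
    D[1:, 0] = gap * np.arange(1, n + 1)      # deleting segments of a
    D[0, 1:] = gap * np.arange(1, m + 1)      # inserting segments of b
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            bend = abs(a[i - 1] - b[j - 1])   # cost of bending one segment
            D[i, j] = min(D[i - 1, j - 1] + bend,
                          D[i - 1, j] + gap,
                          D[i, j - 1] + gap)
    return D[n, m]

square = np.array([0.0, np.pi/2, 0.0, np.pi/2, 0.0, np.pi/2, 0.0, np.pi/2])
wobbly = square + 0.1                          # a slightly deformed square
line = np.zeros(8)                             # a straight contour
print(contour_dp(square, wobbly) < contour_dp(square, line))
```

The insert/delete moves are what allow articulation and part growth without decomposing the shape into explicit parts, which is the behaviour the abstract's conditions on the cost function aim to capture.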
New Algorithms for 2D and 3D Point Matching: Pose Estimation and Correspondence
Cited by 85 (19 self)
A fundamental open problem in computer vision, determining pose and correspondence between two sets of points in space, is solved with a novel, fast [O(nm)], robust and easily implementable algorithm. The technique works on noisy 2D or 3D point sets that may be of unequal sizes and may differ by non-rigid transformations. Using a combination of optimization techniques such as deterministic annealing and the softassign, which have recently emerged out of the recurrent neural network/statistical physics framework, analog objective functions describing the problems are minimized. Over thirty thousand experiments, on randomly generated point sets with varying amounts of noise and missing and spurious points, and on handwritten character sets, demonstrate the robustness of the algorithm. Keywords: point matching, pose estimation, correspondence, neural networks, optimization, softassign, deterministic annealing, affine.
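The softassign/deterministic-annealing combination can be sketched as follows: a match matrix is built from pairwise distances, made approximately doubly stochastic by alternating row and column normalization (Sinkhorn iterations), and sharpened as the annealing temperature drops. The point sets and schedule are illustrative, and the pose-update step of the full algorithm is omitted.

```python
# Softassign correspondence with a simple annealing schedule.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 2))
perm = np.array([2, 0, 4, 1, 3])
B = A[perm] + 0.01 * rng.standard_normal((5, 2))      # permuted, noisy copy

d2 = ((A[:, None] - B[None]) ** 2).sum(-1)            # pairwise squared distances
for beta in [1.0, 5.0, 25.0, 125.0]:                  # annealing: sharpen matches
    M = np.exp(-beta * d2)
    for _ in range(50):                               # Sinkhorn normalization
        M /= M.sum(axis=1, keepdims=True)             # rows sum to 1
        M /= M.sum(axis=0, keepdims=True)             # columns sum to 1

recovered = M.argmax(axis=1)                          # hard correspondence
print(np.array_equal(perm[recovered], np.arange(5)))
```

In the paper's setting an extra "slack" row and column absorb missing and spurious points, and the match matrix alternates with a pose (e.g. affine) update rather than being computed once from fixed distances.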
Deformable contours: Modeling and extraction
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1995
Cited by 81 (2 self)
This paper considers the problem of modeling and extracting arbitrary deformable contours from noisy images. We propose a global contour model based on a stable and regenerative shape matrix, which is invariant and unique under rigid motions. Combined with a Markov random field to model local deformations, this yields a prior distribution that exerts influence over a global model while allowing for deformations. We then cast the problem of extraction as posterior estimation and show its equivalence to energy minimization of a generalized active contour model. We discuss pertinent issues in shape training, energy minimization, line search strategies, minimax regularization and initialization by the generalized Hough transform. Finally, we present experimental results and compare its performance to rigid template matching.
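The rigid-invariance property such a model relies on can be sketched with the simplest possible stand-in: a matrix of pairwise distances between contour points is unchanged by rotation and translation of the contour. The paper's actual shape matrix and MRF deformation prior are more elaborate; this only illustrates the invariance idea.

```python
# Pairwise-distance "shape matrix" is invariant under rigid motions.
import numpy as np

def shape_matrix(pts):
    return np.linalg.norm(pts[:, None] - pts[None], axis=-1)

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
contour = np.array([[0., 0.], [2., 0.], [2., 1.], [0., 1.]])
moved = contour @ R.T + np.array([5.0, -3.0])        # rotate + translate

print(np.allclose(shape_matrix(contour), shape_matrix(moved)))
```

Because the prior depends on the contour only through such an invariant description, the MAP objective penalizes shape deformation but not pose, which is why extraction reduces to energy minimization of an active-contour form.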
Classification with Non-Metric Distances: Image Retrieval and Class Representation
, 2000
Cited by 71 (0 self)
One of the key problems in appearance-based vision is understanding how to use a set of labeled images to classify new images. Classification systems that can model human performance, or that use robust image matching methods, often make use of similarity judgments that are non-metric; but when the triangle inequality is not obeyed, most existing pattern recognition techniques are not applicable. We note that exemplar-based (or nearest-neighbor) methods can be applied naturally when using a wide class of non-metric similarity functions. The key issue, however, is to find methods for choosing good representatives of a class that accurately characterize it. We show that existing condensing techniques for finding class representatives are ill-suited to deal with non-metric data spaces. We then focus on developing techniques for solving this problem, emphasizing two points: First, we show that the distance between two images is not a good measure of how well one image can represent ...
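The non-metric point can be made concrete with a small example: a robust "median of coordinate differences" dissimilarity (one representative of the wide class mentioned) can violate the triangle inequality, yet nearest-neighbor classification against exemplars applies unchanged. The vectors below are constructed specifically to exhibit the violation.

```python
# A robust, non-metric dissimilarity and nearest-neighbor use of it.
import numpy as np

def median_dist(x, y):
    return np.median(np.abs(x - y))     # robust to outlier coordinates, non-metric

a = np.array([0.0, 0.0, 0.0])
b = np.array([0.0, 0.0, 5.0])
c = np.array([0.0, 5.0, 5.0])

# Triangle inequality fails: d(a,c) > d(a,b) + d(b,c).
print(median_dist(a, c), median_dist(a, b) + median_dist(b, c))

# Nearest-neighbor classification still works with the non-metric dissimilarity.
exemplars = {"near": np.zeros(3), "far": np.full(3, 10.0)}
query = np.array([0.5, 0.2, 8.0])
label = min(exemplars, key=lambda k: median_dist(query, exemplars[k]))
print(label)
```

Robustness (ignoring the worst-matching coordinates) is exactly what breaks the triangle inequality here, which is why the paper must rethink how class representatives are chosen rather than rely on metric-based condensing.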