Results 1–10 of 110
A Search Engine for 3D Models
ACM Transactions on Graphics, 2003
Cited by 226 (21 self)
As the number of 3D models available on the Web grows, there is an increasing need for a search engine to help people find them. Unfortunately, traditional text-based search techniques are not always effective for 3D data. In this paper, we investigate new shape-based search methods. The key challenges are to develop query methods simple enough for novice users and matching algorithms robust enough to work for arbitrary polygonal models. We present a web-based search engine system that supports queries based on 3D sketches, 2D sketches, 3D …
Shape Distributions
ACM Transactions on Graphics, 2002
Cited by 189 (0 self)
In this paper, we propose and analyze a method for computing shape signatures for arbitrary (possibly degenerate) 3D polygonal models. The key idea is to represent the signature of an object as a shape distribution sampled from a shape function measuring global geometric properties of the object. The primary motivation for this approach is to reduce the shape matching problem to the comparison of probability distributions, which is simpler than traditional shape matching methods that require pose registration, feature correspondence, or model fitting.
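The best-known shape function from this line of work, D2 (the distance between two random points on the surface), can be sketched as follows. This is a toy illustration assuming numpy; the tetrahedron mesh, bin count, max-distance normalization, and L1 histogram comparison are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def sample_surface(vertices, faces, n, rng):
    """Sample n points uniformly on a triangle mesh surface.
    Triangles are chosen with probability proportional to their area."""
    tri = vertices[faces]                                  # (F, 3, 3)
    cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    idx = rng.choice(len(faces), size=n, p=areas / areas.sum())
    # Uniform barycentric coordinates via the square-root trick.
    r1 = np.sqrt(rng.random(n))[:, None]
    r2 = rng.random(n)[:, None]
    a, b, c = tri[idx, 0], tri[idx, 1], tri[idx, 2]
    return (1 - r1) * a + r1 * (1 - r2) * b + r1 * r2 * c

def d2_signature(vertices, faces, n_pairs=10000, bins=32, seed=0):
    """D2 shape distribution: a normalized histogram of distances
    between random pairs of surface points."""
    rng = np.random.default_rng(seed)
    p = sample_surface(vertices, faces, n_pairs, rng)
    q = sample_surface(vertices, faces, n_pairs, rng)
    d = np.linalg.norm(p - q, axis=1)
    hist, _ = np.histogram(d / d.max(), bins=bins, range=(0, 1))
    return hist / hist.sum()

# Toy example: a unit tetrahedron compared against a resampling of itself.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
sig_a = d2_signature(verts, faces, seed=1)
sig_b = d2_signature(verts, faces, seed=2)
dissimilarity = np.abs(sig_a - sig_b).sum()  # small: same shape, two samplings
```

Because the signature is just a probability distribution, comparing two models reduces to comparing histograms, which is the simplification the abstract emphasizes.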
Nonlinear spatial normalization using basis functions
Human Brain Mapping, 1999
Cited by 148 (16 self)
We describe a comprehensive framework for performing rapid and automatic non-label-based nonlinear spatial normalizations. The approach adopted minimizes the residual squared difference between an image and a template of the same modality. In order to reduce the number of parameters to be fitted, the nonlinear warps are described by a linear combination of low spatial frequency basis functions. The objective is to determine the optimum coefficients for each of the bases by minimizing the sum of squared differences between the image and template, while simultaneously maximizing the smoothness of the transformation using a maximum a posteriori (MAP) approach. Most MAP approaches assume that the variance associated with each voxel is already known and that there is no covariance between neighboring voxels. The approach described here attempts to estimate this variance from the data, and also corrects for the correlations between neighboring voxels. This makes the same approach suitable for the spatial normalization of both high-quality magnetic resonance images and low-resolution, noisy positron emission tomography images. A fast algorithm has been developed that utilizes Taylor’s theorem and the separable nature of the basis functions, meaning that most of the nonlinear spatial variability between images can be automatically corrected within a few minutes. Hum. Brain Mapping 7:254–266, 1999.
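The core parameterization — a nonlinear warp expressed as a linear combination of low spatial frequency basis functions — can be shown in a minimal 1-D sketch, assuming numpy. Fitting the coefficients here is plain least squares against a known displacement field; the paper instead minimizes the image-vs-template SSD under a MAP smoothness prior over 3-D images.

```python
import numpy as np

def dct_basis(n_points, n_basis):
    """Low spatial frequency cosine (DCT-style) basis functions on [0, 1]."""
    x = (np.arange(n_points) + 0.5) / n_points
    return np.stack([np.cos(np.pi * k * x) for k in range(n_basis)], axis=1)

# A smooth nonlinear 1-D displacement field to be represented.
n = 200
x = (np.arange(n) + 0.5) / n
true_disp = 0.05 * np.sin(2 * np.pi * x)

# Express the displacement as a linear combination of a few basis
# functions: disp ≈ B @ c.  Only n_basis coefficients are fitted,
# instead of one free displacement per sample point.
B = dct_basis(n, n_basis=8)
c, *_ = np.linalg.lstsq(B, true_disp, rcond=None)
recon = B @ c

err = float(np.max(np.abs(recon - true_disp)))  # small: 8 coefficients suffice
```

The dimensionality reduction is the point: a dense warp over 200 samples is captured by 8 numbers, which is what makes the optimization in the paper fast and well-conditioned.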
Computable elastic distances between shapes
SIAM J. of Applied Math, 1998
Cited by 118 (19 self)
We define distances between geometric curves by the square root of the minimal energy required to transform one curve into the other. The energy is formally defined from a left-invariant Riemannian distance on an infinite-dimensional group acting on the curves, which can be explicitly computed. The obtained distance boils down to a variational problem for which an optimal matching between the curves has to be computed. An analysis of the distance when the curves are polygonal leads to a numerical procedure for the solution of the variational problem, which can be efficiently implemented, as illustrated by experiments.
Unified segmentation
2005
Cited by 111 (9 self)
A probabilistic framework is presented that enables image registration, tissue classification, and bias correction to be combined within the same generative model. A derivation of a log-likelihood objective function for the unified model is provided. The model is based on a mixture of Gaussians and is extended to incorporate a smooth intensity variation and nonlinear registration with tissue probability maps. A strategy for optimising the model parameters is described, along with the requisite partial derivatives of the objective function.
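The mixture-of-Gaussians component can be sketched in isolation as a 1-D EM tissue classifier, assuming numpy. This omits the registration, bias correction, and tissue probability maps that the unified model adds; the synthetic intensities and class count are illustrative.

```python
import numpy as np

def gmm_em(x, n_classes=2, n_iter=50):
    """Fit a 1-D Gaussian mixture to intensities x with EM and return
    per-sample class responsibilities (soft tissue labels) and means."""
    mu = np.percentile(x, np.linspace(25, 75, n_classes))  # spread-out init
    var = np.full(n_classes, x.var())
    w = np.full(n_classes, 1.0 / n_classes)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] ∝ w_k · N(x_i | mu_k, var_k)
        d = (x[:, None] - mu[None, :]) ** 2
        r = w * np.exp(-0.5 * d / var) / np.sqrt(2 * np.pi * var)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk
    return r, mu

# Synthetic "two tissue" intensities (e.g., gray vs. white matter).
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(20, 2, 500), rng.normal(60, 3, 500)])
r, mu = gmm_em(x)
labels = r.argmax(axis=1)  # hard classification from soft responsibilities
```

The soft responsibilities `r` are what a unified model couples to the other components: the same posterior weights that classify each voxel also drive the registration and bias updates.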
Determining the Similarity of Deformable Shapes
Vision Research, 1995
Cited by 105 (7 self)
We study how to measure the degree of similarity between two image contours. We propose an approach for comparing contours that takes into account deformations in object shape, the articulation of parts, and variations in the shape and size of portions of objects. Our method uses dynamic programming to compute the minimum cost of bringing one shape into the other via local deformations. Using this as a starting point, we investigate the properties that such a cost function should have to model human performance and to perform usefully in a computer vision system. We suggest novel conditions on this cost function that help capture the part-based nature of objects without requiring any explicit decomposition of shapes into their parts. We then suggest several possible cost functions based on different physical models of contours, and describe experiments with these costs.
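An edit-distance-style dynamic program over per-vertex contour features gives the flavor of the minimum-cost deformation computation. The absolute-difference match cost and fixed skip penalty below are placeholders for the paper's physically motivated cost functions, and the turning-angle features are an illustrative choice.

```python
import math

def dp_match_cost(a, b, skip_penalty=1.0):
    """Minimum cost of aligning two feature sequences (here: per-vertex
    turning angles of a polygonal contour) by dynamic programming.
    Matching two elements costs their absolute difference; skipping an
    element (a local insertion/deletion of contour detail) costs
    skip_penalty."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i < n and j < m:  # match a[i] with b[j]
                cost[i + 1][j + 1] = min(cost[i + 1][j + 1],
                                         cost[i][j] + abs(a[i] - b[j]))
            if i < n:            # skip a[i]
                cost[i + 1][j] = min(cost[i + 1][j], cost[i][j] + skip_penalty)
            if j < m:            # skip b[j]
                cost[i][j + 1] = min(cost[i][j + 1], cost[i][j] + skip_penalty)
    return cost[n][m]

square = [math.pi / 2] * 4           # turning angles of a square
near_square = [math.pi / 2 + 0.05] * 4
triangle = [2 * math.pi / 3] * 3
```

A slightly deformed square should cost far less to reach than a triangle, which is the behavior a similarity measure of this kind must exhibit.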
Variational Problems on Flows of Diffeomorphisms for Image Matching
1998
Cited by 104 (17 self)
This paper studies a variational formulation of the image matching problem. We consider a scenario in which a canonical representative image T is to be carried via a smooth change of variable into an image which is intended to provide a good fit to the observed data. The images are all defined on a compact set $G \subseteq \mathbb{R}^3$. The changes of variable are determined as solutions of the nonlinear Eulerian transport equation

$$\frac{d\eta(s; x)}{ds} = v(\eta(s; x), s), \qquad \eta(\tau; x) = x, \tag{0.1}$$

with the location $\eta(0; x)$ in the canonical image carried to the location $x$ in the deformed image. The variational problem then takes the form

$$\arg\min_{v} \; \|v\|^2 + \int_G |T \circ \eta(0; x) - D(x)|^2 \, dx, \tag{0.2}$$

where $\|v\|$ is an appropriate norm on the velocity field $v(\cdot,\cdot)$, and the second term attempts to enforce fidelity to the data. In this paper we derive conditions under which the variational problem described above is well posed. The key issue is the choice of the norm. Conditions are formulated u…
Shape-Based Retrieval: A Case Study with Trademark Image Databases
Pattern Recognition, 1998
Cited by 101 (0 self)
Retrieval efficiency and accuracy are two important issues in designing a content-based database retrieval system. We propose a method for trademark image database retrieval based on object shape information that would supplement traditional text-based retrieval systems. This system achieves both the desired efficiency and accuracy using a two-stage hierarchy: in the first stage, simple and easily computable shape features are used to quickly browse through the database to generate a moderate number of plausible retrievals when a query is presented; in the second stage, the candidates from the first stage are screened using a deformable template matching process to discard spurious matches. We have tested the algorithm using hand-drawn queries on a trademark database containing 1,100 images. Each retrieval takes a reasonable amount of computation time (≈45 seconds on a Sun Sparc 20 workstation). The topmost image retrieved by the system agrees with that obtained by human subjects …
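The two-stage hierarchy reduces to a generic filter-then-verify pipeline, sketched below with stand-in distance functions (the paper's actual features are simple shape features followed by deformable template matching; the toy numeric "database" is purely illustrative).

```python
def two_stage_retrieve(query, database, cheap, expensive, k=10):
    """Two-stage retrieval: rank the whole database by a fast, coarse
    distance, keep only the top-k candidates, then re-rank that short
    list with an expensive, accurate matcher.  `cheap` and `expensive`
    are distance functions taking (query, item)."""
    coarse = sorted(database, key=lambda item: cheap(query, item))[:k]
    return sorted(coarse, key=lambda item: expensive(query, item))

# Toy usage: items are numbers; the coarse distance is a rounded
# (quantized) distance, the fine distance is exact.
db = [3.2, 7.9, 1.1, 3.05, 8.4, 2.9]
result = two_stage_retrieve(
    3.0, db,
    cheap=lambda q, item: round(abs(q - item)),
    expensive=lambda q, item: abs(q - item),
    k=3)
```

The efficiency argument is that the expensive matcher runs on only `k` candidates instead of the full database, while accuracy is preserved as long as the coarse stage rarely discards the true match.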
Diffeomorphisms Groups and Pattern Matching in Image Analysis
1995
Cited by 90 (9 self)
In a previous paper, the author proposes to see the deformations of a common pattern as the action of an infinite-dimensional group. We show in this paper that this approach can be applied numerically for pattern matching in the analysis of digital images. Using Lie group ideas, we construct a distance between deformations defined through a metric giving the cost of infinitesimal deformations. Then we propose a numerical scheme to solve a variational problem involving this distance, leading to a suboptimal pattern matching.

Contents: 1. Introduction; 2. Algorithmic side: gradient descent on AB; 3. Numerical scheme; 4. Numerical results; 5. Conclusion; References.

Introduction: In [6, 5], we proposed an infinite-dimensional group approach for physics-based models in pattern recognition. Let us recall the outlines of this approach in the particular case of image analysis we are interested in. Consider that the set of gray …
Learning from one example through shared densities on transforms
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2000
Cited by 90 (7 self)
We define a process called congealing in which elements of a dataset (images) are brought into correspondence with each other jointly, producing a data-defined model. It is based upon minimizing the summed component-wise (pixel-wise) entropies over a continuous set of transforms on the data. One of the by-products of this minimization is a set of transforms, one associated with each original training sample. We then demonstrate a procedure for effectively bringing test data into correspondence with the data-defined model produced in the congealing process. Subsequently, we develop a probability density over the set of transforms that arose from the congealing process. We suggest that this density over transforms may be shared by many classes, and demonstrate how using this density as “prior knowledge” can be used to develop a classifier based on only a single training example for each class.
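A toy version of congealing on 1-D signals, assuming numpy: coordinate descent over integer shifts, each signal repeatedly moved to the shift that most lowers the stack's summed per-position dispersion. Variance is used here as a simpler stand-in for the paper's pixel-wise entropy objective, and integer shifts stand in for its continuous set of transforms.

```python
import numpy as np

def stack_dispersion(stack):
    """Summed per-position variance across a stack of signals; plays the
    role of the summed pixel-wise entropy in the paper."""
    return float(stack.var(axis=0).sum())

def congeal(signals, max_shift=3, n_passes=4):
    """Greedy congealing over integer circular shifts: for each signal in
    turn, pick the shift minimizing the stack's dispersion.  Returns one
    shift per signal — the per-sample transforms that the paper turns
    into a shared density over transforms."""
    shifts = [0] * len(signals)
    for _ in range(n_passes):
        for i in range(len(signals)):
            def score(s):
                trial = shifts[:i] + [s] + shifts[i + 1:]
                return stack_dispersion(np.stack(
                    [np.roll(x, sh) for x, sh in zip(signals, trial)]))
            shifts[i] = min(range(-max_shift, max_shift + 1), key=score)
    return shifts

# Three copies of the same ramp-shaped bump at different offsets.
base = np.zeros(16)
base[6:9] = [1.0, 2.0, 3.0]
signals = [np.roll(base, 0), np.roll(base, 2), np.roll(base, -1)]
shifts = congeal(signals)
aligned = np.stack([np.roll(x, sh) for x, sh in zip(signals, shifts)])
```

After congealing, the stack is in joint correspondence (dispersion drops to zero here), and the recovered `shifts` are exactly the per-sample transforms whose distribution the paper proposes to share across classes.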