Results 11–20 of 590
Fast and Globally Convergent Pose Estimation From Video Images
1998
Cited by 152 (6 self)
Abstract:
Determining the rigid transformation relating 2D images to known 3D geometry is a classical problem in photogrammetry and computer vision. Heretofore, the best methods for solving the problem have relied on iterative optimization methods which cannot be proven to converge and/or which do not effectively account for the orthonormal structure of rotation matrices. We show that the pose estimation problem can be formulated as that of minimizing an error metric based on collinearity in object (as opposed to image) space. Using object space collinearity error, we derive an iterative algorithm which directly computes orthogonal rotation matrices and which is globally convergent. Experimentally, we show that the method is computationally efficient, that it is no less accurate than the best currently employed optimization methods, and that it outperforms all tested methods in robustness to outliers. Chien-Ping Lu, Silicon Graphics Inc., cplu@engr.sgi.com; Greg Hager, Department of Computer...
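The iteration this abstract describes, alternating a closed-form translation update with an orthogonal-Procrustes rotation update on object-space projections, can be sketched roughly as follows. This is a from-scratch reconstruction for orientation only, not the authors' code; the function name, the fixed iteration count, and the normalized-image-coordinate convention are assumptions.

```python
import numpy as np

def object_space_pose(p, v, iters=100):
    """Sketch of an object-space-collinearity pose iteration.
    p: (n,3) model points; v: (n,2) normalized image points (x/z, y/z)."""
    n = len(p)
    # Line-of-sight projection matrices V_i = u u^T / (u^T u), u = (x, y, 1)
    u = np.hstack([v, np.ones((n, 1))])
    V = np.einsum('ni,nj->nij', u, u) / np.einsum('ni,ni->n', u, u)[:, None, None]
    Tfac = np.linalg.inv(np.eye(3) - V.mean(axis=0)) / n  # for optimal t given R
    pc = p - p.mean(axis=0)
    R = np.eye(3)
    for _ in range(iters):
        # optimal translation for the current rotation
        t = Tfac @ ((V - np.eye(3)) @ (p @ R.T)[..., None]).sum(axis=0).ravel()
        # project transformed points onto their lines of sight
        q = (V @ (p @ R.T + t)[..., None]).squeeze(-1)
        # absolute orientation (Procrustes) between model and projected points
        qc = q - q.mean(axis=0)
        U, _, Wt = np.linalg.svd(qc.T @ pc)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Wt))])
        R = U @ D @ Wt  # exactly orthonormal by construction
    t = Tfac @ ((V - np.eye(3)) @ (p @ R.T)[..., None]).sum(axis=0).ravel()
    return R, t
```

The SVD step is why the iterate is always an exactly orthonormal rotation, in contrast to parameterizations that only approximate the rotation constraint.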
Fast Multiscale Image Segmentation
Cited by 135 (13 self)
Abstract:
We introduce a fast, multiscale algorithm for image segmentation. Our algorithm uses modern numeric techniques to find an approximate solution to normalized cut measures in time that is linear in the size of the image with only a few dozen operations per pixel. In just one pass the algorithm provides a complete hierarchical decomposition of the image into segments. The algorithm detects the segments by applying a process of recursive coarsening in which the same minimization problem is represented with fewer and fewer variables producing an irregular pyramid. During this coarsening process we may compute additional internal statistics of the emerging segments and use these statistics to facilitate the segmentation process. Once the pyramid is completed it is scanned from the top down to associate pixels close to the boundaries of segments with the appropriate segment. The algorithm is inspired by algebraic multigrid (AMG) solvers of minimization problems of heat or electric networks. We demonstrate the algorithm by applying it to real images.
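The coarsening step can be illustrated with a deliberately tiny toy: a 1-D "image" with dense pairwise affinities, one AMG-style seed-selection pass, and segment labels read off the seeds. The real algorithm uses sparse couplings, interpolation weights, and many recursive levels to reach linear time; everything below (names, the seed threshold `beta`, full pairwise affinities) is an invented simplification.

```python
import math

def coarsen(weights, beta=0.2):
    """One AMG-style coarsening pass: greedily choose seed nodes, then
    attach every node to its most strongly coupled seed."""
    n = len(weights)
    seeds = []
    for i in range(n):
        coupling_to_seeds = sum(weights[i][s] for s in seeds)
        total = sum(weights[i])
        if coupling_to_seeds < beta * total:  # weakly coupled -> new seed
            seeds.append(i)
    assign = [i if i in seeds else max(seeds, key=lambda s: weights[i][s])
              for i in range(n)]
    return seeds, assign

def segment(pixels, beta=0.2, scale=1.0):
    """Toy 1-D segmentation-by-coarsening: affinities decay with
    intensity difference; one coarsening level; labels from seeds."""
    n = len(pixels)
    w = [[math.exp(-abs(pixels[i] - pixels[j]) / scale) if i != j else 0.0
          for j in range(n)] for i in range(n)]
    seeds, assign = coarsen(w, beta)
    label = {s: k for k, s in enumerate(seeds)}
    return [label[a] for a in assign]
```

On a step signal the pass leaves one seed per homogeneous region, which is the sense in which "the same minimization problem is represented with fewer and fewer variables."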
Expert System for Automatic Analysis of Facial Expressions
2000
Cited by 121 (16 self)
Abstract:
This paper discusses our expert system called Integrated System for Facial Expression Recognition (ISFER), which performs recognition and emotional classification of human facial expression from a still full-face image. The system consists of two major parts. The first one is the ISFER Workbench, which forms a framework for hybrid facial feature detection. Multiple feature detection techniques are applied in parallel. The redundant information is used to define unambiguous face geometry containing no missing or highly inaccurate data. The second part of the system is its inference engine called HERCULES, which converts low-level face geometry into high-level facial actions, and then converts these into highest-level weighted emotion labels.
Robust clustering methods: a unified view
 IEEE Transactions on Fuzzy Systems
1997
Cited by 111 (8 self)
Abstract—Clustering methods need to be robust if they are to be useful in practice. In this paper, we analyze several popular robust clustering methods and show that they have much in common. We also establish a connection between fuzzy set theory and robust statistics and point out the similarities between robust clustering methods and statistical methods such as the weighted least-squares (LS) technique, the M-estimator, the minimum volume ellipsoid (MVE) algorithm, cooperative robust estimation (CRE), minimization of probability of randomness (MINPRAN), and the epsilon contamination model. By gleaning the common principles upon which the methods proposed in the literature are based, we arrive at a unified view of robust clustering methods. We define several general concepts that are useful in robust clustering, state the robust clustering problem in terms of the defined concepts, and propose generic algorithms and guidelines for clustering noisy data. We also discuss why the generalized Hough transform is a suboptimal solution to the robust clustering problem. Index Terms—Clustering validity, fuzzy clustering, robust methods.
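The weighted-LS/M-estimator connection the abstract mentions is easy to see in one dimension: an M-estimate of location can be computed by iteratively reweighted least squares (IRLS), where the per-point weights play the same role as (fuzzy) memberships, downweighting outliers. This is a generic textbook sketch with Huber weights, not any specific clustering algorithm from the paper.

```python
def huber_weight(r, c=1.5):
    """Huber M-estimator weight: 1 inside the threshold, c/|r| beyond."""
    a = abs(r)
    return 1.0 if a <= c else c / a

def robust_mean(xs, c=1.5, iters=50):
    """IRLS for a 1-D robust location estimate: alternate between
    computing residual weights and taking the weighted mean."""
    mu = sum(xs) / len(xs)  # start from the ordinary (non-robust) mean
    for _ in range(iters):
        w = [huber_weight(x - mu, c) for x in xs]
        mu = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return mu
```

On data with a gross outlier the weighted mean stays near the inlier cluster, whereas the ordinary mean is dragged far away; that robustness-via-weights is the common principle the survey extracts from the clustering methods it compares.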
Spatial/spectral endmember extraction by multidimensional morphological operations
 IEEE Transactions on Geoscience and Remote Sensing
Cited by 105 (48 self)
Abstract—Spectral mixture analysis provides an efficient mechanism for the interpretation and classification of remotely sensed multidimensional imagery. It aims to identify a set of reference signatures (also known as endmembers) that can be used to model the reflectance spectrum at each pixel of the original image. Thus, the modeling is carried out as a linear combination of a finite number of ground components. Although spectral mixture models have proved to be appropriate for subpixel analysis of large hyperspectral datasets, few methods are available in the literature for the extraction of appropriate endmembers in spectral unmixing. Most approaches have been designed from a spectroscopic viewpoint and, thus, tend to neglect the existing spatial correlation between pixels. This paper presents a new automated method that performs unsupervised pixel purity determination and endmember extraction from multidimensional datasets; this is achieved by using both spatial and spectral information in a combined manner. The method is based on mathematical morphology, a classic image processing technique that can be applied to the spectral domain while being able to keep its spatial characteristics. The proposed methodology is evaluated through a specifically designed framework that uses both simulated and real hyperspectral data. Index Terms—Automated endmember extraction, mathematical morphology, morphological eccentricity index, multidimensional analysis, spatial/spectral integration, spectral mixture model.
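The core idea, extending morphological operations to spectra by ordering the pixels in a spatial window by spectral "eccentricity", can be caricatured in a few lines. Everything here (function names, using distance-to-window-mean as the ordering, picking endmembers at the highest-eccentricity pixels) is an invented simplification for intuition, not the published algorithm.

```python
import numpy as np

def spatial_spectral_dilation(img, win=3):
    """Toy extended dilation: at each pixel, output the spectrum of the
    neighbor farthest from the local mean spectrum (most 'eccentric'),
    i.e. the spectrally purest pixel in the spatial window."""
    H, W, B = img.shape
    r = win // 2
    out = np.empty_like(img)
    ecc = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            ys = slice(max(0, y - r), min(H, y + r + 1))
            xs = slice(max(0, x - r), min(W, x + r + 1))
            patch = img[ys, xs].reshape(-1, B)
            d = np.linalg.norm(patch - patch.mean(axis=0), axis=1)
            k = int(d.argmax())
            out[y, x] = patch[k]
            ecc[y, x] = d[k]
    return out, ecc

def pick_endmembers(img, n, win=3):
    """Return n candidate endmember spectra at the pixels with the
    highest morphological eccentricity (a sketch of the selection idea)."""
    out, ecc = spatial_spectral_dilation(img, win)
    flat = np.argsort(ecc, axis=None)[::-1]
    idx = np.dstack(np.unravel_index(flat, ecc.shape))[0]
    return [out[y, x] for y, x in idx[:n]]
```

Because the ordering combines a spatial window with a spectral distance, pure pixels surrounded by mixed ones stand out, which is the spatial/spectral integration the abstract emphasizes.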
Image Segmentation Using Active Contours: Calculus Of Variations Or Shape Gradients?
 SIAM Applied Mathematics
2002
Cited by 104 (30 self)
Abstract:
We consider the problem of segmenting an image through the minimization of an energy criterion involving region and boundary functionals. We show that one can go from one class to the other by solving Poisson's or Helmholtz's equation with well-chosen boundary conditions. Using this equivalence, we study the case of a large class of region functionals by standard methods of the calculus of variations and derive the corresponding Euler-Lagrange equations. We revisit this problem using the notion of shape derivative and show that the same equations can be elegantly derived without going through the unnatural step of converting the region integrals into boundary integrals. We also define a larger class of region functionals based on the estimation and comparison to a prototype of the probability density distribution of image features and show how the shape derivative tool allows us to easily compute the corresponding Gâteaux derivatives and Euler-Lagrange equations. We finally apply this new functional to the problem of region segmentation in sequences of color images. We briefly describe our numerical scheme and show some experimental results.
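For orientation, the shape-derivative tool the abstract invokes reduces, in the model case of a region functional whose integrand does not itself depend on the region, to a standard identity (with $N$ the outward unit normal of $\partial\Omega$ and $V$ the deformation field; the paper treats the harder case where the integrand depends on $\Omega$):

$$
J(\Omega) = \int_{\Omega} f(x)\,dx, \qquad
dJ(\Omega; V) = \int_{\partial\Omega} f(x)\,\big\langle V(x), N(x) \big\rangle \, da(x),
$$

so the steepest-descent contour evolution is $\partial_\tau \Gamma = -\, f\, N$: only the normal component of the deformation changes the energy, and no conversion of the region integral into a boundary integral is ever needed.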
Classification with Non-Metric Distances: Image Retrieval and Class Representation
2000
Cited by 103 (1 self)
Abstract:
One of the key problems in appearance-based vision is understanding how to use a set of labeled images to classify new images. Classification systems that can model human performance, or that use robust image matching methods, often make use of similarity judgments that are non-metric; but when the triangle inequality is not obeyed, most existing pattern recognition techniques are not applicable. We note that exemplar-based (or nearest-neighbor) methods can be applied naturally when using a wide class of non-metric similarity functions. The key issue, however, is to find methods for choosing good representatives of a class that accurately characterize it. We show that existing condensing techniques for finding class representatives are ill-suited to deal with non-metric data spaces. We then focus on developing techniques for solving this problem, emphasizing two points: First, we show that the distance between two images is not a good measure of how well one image can represent ...
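A concrete example of a non-metric similarity of the kind the abstract means is the robust "median of coordinate differences": it tolerates a minority of wildly differing coordinates (occlusion, outliers) but violates the triangle inequality. Nearest-exemplar classification still applies directly, as sketched below; the code is an illustration with invented names, not the paper's method.

```python
import statistics

def median_distance(a, b):
    """Robust dissimilarity: median of per-coordinate absolute
    differences. It violates the triangle inequality (non-metric)."""
    return statistics.median(abs(x - y) for x, y in zip(a, b))

def classify(query, exemplars):
    """Nearest-exemplar classification.
    exemplars: list of (vector, label) pairs."""
    return min(exemplars, key=lambda e: median_distance(query, e[0]))[1]
```

For instance, with a = (0,0,0), b = (0,0,5), c = (0,5,5): d(a,b) = 0 and d(b,c) = 0, yet d(a,c) = 5, so d(a,c) > d(a,b) + d(b,c). Techniques that assume the triangle inequality (e.g., many condensing rules) can therefore discard exactly the exemplars needed to represent a class.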
Edge Detection with Embedded Confidence
 IEEE Trans. Pattern Anal. Machine Intell
2001
Cited by 96 (1 self)
Abstract:
Computing the weighted average of the pixel values in a window is a basic module in many computer vision operators. The process is reformulated in a linear vector space and the role of the different subspaces is emphasized. Within this framework, well-known artifacts of gradient-based edge detectors, such as large spurious responses, can be explained quantitatively. It is also shown that template matching with a template derived from the input data is meaningful, since it provides an independent measure of confidence in the presence of the employed edge model. The widely used three-step edge detection procedure (gradient estimation, non-maxima suppression, hysteresis thresholding) is generalized to include the information provided by the confidence measure. The additional amount of computation is minimal, and experiments with several standard test images show the ability of the new procedure to detect weak edges. Keywords: edge detection, performance assessment, gradient estimation, window operators
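The classical three-step baseline that the paper generalizes can be sketched compactly; this is the plain procedure without the embedded confidence measure, with simplistic gradient estimation and direction quantization chosen for brevity, not the paper's formulation.

```python
import numpy as np
from collections import deque

def edges(img, lo, hi):
    """Classical three-step edge detection: gradient estimation,
    non-maxima suppression, hysteresis thresholding."""
    gy, gx = np.gradient(img.astype(float))   # step 1: gradient estimation
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)
    H, W = img.shape
    nms = np.zeros_like(mag)
    for y in range(1, H - 1):                 # step 2: non-maxima suppression
        for x in range(1, W - 1):
            a = ang[y, x] % np.pi             # quantize direction to 4 axes
            if a < np.pi / 8 or a >= 7 * np.pi / 8:   dy, dx = 0, 1
            elif a < 3 * np.pi / 8:                   dy, dx = 1, 1
            elif a < 5 * np.pi / 8:                   dy, dx = 1, 0
            else:                                     dy, dx = 1, -1
            if mag[y, x] >= mag[y + dy, x + dx] and mag[y, x] >= mag[y - dy, x - dx]:
                nms[y, x] = mag[y, x]
    strong = nms > hi                         # step 3: hysteresis thresholding
    out = strong.copy()
    q = deque(zip(*np.nonzero(strong)))
    while q:                                  # grow weak (>lo) pixels touching strong ones
        y, x = q.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < H and 0 <= nx < W and not out[ny, nx] and nms[ny, nx] > lo:
                    out[ny, nx] = True
                    q.append((ny, nx))
    return out
```

The paper's point is that each of these three steps can be augmented with a data-derived confidence measure at essentially no extra cost, so that weak but well-modeled edges survive.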
Rapid automated three-dimensional tracing of neurons from confocal image stacks
 IEEE Transactions on Information Technology in Biomedicine
2002
Cited by 84 (14 self)
Abstract—Algorithms are presented for fully automatic three-dimensional (3D) tracing of neurons that are imaged by fluorescence confocal microscopy. Unlike previous voxel-based skeletonization methods, the present approach works by recursively following the neuronal topology, using a set of R directional kernels (e.g., a QP), guided by a generalized 3D cylinder model. This method extends our prior work on exploratory tracing of retinal vasculature to 3D space. Since the centerlines are of primary interest, the 3D extension can be accomplished by four rather than six sets of kernels. Additional modifications, such as dynamic adaptation of the correlation kernels and adaptive step size estimation, were introduced for achieving robustness to photon noise, varying contrast, and apparent discontinuity and/or hollowness of structures. The end product is a labeling of all somas present, graph-theoretic representations of all dendritic/axonal structures, and image statistics such as soma volume and centroid, soma interconnectivity, the longest branch, and lengths of all graph branches originating from a soma. This method is able to work directly with unprocessed confocal images, without expensive deconvolution or other preprocessing. It is much faster than skeletonization, typically consuming less than a minute to trace a 70-MB image on a 500-MHz computer. These properties make it attractive for large-scale automated tissue studies that require rapid online image analysis, such as high-throughput neurobiology/angiogenesis assays, and initiatives such as the Human Brain Project. Index Terms—Automated morphometry, micrograph analysis, neuron tracing, three-dimensional (3D) image filtering, three-dimensional (3D) vectorization.
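The exploratory-tracing idea, repeatedly stepping in whichever candidate direction gives the strongest template response rather than skeletonizing every voxel, can be caricatured in 2D. This toy tracer (its probe template, direction count, and stopping rule are all invented for illustration) conveys only the recursive-following flavor, not the paper's 3D kernels or adaptive step sizes.

```python
import math

def trace(image, start, step=1.0, n_dirs=16, max_steps=200):
    """Toy exploratory centerline tracer: from `start`, repeatedly step in
    the direction (of n_dirs candidates) whose short probe segment has the
    highest mean intensity; stop when every response drops to zero."""
    H, W = len(image), len(image[0])

    def probe(y, x, th):
        # mean intensity along a 3-sample segment in direction th
        vals = []
        for k in (1, 2, 3):
            py = int(round(y + k * step * math.sin(th)))
            px = int(round(x + k * step * math.cos(th)))
            if not (0 <= py < H and 0 <= px < W):
                return -1.0
            vals.append(image[py][px])
        return sum(vals) / len(vals)

    y, x = start
    prev = None
    path = [(y, x)]
    for _ in range(max_steps):
        cands = [2 * math.pi * i / n_dirs for i in range(n_dirs)]
        if prev is not None:  # avoid doubling back: stay within 90 degrees
            cands = [t for t in cands if math.cos(t - prev) > 0]
        best = max(cands, key=lambda t: probe(y, x, t))
        if probe(y, x, best) <= 0:
            break  # structure ended (or left the image)
        y, x = y + step * math.sin(best), x + step * math.cos(best)
        prev = best
        path.append((round(y), round(x)))
    return path
```

Because only voxels near the structure are ever touched, this style of tracing visits a tiny fraction of the image, which is the source of the large speedup over skeletonization reported in the abstract.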