Results 11–20 of 1,692

Cortical surface-based analysis. I. Segmentation and surface reconstruction
Neuroimage, 1999
Cited by 315 (24 self)
Several properties of the cerebral cortex, including its columnar and laminar organization, as well as the topographic organization of cortical areas, can only be properly understood in the context of the intrinsic two-dimensional structure of the cortical surface. In order to study such cortical properties in humans, it is necessary to obtain an accurate and explicit representation of the cortical surface in individual subjects. Here we describe a set of automated procedures for obtaining accurate reconstructions of the cortical surface, which have been applied to data from more than 100 subjects, requiring little or no manual intervention. Automated routines for unfolding and flattening the cortical surface are described in a companion paper. These procedures allow for the routine use of cortical surface-based analysis and visualization methods in functional brain imaging.
On the Unification of Line Processes, Outlier Rejection, and Robust Statistics with Applications in Early Vision
1996
Cited by 253 (9 self)
The modeling of spatial discontinuities for problems such as surface recovery, segmentation, image reconstruction, and optical flow has been intensely studied in computer vision. While "line-process" models of discontinuities have received a great deal of attention, there has been recent interest in the use of robust statistical techniques to account for discontinuities. This paper unifies the two approaches. To achieve this we generalize the notion of a "line process" to that of an analog "outlier process" and show how a problem formulated in terms of outlier processes can be viewed in terms of robust statistics. We also characterize a class of robust statistical problems for which an equivalent outlier-process formulation exists and give a straightforward method for converting a robust estimation problem into an outlier-process formulation. We show how prior assumptions about the spatial structure of outliers can be expressed as constraints on the recovered analog outlier processes and how traditional continuation methods can be extended to the explicit outlier-process formulation. These results indicate that the outlier-process approach provides a general framework which subsumes the traditional line-process approaches as well as a wide class of robust estimation problems. Examples in surface reconstruction, image segmentation, and optical flow are presented to illustrate the use of outlier processes and to show how the relationship between outlier processes and robust statistics can be exploited. An appendix provides a catalog of common robust error norms and their equivalent outlier-process formulations.
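As a minimal illustration of the outlier-process view (hypothetical code, not from the paper): in iteratively reweighted least squares, the per-residual weight behaves like an analog outlier process, staying near 1 for inliers and dropping toward 0 for gross outliers. The Geman-McClure-style weight and the parameter names below are our illustrative choices.

```python
import numpy as np

def robust_line_fit(x, y, sigma=2.0, n_iter=30):
    """Fit y ~ a*x + b by iteratively reweighted least squares.
    The weight w_i plays the role of the analog outlier process:
    w ~ 1 for inliers, w -> 0 for gross outliers."""
    A = np.stack([x, np.ones_like(x)], axis=1)
    w = np.ones_like(x)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        (a, b), *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        r = y - (a * x + b)
        # Geman-McClure-style redescending weight (illustrative choice)
        w = 1.0 / (1.0 + (r / sigma) ** 2) ** 2
    return a, b, w

# An exact line y = 2x + 1 contaminated by two gross outliers
x = np.arange(10.0)
y = 2.0 * x + 1.0
y[3] += 50.0
y[7] -= 40.0
a, b, w = robust_line_fit(x, y)
```

Ordinary least squares would be pulled far off by the two contaminated points; here their recovered weights collapse to essentially zero and the fit lands on the line through the inliers.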
Gradient flows and geometric active contour models
In Proc. of the 5th International Conference on Computer Vision, 1995
Cited by 234 (18 self)
In this paper, we analyze the geometric active contour models discussed in [6, 18] from a curve evolution point of view and propose some modifications based on gradient flows relative to certain new feature-based Riemannian metrics. This leads to a novel snake paradigm in which the feature of interest may be considered to lie at the bottom of a potential well. Thus the snake is attracted very naturally and efficiently to the desired feature. Moreover, we consider some 3D active surface models based on these ideas.
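For orientation, the kind of gradient flow this line of work builds on can be sketched as follows (a generic form from the geodesic/geometric active contour literature, not necessarily the paper's exact notation): gradient descent on a feature-weighted curve length gives

```latex
% weighted length and its gradient-descent flow (sketch)
L_g(C) = \oint g\big(|\nabla I(C(s))|\big)\, ds,
\qquad
\frac{\partial C}{\partial t}
  = \big( g\,\kappa - \langle \nabla g, \mathcal{N} \rangle \big)\,\mathcal{N},
```

where κ is the curvature, 𝒩 the unit normal, and g a decreasing edge-stopping function; the −⟨∇g, 𝒩⟩ term is what pulls the snake into the "potential well" around the desired feature.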
A Riemannian Framework for Tensor Computing
International Journal of Computer Vision, 2006
Cited by 228 (22 self)
Positive definite symmetric matrices (so-called tensors in this article) are nowadays a common source of geometric information. In this paper, we propose to provide the tensor space with an affine-invariant Riemannian metric. We demonstrate that it leads to strong theoretical properties: the cone of positive definite symmetric matrices is replaced by a regular manifold of constant curvature without boundaries (null eigenvalues are at infinity), the geodesic between two tensors and the mean of a set of tensors are uniquely defined, etc.
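The affine-invariant geodesic distance this framework uses admits a closed form, d(A, B) = ‖log(A^{-1/2} B A^{-1/2})‖_F. A small NumPy sketch (variable and function names are ours, not the paper's):

```python
import numpy as np

def spd_sqrt_inv(A):
    """Inverse square root of a symmetric positive definite matrix."""
    vals, vecs = np.linalg.eigh(A)
    return (vecs / np.sqrt(vals)) @ vecs.T  # V diag(1/sqrt(vals)) V^T

def affine_invariant_distance(A, B):
    """Geodesic distance d(A,B) = ||log(A^{-1/2} B A^{-1/2})||_F
    under the affine-invariant metric on SPD matrices."""
    M = spd_sqrt_inv(A) @ B @ spd_sqrt_inv(A)
    lam = np.linalg.eigvalsh(M)  # M is SPD, so all eigenvalues > 0
    return np.sqrt(np.sum(np.log(lam) ** 2))
```

The affine invariance in the name is literal: d(W A Wᵀ, W B Wᵀ) = d(A, B) for any invertible W, which is what makes the metric a natural fit for diffusion tensors and covariance matrices.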
A geometrical framework for low level vision
IEEE Trans. on Image Processing, 1998
Cited by 211 (35 self)
We introduce a new geometrical framework based on which natural flows for image scale space and enhancement are presented. We consider intensity images as surfaces in space: a gray-level image is a two-dimensional (2-D) surface in three-dimensional (3-D) space, and a color image is a 2-D surface in five dimensions. The new formulation unifies many classical schemes and algorithms via a simple scaling of the intensity contrast, and results in new and efficient schemes. Extensions to multidimensional signals become natural and lead to powerful denoising and scale-space algorithms. Index Terms—Color image processing, image enhancement, image smoothing, nonlinear image diffusion, scale-space.
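The geometric picture can be made concrete for a gray-level image (a sketch in the style of this framework; β below is our name for the intensity-contrast scaling the abstract mentions): embed the image as a surface X(x, y) = (x, y, βI(x, y)), whose induced metric and Laplace-Beltrami flow are

```latex
(g_{\mu\nu}) =
\begin{pmatrix}
1 + \beta^2 I_x^2 & \beta^2 I_x I_y \\
\beta^2 I_x I_y   & 1 + \beta^2 I_y^2
\end{pmatrix},
\qquad
I_t = \Delta_g I
    = \frac{1}{\sqrt{g}}\,\partial_\mu \big( \sqrt{g}\, g^{\mu\nu} \partial_\nu I \big),
```

with g = det(g_{μν}). As β → 0 the metric tends to the identity and the flow reduces to linear (heat-equation) diffusion, which is one way the scaling of the intensity contrast unifies classical schemes.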
Simultaneous Structure and Texture Image Inpainting
2003
Cited by 200 (12 self)
An algorithm for the simultaneous filling-in of texture and structure in regions of missing image information is presented in this paper. The basic idea is to first decompose the image into the sum of two functions with different basic characteristics, and then reconstruct each one of these functions separately with structure and texture filling-in algorithms. The first function used in the decomposition is of bounded variation, representing the underlying image structure, while the second function captures the texture and possible noise. The region of missing information in the bounded variation image is reconstructed using image inpainting algorithms, while the same region in the texture image is filled-in with texture synthesis techniques. The original image is then reconstructed adding back these two sub-images. The novel contribution of this paper is then in the combination of these three previously developed components, image decomposition with inpainting and texture synthesis, which permits the simultaneous use of filling-in algorithms that are suited for different image characteristics. Examples on real images show the advantages of this proposed approach.
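The pipeline described above can be summarized in two lines (our notation, not the paper's): decompose, fill in each part with the method suited to it, and recombine:

```latex
f = u + v, \quad u \in BV \ \text{(structure)}, \quad v \ \text{(texture + noise)};
\qquad
\hat{f}\big|_{\Omega} = \underbrace{\hat{u}\big|_{\Omega}}_{\text{image inpainting}}
                      + \underbrace{\hat{v}\big|_{\Omega}}_{\text{texture synthesis}},
```

where Ω denotes the region of missing information.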
An Image-Based Approach to Three-Dimensional Computer Graphics
1997
Cited by 195 (4 self)
The conventional approach to three-dimensional computer graphics produces images from geometric scene descriptions by simulating the interaction of light with matter. My research explores an alternative approach that replaces the geometric scene description with perspective images and replaces the simulation process with data interpolation. I derive an image-warping equation that maps the visible points in a reference image to their correct positions in any desired view. This mapping from reference image to desired image is determined by the center-of-projection and pinhole-camera model of the two images and by a generalized disparity value associated with each point in the reference image. This generalized disparity value, which represents the structure of the scene, can be determined from point correspondences between multiple reference images. The image-warping equation alone is insufficient to synthesize desired images because multiple reference-image points may map to a single point. I derive a new visibility algorithm that determines a drawing order for the image warp. This algorithm results in correct visibility for the desired image independent of the reference image's contents. The utility of the image-based approach can be enhanced with a more general pinhole-camera
Brook for GPUs: Stream Computing on Graphics Hardware
ACM Transactions on Graphics, 2004
Cited by 194 (9 self)
In this paper, we present Brook for GPUs, a system for general-purpose computation on programmable graphics hardware. Brook extends C to include simple data-parallel constructs, enabling the use of the GPU as a streaming coprocessor. We present a compiler and runtime system that abstracts and virtualizes many aspects of graphics hardware. In addition, we present an analysis of the effectiveness of the GPU as a compute engine compared to the CPU, to determine when the GPU can outperform the CPU for a particular algorithm. We evaluate our system with five applications: the SAXPY and SGEMV BLAS operators, image segmentation, FFT, and ray tracing. For these applications, we demonstrate that our Brook implementations perform comparably to handwritten GPU code and up to seven times faster than their CPU counterparts.
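Brook's kernels are side-effect-free functions mapped element-wise over streams. In plain NumPy, the data-parallel pattern for the two BLAS operators named in the abstract looks like this (an illustration of the programming model, not Brook code):

```python
import numpy as np

def saxpy(a, x, y):
    # element-wise stream kernel: out[i] = a * x[i] + y[i]
    return a * x + y

def sgemv(alpha, A, x, beta, y):
    # y <- alpha * (A @ x) + beta * y
    return alpha * (A @ x) + beta * y

out = saxpy(2.0, np.array([1.0, 2.0, 3.0]), np.array([1.0, 1.0, 1.0]))
```

The point of the kernel-over-streams restriction is that each output element depends only on the corresponding input elements, which is what lets the runtime map the computation onto the GPU's parallel fragment processors.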
Nonlinear Anisotropic Filtering Of MRI Data
1992
Cited by 183 (15 self)
Despite significant improvements in image quality over the past several years, the full exploitation of magnetic resonance image (MRI) data is often limited by low signal-to-noise ratio (SNR) or contrast-to-noise ratio (CNR). In implementing new MR techniques, the criteria of acquisition speed and image quality are usually paramount. To decrease noise during the acquisition, either time averaging over repeated measurements or enlarging the voxel volume may be employed. However, these methods either substantially increase the overall acquisition time or scan a spatial volume in only coarse intervals. In contrast to acquisition-based noise-reduction methods, we propose a post-process based on anisotropic diffusion. Extensions of this new technique support 3D and multi-echo MRI, incorporating higher spatial and spectral dimensions. The procedure overcomes the major drawbacks of conventional filter methods, namely the blurring of object boundaries and the suppression of fine structural details. T...
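A minimal sketch of the classic Perona-Malik diffusion scheme this work extends (parameter names and values are illustrative, not the paper's):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=10.0, dt=0.2):
    """Perona-Malik-style diffusion: the conductance
    g(d) = exp(-(d/kappa)^2) lets smoothing act inside regions
    while shutting it off across strong edges."""
    u = np.asarray(img, dtype=float).copy()
    for _ in range(n_iter):
        p = np.pad(u, 1, mode="edge")   # replicate the border
        diffs = (p[:-2, 1:-1] - u,      # north neighbour
                 p[2:, 1:-1] - u,       # south
                 p[1:-1, :-2] - u,      # west
                 p[1:-1, 2:] - u)       # east
        u = u + dt * sum(np.exp(-(d / kappa) ** 2) * d for d in diffs)
    return u
```

On a noisy step image, the small fluctuations inside each region are smoothed away while the large jump across the step is preserved, which is exactly the boundary-preserving behaviour the abstract contrasts with conventional filters.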
Mean Shift Analysis and Applications
1999
Cited by 181 (8 self)
A nonparametric estimator of density gradient, the mean shift, is employed in the joint spatial-range (value) domain of gray-level and color images for discontinuity-preserving filtering and image segmentation. Properties of the mean shift are reviewed and its convergence on lattices is proven. The proposed filtering method associates with each pixel in the image the closest local mode in the density distribution of the joint domain. Segmentation into a piecewise constant structure requires only one more step, fusion of the regions associated with nearby modes. The proposed technique has two parameters controlling the resolution in the spatial and range domains. Since convergence is guaranteed, the technique does not require the intervention of the user to stop the filtering at the desired image quality. Several examples, for gray and color images, show the versatility of the method and compare favorably with results described in the literature for the same images.
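A toy 1-D version of joint spatial-range mean-shift filtering with a flat kernel (a sketch of the idea, not the paper's implementation; `hs` and `hr` correspond to the two resolution parameters mentioned above):

```python
import numpy as np

def mean_shift_filter(signal, hs=3.0, hr=10.0, n_iter=20):
    """1-D joint spatial-range mean-shift filtering: each sample
    climbs to its local mode in (position, value) space and takes
    that mode's value, preserving discontinuities."""
    sig = np.asarray(signal, dtype=float)
    pos = np.arange(sig.size, dtype=float)
    out = np.empty_like(sig)
    for i in range(sig.size):
        x, v = pos[i], sig[i]
        for _ in range(n_iter):
            # flat kernel: samples within hs spatially AND hr in value
            near = (np.abs(pos - x) < hs) & (np.abs(sig - v) < hr)
            x_new, v_new = pos[near].mean(), sig[near].mean()
            if abs(x_new - x) < 1e-9 and abs(v_new - v) < 1e-9:
                break  # converged to a mode
            x, v = x_new, v_new
        out[i] = v  # the sample takes its mode's range value
    return out
```

Because the range window excludes samples on the far side of a large jump, the two sides of a step converge to separate modes: the step is kept sharp while the small fluctuations on either side are flattened.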