Results 1–10 of 124
A Survey of Image Registration Techniques
 ACM Computing Surveys
, 1992
Abstract

Cited by 887 (2 self)
Registration is a fundamental task in image processing used to match two or more pictures taken, for example, at different times, from different sensors, or from different viewpoints. Over the years, a broad range of techniques has been developed for the various types of data and problems. These techniques have been studied independently for several different applications, resulting in a large body of research. This paper organizes this material by establishing the relationship between the distortions in the image and the types of registration techniques which are most suitable. Two major types of distortions are distinguished. The first type comprises those which are the source of misregistration, i.e., the cause of the misalignment between the two images. Distortions which are the source of misregistration determine the transformation class which will optimally align the two images. The transformation class in turn influences the general technique that should be applied....
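The simplest transformation class such a survey treats is a pure translation. As an illustration (not taken from the paper), a minimal phase-correlation sketch, assuming the misalignment is a cyclic integer shift:

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer (row, col) shift t such that b == np.roll(a, t).

    Phase correlation, a classic frequency-domain registration technique:
    the normalized cross-power spectrum of two translated images is a pure
    phase ramp whose inverse FFT peaks at the translation.
    """
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    R = np.conj(A) * B
    R /= np.maximum(np.abs(R), 1e-12)          # keep only the phase
    corr = np.fft.ifft2(R).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peak indices in [0, N) to signed shifts in (-N/2, N/2]
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, a.shape))
```

Real registration problems add rotation, scale, and local deformation, which is exactly why the survey's taxonomy of transformation classes matters.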
Boundary Finding with Parametrically Deformable Models
, 1992
Abstract

Cited by 303 (6 self)
Introduction. This work describes an approach to finding objects in images based on deformable shape models. Boundary finding in two- and three-dimensional images is enhanced both by considering the bounding contour or surface as a whole and by using model-based shape information. Boundary finding using only local information has often been frustrated by poor-contrast boundary regions due to occluding and occluded objects, adverse viewing conditions, and noise. Imperfect image data can be augmented with the extrinsic information that a geometric shape model provides. In order to exploit model-based information to the fullest extent, it should be incorporated explicitly, specifically, and early in the analysis. In addition, the bounding curve or surface can be profitably considered as a whole, rather than as curve or surface segments, because this tends to result in a more consistent solution overall. These models are best suited for objects whose diversity and irregularity of shape make ...
Feature Extraction Methods for Character Recognition – A Survey
, 1995
Abstract

Cited by 239 (3 self)
This paper presents an overview of feature extraction methods for off-line recognition of segmented (isolated) characters. Selection of a feature extraction method is probably the single most important factor in achieving high recognition performance in character recognition systems. Different feature extraction methods are designed for different representations of the characters, such as solid binary characters, character contours, skeletons (thinned characters), or gray-level subimages of each individual character. The feature extraction methods are discussed in terms of invariance properties, reconstructability, and expected distortions and variability of the characters. The problem of choosing the appropriate feature extraction method for a given application is also discussed. When a few promising feature extraction methods have been identified, they need to be evaluated experimentally to find the best method for the given application. Keywords: feature extraction, optical character recogniti...
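One of the simplest feature families such a survey covers is zoning: divide the character image into a grid and use per-zone ink density as the feature vector. A minimal sketch (the function name and default grid size are assumptions, not from the paper):

```python
import numpy as np

def zoning_features(char_img, grid=(4, 4)):
    """Per-zone ink density of a binary character image (classic 'zoning').

    Splits the image into grid[0] x grid[1] zones and returns the fraction
    of foreground (nonzero) pixels in each zone, flattened row-major.
    """
    img = np.asarray(char_img, dtype=float)
    gr, gc = grid
    # pad on the bottom/right so the image divides evenly into zones
    pr = (-img.shape[0]) % gr
    pc = (-img.shape[1]) % gc
    img = np.pad(img, ((0, pr), (0, pc)))
    h, w = img.shape
    zones = img.reshape(gr, h // gr, gc, w // gc)
    return zones.mean(axis=(1, 3)).ravel()
```

Zoning is invariant to small positional noise within a zone but not to rotation or slant, which is the kind of trade-off the survey's invariance discussion addresses.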
Goal-Directed Evaluation of Binarization Methods
, 1995
Abstract

Cited by 174 (9 self)
This paper presents a methodology for evaluation of low-level image analysis methods, using binarization (two-level thresholding) as an example. Binarization of scanned gray-scale images is the first step in most document image analysis systems. Selection of an appropriate binarization method for an input image domain is a difficult problem. Typically, a human expert evaluates the binarized images according to his/her visual criteria. However, to conduct an objective evaluation, one needs to investigate how well the subsequent image analysis steps will perform on the binarized image. We call this approach goal-directed evaluation, and it can be used to evaluate other low-level image processing methods as well. Our evaluation of binarization methods is in the context of digit recognition, so we define the performance of the character recognition module as the objective measure. Eleven different locally adaptive binarization methods were evaluated, and Niblack's method gave the best perf...
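Niblack's method, the best performer in this evaluation, thresholds each pixel at T = m + k·s, where m and s are the mean and standard deviation of the gray values in a local window. A minimal sketch (the window size and k = -0.2 are conventional choices, not values from the paper):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def niblack_threshold(gray, window=15, k=-0.2):
    """Niblack's locally adaptive binarization: threshold T = m + k*s.

    m and s are computed over a window x window neighbourhood of each pixel.
    Returns a boolean array: True where the pixel is foreground (dark ink
    on a light background).
    """
    g = np.asarray(gray, dtype=float)
    mean = uniform_filter(g, size=window)
    sq_mean = uniform_filter(g * g, size=window)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    return g < mean + k * std
```

The window size must be comparable to the stroke width; in empty regions std is near zero and the rule degenerates to a local-mean test, which is one reason later variants modify Niblack's formula.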
Application of affine-invariant Fourier descriptors to recognition of 3-D objects
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1990
Abstract

Cited by 92 (2 self)
In this work, the method of Fourier descriptors has been extended to produce a set of normalized coefficients which are invariant under any affine transformation (translation, rotation, scaling, and shearing). The method is based on a parameterized boundary description which is transformed to the Fourier domain and normalized there to eliminate dependencies on the affine transformation and on the starting point. Invariance to affine transforms allows considerable robustness when applied to images of objects which rotate in all three dimensions. This is demonstrated by processing silhouettes of aircraft as the aircraft maneuver in three-space. Index Terms: Affine transformation, features, Fourier descriptors, invariants, shape, 3-D parameter estimation, 2-D parameter determination.
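The paper's full affine normalization is fairly involved; as a simpler illustration of the same idea, a sketch of similarity-invariant Fourier descriptors (invariant to translation, scale, rotation, and starting point, but not to shear):

```python
import numpy as np

def similarity_invariant_fd(contour, n_harmonics=10):
    """Fourier descriptors of a closed 2-D contour, normalized for similarity
    transforms: dropping c_0 removes translation, dividing by |c_1| removes
    scale, and taking magnitudes removes rotation and starting-point phase.

    contour: (N, 2) array of boundary points in traversal order.
    """
    z = contour[:, 0] + 1j * contour[:, 1]   # boundary as a complex signal
    c = np.fft.fft(z) / len(z)
    pos = np.abs(c[1:n_harmonics + 1])        # harmonics c_1 .. c_n
    neg = np.abs(c[-1:-n_harmonics - 1:-1])   # harmonics c_{-1} .. c_{-n}
    return np.concatenate([pos, neg]) / max(np.abs(c[1]), 1e-12)
```

Taking magnitudes discards relative phase information; the paper's normalization is more careful precisely so that shear (the remaining affine degree of freedom) is also factored out.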
Segmentation of 2D and 3D objects from MRI volume data using constrained elastic deformations of flexible Fourier contour and surface models
, 1996
Classical Floorplanning Harmful?
Abstract

Cited by 41 (2 self)
Classical floorplanning formulations may lead researchers to solve the wrong problems. This paper points out several examples, including (i) the preoccupation with packing-driven, as opposed to connectivity-driven, problem formulations and benchmarking standards; (ii) the preoccupation with rectangular (and L- or T-shaped) block shapes; and (iii) the lack of attention to algorithm scalability, fixed-die layout requirements, and the overall RTL-down methodology context. The right problem formulations must match the purpose and context of prevailing RTL-down design methodologies, and must be neither overconstrained nor underconstrained. The right solution ingredients are those which are scalable while delivering good solution quality according to relevant metrics. We also describe new problem formulations and solution ingredients, notably a perfect rectilinear floorplanning formulation that seeks zero-whitespace, perfectly packed rectilinear floorplans in a fixed-die regime. The paper closes with a list of questions for future research.
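As a tiny illustration of the fixed-die, zero-whitespace objective, a hypothetical helper that scores a placement by its uncovered die area (the function name and the (x, y, w, h) block encoding are assumptions for illustration):

```python
def whitespace_fraction(die, blocks):
    """Fraction of a fixed-die floorplan left uncovered by its blocks.

    die: (W, H) die dimensions; blocks: list of (x, y, w, h) placements,
    assumed non-overlapping and inside the die.  The zero-whitespace
    formulation asks for this fraction to be exactly zero.
    """
    used = sum(w * h for (_x, _y, w, h) in blocks)
    return 1.0 - used / (die[0] * die[1])
```

In a fixed-die regime the die dimensions are given, so whitespace cannot be reduced by shrinking the outline; it must be eliminated by the packing itself.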
Shape modeling and analysis with entropy-based particle systems
 In Proceedings of the 20th International Conference on Information Processing in Medical Imaging
, 2007
Abstract

Cited by 25 (13 self)
Many important fields of basic research in medicine and biology routinely employ tools for the statistical analysis of collections of similar shapes. Biologists, for example, have long relied on homologous, anatomical landmarks as shape models to characterize the growth and development of species. Increasingly, however, researchers are exploring the use of more detailed models that are derived computationally from three-dimensional images and surface descriptions. While computationally derived models of shape are promising new tools for biomedical research, they also present some significant engineering challenges, which existing modeling methods have only begun to address. In this dissertation, I propose a new computational framework for statistical shape modeling that significantly advances the state-of-the-art by overcoming many of the limitations of existing methods. The framework uses a particle-system representation of shape, with a fast correspondence-point optimization based on information content. The optimization balances the simplicity of the model (compactness) with the accuracy of the shape representations by using two commensurate entropy ...
Segmentation of 3D objects from MRI volume data using constrained elastic deformations of flexible Fourier surface models
, 1995
Abstract

Cited by 20 (0 self)
This paper describes a new model-based segmentation technique combining desirable properties of physical models (snakes, [2]), shape representation by Fourier parametrization (Fourier snakes, [12]), and modelling of natural shape variability (eigenmodes, [7, 10]). Flexible shape models are represented by a parameter vector describing the mean contour and by a set of eigenmodes of the parameters characterizing the shape variation with respect to a small set of stable landmarks (AC-PC in our application) and explaining the remaining variability among a series of images with the model flexibility. Although straightforward, the extension to 3D is severely impeded by finding a proper surface parametrization for arbitrary objects with spherical topology. We apply a newly developed surface parametrization [16, 17] which achieves a uniform mapping between object surface and parameter space. The 3D model building and Fourier-snake procedure are demonstrated by segmenting deep structures of the human brain from MR volume data.
Bimodal System for Interactive Indexing and Retrieval of Pathology Images
 In Proceedings of the 4th IEEE Workshop on Applications of Computer Vision (WACV'98)
, 1998
Abstract

Cited by 20 (7 self)
The prototype of a system to assist physicians in the differential diagnosis of lymphoproliferative disorders of blood cells from digitized specimens is presented. The user selects the region of interest (ROI) in the image, which is then analyzed with a fast, robust color segmenter. Queries in a database of validated cases can be formulated in terms of shape (similarity-invariant Fourier descriptors), texture (multiresolution simultaneous autoregressive model), color (L*u*v* space), and area, derived from the delineated ROI. The uncertainty of the segmentation process (obtained through a numerical method) determines the accuracy of the shape description (number of Fourier harmonics). Tenfold cross-validated classification over a database of 261 color 640×480 images was implemented to assess the system performance. The ground truth was obtained through immunophenotyping by flow cytometry. To provide a natural man-machine interface, most input commands are bimodal: either using t...