Results 1–10 of 155
Relations between the statistics of natural images and the response properties of cortical cells
 J. Opt. Soc. Am. A
, 1987
Abstract

Cited by 831 (18 self)
The relative efficiency of any particular image-coding scheme should be defined only in relation to the class of images that the code is likely to encounter. To understand the representation of images by the mammalian visual system, it might therefore be useful to consider the statistics of images from the natural environment (i.e., images with trees, rocks, bushes, etc.). In this study, various coding schemes are compared in relation to how they represent the information in such natural images. The coefficients of such codes are represented by arrays of mechanisms that respond to local regions of space, spatial frequency, and orientation (Gabor-like transforms). For many classes of image, such codes will not be an efficient means of representing information. However, the results obtained with six natural images suggest that the orientation and the spatial-frequency tuning of mammalian simple cells are well suited for coding the information in such images if the goal of the code is to convert higher-order redundancy (e.g., correlation between the intensities of neighboring pixels) into first-order redundancy (i.e., the response distribution of the coefficients). Such coding produces a relatively high signal-to-noise ratio and permits information to be transmitted with only a subset of the total number of cells. These results support Barlow's theory that the goal of natural vision is to represent the information in the natural environment with minimal redundancy.
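The comparison described above can be illustrated with a minimal sketch (not the paper's actual procedure): filter an image with a Gabor-like kernel and measure the excess kurtosis of the coefficient distribution, a crude proxy for the first-order redundancy the abstract refers to. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma):
    """Even-symmetric Gabor kernel: a Gaussian-windowed cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def coefficient_kurtosis(image, kernel):
    """Excess kurtosis of the filter-response distribution; heavy-tailed
    (sparse) response histograms give large positive values."""
    resp = np.real(np.fft.ifft2(np.fft.fft2(image) *
                                np.fft.fft2(kernel, s=image.shape)))
    r = resp - resp.mean()
    return (r**4).mean() / (r**2).mean() ** 2 - 3.0
```

On natural images the response histograms of such filters are markedly heavy-tailed; on Gaussian noise they are not, which is the sense in which such a code concentrates redundancy into the first-order statistics.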
High confidence visual recognition of persons by a test of statistical independence
 IEEE TRANS. ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
, 1993
Abstract

Cited by 621 (8 self)
A method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence. The most unique phenotypic feature visible in a person’s face is the detailed texture of each eye’s iris: an estimate of its statistical complexity in a sample of the human population reveals variation corresponding to several hundred independent degrees of freedom. Morphogenetic randomness in the texture expressed phenotypically in the iris trabecular meshwork ensures that a test of statistical independence on two coded patterns originating from different eyes is passed almost certainly, whereas the same test is failed almost certainly when the compared codes originate from the same eye. The visible texture of a person’s iris in a real-time video image is encoded into a compact sequence of multi-scale quadrature 2-D Gabor wavelet coefficients, whose most-significant bits comprise a 256-byte “iris code.” Statistical decision theory generates identification decisions from Exclusive-OR comparisons of complete iris codes at the rate of 4,000 per second, including calculation of decision confidence levels. The distributions observed empirically in such comparisons imply a theoretical “crossover” error rate of one in 131,000 when a decision criterion is adopted that would equalize the false accept and false reject error rates. In the typical recognition case, given the mean observed degree of iris code agreement, the decision confidence levels correspond formally to a conditional false accept probability of one in about 10^31.
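The Exclusive-OR comparison at the heart of the method can be sketched in a few lines: the normalized Hamming distance between two 256-byte codes is the fraction of disagreeing bits, and the independence test asks whether that fraction falls improbably far below 0.5. This is a schematic illustration, not Daugman's implementation; masking of occluded bits and rotation tolerance are omitted.

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Normalized Hamming distance between two iris codes, given as
    uint8 arrays of 256 bytes (2048 bits): fraction of disagreeing bits."""
    diff = np.bitwise_xor(code_a, code_b)
    return np.unpackbits(diff).sum() / (8 * code_a.size)
```

Codes from different eyes behave like independent coin flips bit by bit, so their distance clusters tightly around 0.5; codes from the same eye fall far below it, which is what makes the decision criterion so sharp.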
Pyramid-Based Texture Analysis/Synthesis
, 1995
Abstract

Cited by 480 (0 self)
This paper describes a method for synthesizing images that match the texture appearance of a given digitized sample. This synthesis is completely automatic and requires only the "target" texture as input. It allows generation of as much texture as desired so that any object can be covered. It can be used to produce solid textures for creating textured 3D objects without the distortions inherent in texture mapping. It can also be used to synthesize texture mixtures, images that look a bit like each of several digitized samples. The approach is based on a model of human texture perception, and has potential to be a practically useful tool for graphics applications.

1 Introduction

Computer renderings of objects with surface texture are more interesting and realistic than those without texture. Texture mapping [15] is a technique for adding the appearance of surface detail by wrapping or projecting a digitized texture image onto a surface. Digitized textures can be obtained from a variety ...
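The workhorse operation in this kind of pyramid-based synthesis is histogram matching, applied subband by subband: forcing the value distribution of one array to match another's. A rank-based sketch (one plausible implementation, assumed rather than taken from the paper):

```python
import numpy as np

def match_histogram(source, target):
    """Remap source values so their empirical distribution matches target's,
    preserving the rank order of the source pixels."""
    s_idx = np.argsort(source, axis=None)      # positions of source, low to high
    out = np.empty_like(source).ravel()
    out[s_idx] = np.sort(target, axis=None)    # assign sorted target values
    return out.reshape(source.shape)
```

In the synthesis loop this remapping is applied alternately to the pixel domain and to each pyramid subband of a noise image until its statistics settle toward those of the sample texture.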
A Parametric Texture Model based on Joint Statistics of Complex Wavelet Coefficients
 INTERNATIONAL JOURNAL OF COMPUTER VISION
, 2000
Abstract

Cited by 424 (13 self)
We present a universal statistical model for texture images in the context of an overcomplete complex wavelet transform. The model is parameterized by a set of statistics computed on pairs of coefficients corresponding to basis functions at adjacent spatial locations, orientations, and scales. We develop an efficient algorithm for synthesizing random images subject to these constraints, by iteratively projecting onto the set of images satisfying each constraint, and we use this to test the perceptual validity of the model. In particular, we demonstrate the necessity of subgroups of the parameter set by showing examples of texture synthesis that fail when those parameters are removed from the set. We also demonstrate the power of our model by successfully synthesizing examples drawn from a diverse collection of artificial and natural textures.
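The iterative-projection synthesis loop can be illustrated with a toy version that uses only two constraint sets (a pixel range and a mean/variance target) in place of the paper's full set of joint wavelet statistics; all names and constants here are illustrative:

```python
import numpy as np

def project_range(x, lo, hi):
    """Project onto the set of images with pixel values in [lo, hi]."""
    return np.clip(x, lo, hi)

def project_mean_var(x, mu, var):
    """Project (in L2) onto the set of images with a given mean and variance."""
    z = x - x.mean()
    z = z * np.sqrt(var / max(z.var(), 1e-12))
    return z + mu

def synthesize(x, mu, var, lo, hi, iters=50):
    """Alternate projections onto each constraint set in turn; with
    compatible constraints the iterates settle into the intersection."""
    for _ in range(iters):
        x = project_mean_var(project_range(x, lo, hi), mu, var)
    return x
```

The full model plays the same game with many more constraint sets (marginal statistics, pairwise coefficient correlations across position, orientation, and scale), which is why removing a subgroup of parameters visibly degrades specific textures.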
Texture analysis and classification with tree-structured wavelet transform
 IEEE TRANS. IMAGE PROCESSING
, 1993
Abstract

Cited by 319 (1 self)
One difficulty of texture analysis in the past was the lack of adequate tools to characterize different scales of textures effectively. Recent developments in multiresolution analysis such as the Gabor and wavelet transforms help to overcome this difficulty. In this research, we propose a multiresolution approach based on a modified wavelet transform called the tree-structured wavelet transform, or wavelet packets, for texture analysis and classification. The development of this new transform is motivated by the observation that a large class of natural textures can be modeled as quasi-periodic signals whose dominant frequencies are located in the middle frequency channels. With the transform, we are able to zoom into any desired frequency channels for further decomposition. In contrast, the conventional pyramid-structured wavelet transform performs further decomposition only in low frequency channels. We develop a progressive texture classification algorithm which is not only computationally attractive but also has excellent performance. The performance of our new method is compared with that of several other methods using the DCT, DST, DHT, pyramid-structured wavelet transforms, Gabor filters, and Laws filters.
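The idea of decomposing further wherever the energy lives can be sketched with a plain Haar split (the paper evaluates several wavelets; Haar is simply the shortest to write): at each level, form the four subbands and recurse into the one with the largest energy, rather than always into the low-frequency band.

```python
import numpy as np

def haar_split(x):
    """One level of a 2-D Haar transform: approximation + three detail bands."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]; c = x[1::2, 0::2]; d = x[1::2, 1::2]
    return {"LL": (a + b + c + d) / 4, "LH": (a + b - c - d) / 4,
            "HL": (a - b + c - d) / 4, "HH": (a - b - c + d) / 4}

def tree_decompose(x, depth):
    """Tree-structured (wavelet-packet-like) decomposition: recurse into
    the maximum-energy subband at each level, recording the path taken."""
    path = []
    for _ in range(depth):
        bands = haar_split(x)
        name = max(bands, key=lambda k: np.sum(bands[k] ** 2))
        path.append(name)
        x = bands[name]
    return path, x
```

For a quasi-periodic texture the recorded path lands in mid- or high-frequency channels, which is exactly the information a pyramid-structured transform never exposes.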
Optimal Unsupervised Learning in a Single-Layer Linear Feedforward Neural Network
, 1989
Abstract

Cited by 293 (2 self)
A new approach to unsupervised learning in a single-layer linear feedforward neural network is discussed. An optimality principle is proposed which is based upon preserving maximal information in the output units. An algorithm for unsupervised learning based upon a Hebbian learning rule, which achieves the desired optimality, is presented. The algorithm finds the eigenvectors of the input correlation matrix, and it is proven to converge with probability one. An implementation which can train neural networks using only local "synaptic" modification rules is described. It is shown that the algorithm is closely related to algorithms in statistics (Factor Analysis and Principal Components Analysis) and neural networks (Self-supervised Backpropagation, or the "encoder" problem). It thus provides an explanation of certain neural network behavior in terms of classical statistical techniques. Examples of the use of a linear network for solving image coding and texture segmentation problems are presented. Also, it is shown that the algorithm can be used to find "visual receptive fields" which are qualitatively similar to those found in primate retina and visual cortex.
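The learning rule in question is commonly written as ΔW = η(y xᵀ − LT(y yᵀ)W), where y = Wx and LT(·) keeps the lower triangle; the lower-triangular term is what makes successive output units converge to successive eigenvectors. A minimal sketch, with illustrative learning-rate and initialization choices:

```python
import numpy as np

def gha_step(W, x, lr):
    """One update of the generalized Hebbian rule:
    dW = lr * (y x^T - LT(y y^T) W)."""
    y = W @ x
    return W + lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

def train_gha(X, n_components, lr=0.005, epochs=30, seed=0):
    """Stream samples through the Hebbian update; rows of W converge to
    the leading eigenvectors of the input correlation matrix."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_components, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            W = gha_step(W, x, lr)
    return W
```

Note that each row's update uses only that unit's output and the outputs of earlier units, which is why the rule counts as a local "synaptic" modification.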
Texture classification by wavelet packet signatures
 IEEE Trans. PAMI
, 1993
Abstract

Cited by 210 (3 self)
This paper introduces a new approach to characterize textures at multiple scales. The performance of wavelet packet spaces is measured in terms of sensitivity and selectivity for the classification of twenty-five natural textures. Both energy and entropy metrics were computed for each wavelet packet and incorporated into distinct scale-space representations, where each wavelet packet (channel) reflected a specific scale and orientation sensitivity. Wavelet packet representations for twenty-five natural textures were classified without error by a simple two-layer network classifier. An analyzing function of large regularity (D20) was shown to be slightly more efficient in representation and discrimination than a similar function with fewer vanishing moments (D6). In addition, energy representations computed from the standard wavelet decomposition alone (17 features) provided classification without error for the twenty-five textures included in our study. The reliability exhibited by texture signatures based on wavelet packet analysis suggests that the multiresolution properties of such transforms are beneficial for accomplishing segmentation, classification, and subtle discrimination of texture. Index Terms—Feature extraction, texture analysis, texture classification, wavelet transform, wavelet packet, neural networks.
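The two per-channel metrics named above can be sketched directly; the normalization below (treating squared coefficients as a probability distribution for the entropy) is an assumption about one standard convention, not necessarily the paper's:

```python
import numpy as np

def channel_signatures(coeffs):
    """Energy and Shannon entropy (in bits) of one wavelet-packet channel."""
    c = np.abs(coeffs).ravel()
    energy = float(np.sum(c**2))
    p = c**2 / max(energy, 1e-12)              # squared coeffs as a distribution
    entropy = float(-np.sum(p * np.log2(p + 1e-12)))
    return energy, entropy
```

Energy captures how strongly a texture excites a channel; entropy captures how evenly that energy is spread within it, so the pair discriminates textures that a single metric would confuse.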
Comparison of texture features based on Gabor filters
 IEEE Trans. on Image Processing
Abstract

Cited by 157 (9 self)
Texture features that are based on the local power spectrum obtained by a bank of Gabor filters are compared. The features differ in the type of nonlinear post-processing which is applied to the local power spectrum. The following features are considered: Gabor energy, complex moments, and grating cell operator features. The capability of the corresponding operators to produce distinct feature vector clusters for different textures is compared using two methods: the Fisher criterion and the classification result comparison. Both methods give consistent results. The grating cell operator gives the best discrimination and segmentation results. The texture detection capabilities of the operators and their robustness to non-texture features are also compared. The grating cell operator is the only one that selectively responds only to texture and does not give false response to non-texture features such as object contours. Index Terms—Classification, complex moments, discrimination,
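Of the three post-processing schemes compared, Gabor energy is the simplest to sketch: the modulus of the response of a complex (quadrature-pair) Gabor filter. Parameter names and values below are illustrative, not those of the paper:

```python
import numpy as np

def gabor_energy(image, freq, theta, sigma, size=15):
    """Gabor energy: modulus of the complex Gabor filter response,
    combining the even- and odd-symmetric (quadrature) responses."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    kernel = (np.exp(-(x**2 + y**2) / (2 * sigma**2)) *
              np.exp(2j * np.pi * freq * xr))
    resp = np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel, s=image.shape))
    return np.abs(resp)
```

A bank of such operators over several frequencies and orientations yields the local-power-spectrum feature vectors whose cluster separation the paper evaluates.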
Deformable kernels for early vision
, 1991
Abstract

Cited by 145 (10 self)
Early vision algorithms often have a first stage of linear filtering that 'extracts' from the image information at multiple scales of resolution and multiple orientations. A common difficulty in the design and implementation of such schemes is that one feels compelled to discretize coarsely the space of scales and orientations in order to reduce computation and storage costs. This discretization produces anisotropies due to a loss of translation, rotation, and scaling invariance that makes early vision algorithms less precise and more difficult to design. This need not be so: one can compute and store efficiently the response of families of linear filters defined on a continuum of orientations and scales. A technique is presented that makes it possible (1) to compute the best approximation of a given family using linear combinations of a small number of 'basis' functions, and (2) to describe all finite-dimensional families, i.e., the families of filters for which a finite-dimensional representation is possible with no error. The technique is based on singular value decomposition and may be applied to generating filters in arbitrary dimensions. Experimental results are presented that demonstrate the applicability of the technique to generating multi-orientation, multi-scale 2D edge-detection kernels. The implementation issues are also discussed.
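Point (1) can be sketched as follows: stack the sampled filter family as the rows of a matrix, take its singular value decomposition, and keep the leading right-singular vectors as the 'basis' filters. The steered derivative-of-Gaussian family below is used as the example because it is exactly two-dimensional, so two basis functions reconstruct it perfectly; all names are illustrative:

```python
import numpy as np

def svd_filter_basis(kernels, n_basis):
    """Best L2 rank-n approximation of a filter family: SVD of the matrix
    whose rows are the vectorized kernels."""
    shape = kernels[0].shape
    A = np.stack([k.ravel() for k in kernels])
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    basis = Vt[:n_basis].reshape((n_basis,) + shape)   # basis filters
    coeffs = U[:, :n_basis] * S[:n_basis]              # per-filter weights
    return basis, coeffs

def steered_gaussian_derivatives(n_angles, size=9, sigma=1.5):
    """Example family: first derivative of a Gaussian at n_angles orientations."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    gx, gy = -x * g, -y * g
    thetas = np.linspace(0, np.pi, n_angles, endpoint=False)
    return [np.cos(t) * gx + np.sin(t) * gy for t in thetas]
```

For families that are not exactly finite-dimensional, the decay of the singular values tells how many basis filters are needed for a given approximation error.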