Results 1-10 of 175
Image Quilting for Texture Synthesis and Transfer, 2001
Cited by 697 (19 self)
Abstract: We present a simple image-based method of generating novel visual appearance in which a new image is synthesized by stitching together small patches of existing images. We call this process image quilting. First, we use quilting as a fast and very simple texture synthesis algorithm which produces surprisingly good results for a wide range of textures. Second, we extend the algorithm to perform texture transfer: rendering an object with a texture taken from a different object. More generally, we demonstrate how an image can be re-rendered in the style of a different image. The method works directly on the images and does not require 3D information.
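The core patch-selection loop is simple enough to sketch. The toy below is an assumption-laden NumPy sketch (grayscale input, a single row of patches, fixed patch/overlap sizes chosen arbitrarily, and no minimum-error boundary cut, which the full method adds): each new patch is chosen by sum-of-squared differences over the overlap with what has already been placed.

```python
import numpy as np

def quilt(texture, patch=16, overlap=4, out_patches=4, rng=None):
    """Greedy image-quilting sketch: tile patches left to right, choosing each
    new patch to minimize SSD against the overlap with what is already placed.
    The published method also cuts a minimum-error boundary through the
    overlap; that step is omitted here for brevity."""
    rng = rng or np.random.default_rng(0)
    h, w = texture.shape
    step = patch - overlap
    out = np.zeros((patch, step * out_patches + overlap))
    # sample candidate source patches at random top-left corners
    ys = rng.integers(0, h - patch, 200)
    xs = rng.integers(0, w - patch, 200)
    cands = [texture[y:y + patch, x:x + patch] for y, x in zip(ys, xs)]
    out[:, :patch] = cands[0]                      # seed with an arbitrary patch
    for i in range(1, out_patches):
        x0 = i * step
        left = out[:, x0:x0 + overlap]             # already-placed overlap strip
        errs = [np.sum((c[:, :overlap] - left) ** 2) for c in cands]
        best = cands[int(np.argmin(errs))]
        out[:, x0:x0 + patch] = best               # paste; overlap is overwritten
    return out
```

Extending this to a 2-D grid of patches (with both vertical and horizontal overlaps) and adding the seam cut recovers the method's structure.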
A Parametric Texture Model based on Joint Statistics of Complex Wavelet Coefficients
International Journal of Computer Vision, 2000
Cited by 409 (13 self)
Abstract: We present a universal statistical model for texture images in the context of an overcomplete complex wavelet transform. The model is parameterized by a set of statistics computed on pairs of coefficients corresponding to basis functions at adjacent spatial locations, orientations, and scales. We develop an efficient algorithm for synthesizing random images subject to these constraints, by iteratively projecting onto the set of images satisfying each constraint, and we use this to test the perceptual validity of the model. In particular, we demonstrate the necessity of subgroups of the parameter set by showing examples of texture synthesis that fail when those parameters are removed from the set. We also demonstrate the power of our model by successfully synthesizing examples drawn from a diverse collection of artificial and natural textures.
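The "iteratively projecting onto the set of images satisfying each constraint" step can be illustrated with a deliberately tiny stand-in: instead of the paper's joint wavelet-coefficient statistics, the sketch below alternately projects a noise signal onto the sets matching just a target mean and variance. The structure (alternating projections from a random start) is the same; the constraints are not.

```python
import numpy as np

def project_mean(x, m):
    """Closest signal (in least squares) with the given mean."""
    return x - x.mean() + m

def project_var(x, v):
    """Closest signal with the given variance; preserves the mean."""
    c = x - x.mean()
    s = c.std()
    return x.mean() + c * (np.sqrt(v) / s if s > 0 else 1.0)

def synthesize(target_mean, target_var, n=1024, iters=50, seed=0):
    """Toy synthesis-by-projection: start from noise and alternately
    enforce each statistic until both hold simultaneously."""
    x = np.random.default_rng(seed).standard_normal(n)
    for _ in range(iters):
        x = project_mean(x, target_mean)
        x = project_var(x, target_var)
    return x
```

With constraints this simple the projections commute after one pass; the wavelet-statistic constraints in the actual model interact, which is why the iteration is needed.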
Dynamic Textures, 2002
Cited by 370 (18 self)
Abstract: Dynamic textures are sequences of images of moving scenes that exhibit certain stationarity properties in time; these include sea waves, smoke, foliage, whirlwinds, etc. We present a novel characterization of dynamic textures that poses the problems of modeling, learning, recognizing, and synthesizing dynamic textures on a firm analytical footing. We borrow tools from system identification to capture the "essence" of dynamic textures; we do so by learning (i.e. identifying) models that are optimal in the sense of maximum likelihood or minimum prediction error variance. For the special case of second-order stationary processes, we identify the model suboptimally in closed form. Once learned, a model has predictive power and can be used for extrapolating synthetic sequences to infinite length with negligible computational cost. We present experimental evidence that, within our framework, even low-dimensional models can capture very complex visual phenomena.
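For the second-order stationary case, the closed-form suboptimal identification reduces to a PCA-style factorization of the frame matrix. A minimal NumPy sketch (frame matrix Y of shape pixels x frames, state dimension n chosen by hand, observation and state noise ignored at synthesis time; all of these are illustration choices):

```python
import numpy as np

def learn_dynamic_texture(Y, n):
    """Suboptimal closed-form identification of the linear dynamical system
    x_{t+1} = A x_t + v_t,  y_t = C x_t + w_t:
    an SVD of the frame matrix gives the appearance basis C and the state
    trajectory X; the dynamics A are a one-step least-squares fit."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :n]                               # appearance basis (pixels, n)
    X = np.diag(s[:n]) @ Vt[:n, :]             # state trajectory (n, frames)
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])   # one-step dynamics (n, n)
    return A, C, X

def extrapolate(A, C, x0, T):
    """Roll the learned model forward (noise-free) to synthesize T frames."""
    frames, x = [], x0
    for _ in range(T):
        frames.append(C @ x)
        x = A @ x
    return np.stack(frames, axis=1)
```

Because extrapolation is just repeated small matrix-vector products, sequences of arbitrary length cost almost nothing once the model is learned, which is the point made in the abstract.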
Texture analysis and classification with tree-structured wavelet transform
IEEE Trans. Image Processing, 1993
Cited by 314 (1 self)
Abstract: One difficulty of texture analysis in the past was the lack of adequate tools to characterize different scales of textures effectively. Recent developments in multiresolution analysis such as the Gabor and wavelet transforms help to overcome this difficulty. In this research, we propose a multiresolution approach based on a modified wavelet transform called the tree-structured wavelet transform, or wavelet packets, for texture analysis and classification. The development of this new transform is motivated by the observation that a large class of natural textures can be modeled as quasi-periodic signals whose dominant frequencies are located in the middle frequency channels. With the transform, we are able to zoom into any desired frequency channels for further decomposition. In contrast, the conventional pyramid-structured wavelet transform performs further decomposition only in low frequency channels. We develop a progressive texture classification algorithm which is not only computationally attractive but also has excellent performance. The performance of our new method is compared with that of several other methods using the DCT, DST, DHT, pyramid-structured wavelet transforms, Gabor filters, and Laws filters.
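The "zoom into any desired frequency channel" idea can be sketched with a 1-D Haar wavelet packet: a channel is decomposed further only when it carries a significant share of its parent's energy, so dominant mid-frequency channels are pursued rather than only the low-pass branch. This is an illustrative sketch, not the paper's algorithm (which operates on 2-D subimages); the energy threshold `frac` is an arbitrary choice.

```python
import numpy as np

def haar_split(x):
    """One level of the orthonormal 1-D Haar transform: low- and high-pass halves."""
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def wavelet_packet(x, max_level=3, frac=0.1, path=""):
    """Tree-structured (wavelet-packet) decomposition sketch: unlike the
    pyramid transform, a band is split further whenever its energy exceeds
    a fraction `frac` of its parent's. Returns {band path: energy} leaves."""
    energy = float(np.sum(x ** 2))
    if max_level == 0 or len(x) < 2:
        return {path or "root": energy}
    lo, hi = haar_split(x)
    out = {}
    for tag, band in (("L", lo), ("H", hi)):
        e = float(np.sum(band ** 2))
        if e > frac * energy:                    # dominant channel: recurse
            out.update(wavelet_packet(band, max_level - 1, frac, path + tag))
        else:                                    # negligible channel: keep as leaf
            out[path + tag] = e
    return out
```

Because each Haar split is orthonormal, the leaf energies always sum to the input energy, whichever branches the criterion chooses to expand.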
Filters, Random Fields and Maximum Entropy . . .
International Journal of Computer Vision, 1998
Cited by 233 (17 self)
Abstract: This article presents a statistical theory for texture modeling. This theory combines filtering theory and Markov random field modeling through the maximum entropy principle, and interprets and clarifies many previous concepts and methods for texture analysis and synthesis from a unified point of view. Our theory characterizes the ensemble of images I with the same texture appearance by a probability distribution f(I) on a random field, and the objective of texture modeling is to make inference about f(I), given a set of observed texture examples. In our theory, texture modeling consists of two steps. (1) A set of filters is selected from a general filter bank to capture features of the texture; these filters are applied to the observed texture images, and the histograms of the filtered images are extracted. These histograms are estimates of the marginal distributions of f(I). This step is called feature extraction. (2) The maximum entropy principle is employed to derive a distribution p(I), which is restricted to have the same marginal distributions as those in (1). This p(I) is considered an estimate of f(I). This step is called feature fusion. A stepwise algorithm is proposed to choose filters from a general filter bank. The resulting model, called FRAME (Filters, Random fields And Maximum Entropy), is a Markov random field (MRF) model, but with a much enriched vocabulary and hence much stronger descriptive ability than the previous MRF models used for texture modeling. A Gibbs sampler is adopted to synthesize texture images by drawing typical samples from p(I); the model is thus verified by checking whether the synthesized texture images have similar visual appearances.
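Step (1), feature extraction, is easy to make concrete: convolve the image with each filter in a bank and keep the normalized response histograms as estimates of the marginal distributions of f(I). The sketch below uses a hand-rolled "valid" convolution and a trivial three-filter bank, both illustration choices; the paper selects filters stepwise from a much larger bank.

```python
import numpy as np

def filter_histograms(img, filters, bins=15):
    """FRAME's feature-extraction step in miniature: for each filter, compute
    the filtered image and return its normalized response histogram, an
    estimate of the corresponding marginal distribution of f(I)."""
    def conv2(a, k):
        # tiny 'valid' 2-D convolution, kept explicit to avoid dependencies
        kh, kw = k.shape
        h, w = a.shape[0] - kh + 1, a.shape[1] - kw + 1
        out = np.empty((h, w))
        for i in range(h):
            for j in range(w):
                out[i, j] = np.sum(a[i:i + kh, j:j + kw] * k)
        return out
    hists = []
    for k in filters:
        r = conv2(img, k)
        h, _ = np.histogram(r, bins=bins)
        hists.append(h / h.sum())          # normalized marginal estimate
    return hists

# a minimal filter bank: intensity, horizontal gradient, vertical gradient
bank = [np.array([[1.0]]),
        np.array([[-1.0, 1.0]]),
        np.array([[-1.0], [1.0]])]
```

Step (2), fitting the maximum entropy distribution and sampling it with a Gibbs sampler, is where the real computational work lies and is not sketched here.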
Minimax Entropy Principle and Its Application to Texture Modeling, 1997
Cited by 226 (46 self)
Abstract: This article proposes a general theory and methodology, called the minimax entropy principle, for building statistical models for images (or signals) in a variety of applications. This principle consists of two parts. The first is the maximum entropy principle for feature binding (or fusion): for a certain set of feature statistics, a distribution can be built to bind these feature statistics together by maximizing the entropy over all distributions that reproduce these feature statistics. The second part is the minimum entropy principle for feature selection: among all plausible sets of feature statistics, we choose the set whose maximum entropy distribution has the minimum entropy. Computational and inferential issues in both parts are addressed; in particular, a feature pursuit procedure is proposed for approximately selecting the optimal set of features. The model complexity is restricted because of the sample variation in the observed feature statistics. The minimax entropy principle is applied to texture modeling, where a novel Markov random field (MRF) model, called FRAME (Filter, Random field, And Minimax Entropy), is derived, and encouraging results are obtained in experiments on a variety of texture images. The relationship between our theory and the mechanisms of neural computation is also discussed.
A sparse texture representation using local affine regions
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005
Cited by 202 (15 self)
Abstract: This article introduces a texture representation suitable for recognizing images of textured surfaces under a wide range of transformations, including viewpoint changes and non-rigid deformations. At the feature extraction stage, a sparse set of affine Harris and Laplacian regions is found in the image. Each of these regions can be thought of as a texture element having a characteristic elliptic shape and a distinctive appearance pattern. This pattern is captured in an affine-invariant fashion via a process of shape normalization followed by the computation of two novel descriptors, the spin image and the RIFT descriptor. When affine invariance is not required, the original elliptical shape serves as an additional discriminative feature for texture recognition. The proposed approach is evaluated in retrieval and classification tasks using the entire Brodatz database and a publicly available collection of 1000 photographs of textured surfaces taken from different viewpoints.
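The spin image descriptor lends itself to a short sketch: over a shape-normalized region, build a 2-D histogram of pixel intensity against distance from the region center, which makes the result rotation-invariant by construction. This version uses hard binning on a square patch, both simplifications; the paper uses soft, Gaussian-weighted binning on affine-normalized elliptical regions.

```python
import numpy as np

def spin_image(patch, d_bins=5, i_bins=5):
    """Spin-image texture descriptor sketch: a normalized 2-D histogram of
    (distance from patch center, pixel intensity). Rotating the patch
    permutes pixels within equal-distance rings, so the histogram is
    invariant to rotation by construction."""
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    d = np.hypot(yy - cy, xx - cx)
    d = d / d.max()                                   # distances in [0, 1]
    p = patch.astype(float)
    p = (p - p.min()) / (p.max() - p.min() + 1e-12)   # intensities in [0, 1]
    hist, _, _ = np.histogram2d(d.ravel(), p.ravel(),
                                bins=[d_bins, i_bins],
                                range=[[0, 1], [0, 1]])
    return hist / hist.sum()
```

The intensity normalization also gives invariance to affine changes of illumination, matching the role the descriptor plays in the representation.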
Visual Attention
In B. Goldstein (Ed.), Blackwell Handbook of Perception, 2001
Cited by 96 (4 self)
Abstract (chapter outline):
- Spatial attention: visual selection and deployment over space
  - The attentional spotlight and spatial cueing
  - Attentional shifts, splits, and resolution
  - Object-based selection
  - The visual search paradigm
  - Top-down and bottom-up control of attention
- Inhibitory mechanisms of attention
  - Invalid cueing
  - Negative priming
  - Inhibition of return
- Temporal attention: visual selection and deployment over time
  - Single target search
  - Attentional blink and attentional dwell time
  - Repetition blindness
- Neural mechanisms of selection
  - Single-cell physiological method
  - Event-related potentials
  - Functional imaging: PET and fMRI
Texture Analysis of SAR Sea Ice Imagery using Gray Level Co-occurrence Matrices
IEEE Transactions on Geoscience and Remote Sensing, 1999
Cited by 96 (3 self)
Abstract: This paper presents a preliminary study for mapping sea ice patterns (texture) with 100-m ERS-1 synthetic aperture radar (SAR) imagery. We used gray-level co-occurrence matrices (GLCM) to quantitatively evaluate textural parameters and representations and to determine which parameter values and representations are best for mapping sea ice texture. We conducted experiments on the quantization levels of the image and the displacement and orientation values of the GLCM by examining the effects textural descriptors such as entropy have in the representation of different sea ice textures. We showed that a complete gray-level representation of the image is not necessary for texture mapping, an eight-level quantization representation is undesirable for textural representation, and the displacement factor in texture measurements is more important than orientation. In addition, we developed three GLCM implementations and ...
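A gray-level co-occurrence matrix for one displacement, plus the entropy descriptor the study examines, fits in a few lines. The quantization to a small number of levels and the single displacement below are arbitrary illustration choices; the study's point is precisely that these choices matter.

```python
import numpy as np

def glcm(img, dy, dx, levels=8):
    """Gray-level co-occurrence matrix for displacement (dy, dx) after
    quantizing the image to `levels` gray levels. P[i, j] is the joint
    frequency of level i at a pixel and level j at its displaced neighbor."""
    q = (img.astype(float) - img.min()) / (img.max() - img.min() + 1e-12)
    q = np.minimum((q * levels).astype(int), levels - 1)
    P = np.zeros((levels, levels))
    h, w = q.shape
    # visit every pixel whose displaced neighbor stays inside the image
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            P[q[y, x], q[y + dy, x + dx]] += 1
    return P / P.sum()

def glcm_entropy(P):
    """Entropy of the co-occurrence distribution, one of the textural
    descriptors evaluated in the paper."""
    nz = P[P > 0]
    return float(-np.sum(nz * np.log2(nz)))
```

A flat region yields a single-cell GLCM and zero entropy, while busier textures spread mass across the matrix, which is what makes entropy a useful discriminator between ice types.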