Results 1–10 of 211
Pyramid-Based Texture Analysis/Synthesis
1995
"... This paper describes a method for synthesizing images that match the texture appearanceof a given digitized sample. This synthesis is completely automatic and requires only the "target" texture as input. It allows generation of as much texture as desired so that any object can be covered. It can be ..."
Abstract

Cited by 385 (0 self)
This paper describes a method for synthesizing images that match the texture appearance of a given digitized sample. This synthesis is completely automatic and requires only the "target" texture as input. It allows generation of as much texture as desired so that any object can be covered. It can be used to produce solid textures for creating textured 3D objects without the distortions inherent in texture mapping. It can also be used to synthesize texture mixtures, images that look a bit like each of several digitized samples. The approach is based on a model of human texture perception, and has potential to be a practically useful tool for graphics applications.
1 Introduction
Computer renderings of objects with surface texture are more interesting and realistic than those without texture. Texture mapping [15] is a technique for adding the appearance of surface detail by wrapping or projecting a digitized texture image onto a surface. Digitized textures can be obtained from a variety ...
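The core loop of the method is alternating histogram matching: match the pixel histogram of a noise image to the target's, then match each pyramid subband's histogram, and repeat. The sketch below illustrates that loop, with two simplifications that are mine, not the paper's: a non-decimated isotropic Gaussian bandpass stack stands in for the steerable pyramid, and all function names are hypothetical.

```python
import numpy as np
from scipy import ndimage

def match_histogram(src, ref):
    """Remap `src` so its empirical distribution matches `ref` (quantile mapping)."""
    q = np.linspace(0.0, 1.0, 257)
    return np.interp(src.ravel(), np.quantile(src, q),
                     np.quantile(ref, q)).reshape(src.shape)

def bandpass_stack(img, levels):
    """Non-decimated Gaussian bandpass stack; the bands plus the lowpass
    residual sum exactly back to `img`."""
    bands, cur = [], img
    for i in range(levels):
        low = ndimage.gaussian_filter(cur, sigma=2.0 ** (i + 1))
        bands.append(cur - low)
        cur = low
    bands.append(cur)  # lowpass residual
    return bands

def synthesize_texture(target, shape=(256, 256), levels=4, iters=5, seed=0):
    """Iterate: match subband histograms to the target's, collapse the stack,
    match the pixel histogram; the noise image gradually takes on the texture."""
    rng = np.random.default_rng(seed)
    target_bands = bandpass_stack(target, levels)
    synth = match_histogram(rng.standard_normal(shape), target)
    for _ in range(iters):
        synth_bands = bandpass_stack(synth, levels)
        synth = sum(match_histogram(s, t)
                    for s, t in zip(synth_bands, target_bands))
        synth = match_histogram(synth, target)
    return synth
```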
Image retrieval: Current techniques, promising directions and open issues
Journal of Visual Communication and Image Representation, 1999
"... This paper provides a comprehensive survey of the technical achievements in the research area of image retrieval, especially contentbased image retrieval, an area that has been so active and prosperous in the past few years. The survey includes 100+ papers covering the research aspects of image fea ..."
Abstract

Cited by 353 (11 self)
This paper provides a comprehensive survey of the technical achievements in the research area of image retrieval, especially content-based image retrieval, an area that has been active and prosperous in the past few years. The survey includes 100+ papers covering the research aspects of image feature representation and extraction, multidimensional indexing, and system design, three of the fundamental bases of content-based image retrieval. Furthermore, based on the state-of-the-art technology available now and the demand from real-world applications, open research issues are identified and promising future research directions are suggested.
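As a minimal illustration of the feature-extraction/indexing split the survey describes, the sketch below retrieves images by a global color histogram and L1 distance; the survey covers far richer features and true multidimensional index structures, and all names here are hypothetical.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Global joint-RGB histogram feature; `img` is uint8 of shape (H, W, 3)."""
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 256),) * 3)
    h = h.ravel()
    return h / h.sum()

def retrieve(db_features, query_feature, k=5):
    """Indices of the k database images closest to the query (L1 distance)."""
    d = np.abs(db_features - query_feature).sum(axis=1)
    return np.argsort(d)[:k]

# Usage sketch: db_features = np.stack([color_histogram(im) for im in images]),
# then retrieve(db_features, color_histogram(query_image)).
```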
Dynamic Textures
2002
"... Dynamic textures are sequences of images of moving scenes that exhibit certain stationarity properties in time; these include seawaves, smoke, foliage, whirlwind etc. We present a novel characterization of dynamic textures that poses the problems of modeling, learning, recognizing and synthesizing ..."
Abstract

Cited by 286 (15 self)
Dynamic textures are sequences of images of moving scenes that exhibit certain stationarity properties in time; these include sea waves, smoke, foliage, whirlwinds, etc. We present a novel characterization of dynamic textures that poses the problems of modeling, learning, recognizing and synthesizing dynamic textures on a firm analytical footing. We borrow tools from system identification to capture the "essence" of dynamic textures; we do so by learning (i.e. identifying) models that are optimal in the sense of maximum likelihood or minimum prediction error variance. For the special case of second-order stationary processes, we identify the model suboptimally in closed form. Once learned, a model has predictive power and can be used for extrapolating synthetic sequences to infinite length with negligible computational cost. We present experimental evidence that, within our framework, even low-dimensional models can capture very complex visual phenomena.
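The closed-form suboptimal identification reduces, in its simplest form, to an SVD plus a least-squares fit of a linear dynamical system x_{t+1} = A x_t + B v_t, y_t = C x_t + mean. The sketch below follows that recipe under assumptions of mine (Gaussian driving noise, a fixed state dimension n); it is not the authors' reference implementation.

```python
import numpy as np

def learn_lds(frames, n=20):
    """SVD gives the output map C and the states; least squares gives the
    dynamics A; the innovation covariance gives the noise gain B.
    `frames` is an array of shape (num_frames, H, W)."""
    T = frames.shape[0]
    Y = frames.reshape(T, -1).T.astype(float)      # pixels x time
    m = Y.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(Y - m, full_matrices=False)
    C = U[:, :n]
    X = np.diag(s[:n]) @ Vt[:n]                    # hidden states, n x T
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])       # one-step least squares
    E = X[:, 1:] - A @ X[:, :-1]                   # innovations
    Q = E @ E.T / (T - 1)
    B = np.linalg.cholesky(Q + 1e-8 * np.eye(n))   # noise gain (jittered)
    return A, B, C, m, X[:, 0]

def synthesize(A, B, C, m, x0, steps, frame_shape, seed=0):
    """Run the learned system forward to extrapolate new frames."""
    rng = np.random.default_rng(seed)
    x, out = x0, []
    for _ in range(steps):
        x = A @ x + B @ rng.standard_normal(len(x))
        out.append((C @ x + m.ravel()).reshape(frame_shape))
    return np.array(out)
```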
Geodesic Active Regions and Level Set Methods for Supervised Texture Segmentation
International Journal of Computer Vision, 2002
"... This paper presents a novel variational framework to deal with frame partition problems in Computer Vision. This framework exploits boundary and regionbased segmentation modules under a curvebased optimization objective function. The task of supervised texture segmentation is considered to demonst ..."
Abstract

Cited by 234 (8 self)
This paper presents a novel variational framework to deal with frame partition problems in computer vision. This framework exploits boundary- and region-based segmentation modules under a curve-based optimization objective function. The task of supervised texture segmentation is considered to demonstrate the potential of the proposed framework. The textured feature space is generated by filtering the given textured images using isotropic and anisotropic filters, and analyzing their responses as multi-component conditional probability density functions. The texture segmentation is obtained by unifying region- and boundary-based information as an improved Geodesic Active Contour Model. The defined objective function is minimized using a gradient-descent method where a level set approach is used to implement the obtained PDE. According to this PDE, the curve propagation towards the final solution is guided by boundary- and region-based segmentation forces, and is constrained by a regularity force. The level set implementation is performed using a fast front propagation algorithm where topological changes are naturally handled. The performance of our method is demonstrated on a variety of synthetic and real textured frames.
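A minimal sketch of the level-set machinery, assuming a piecewise-constant two-mean region force in place of the paper's learned texture likelihoods, and a plain explicit Euler update rather than their fast front propagation algorithm; function and parameter names are mine.

```python
import numpy as np

def region_level_set(img, iters=300, dt=0.4, mu=0.5):
    """Two-region level-set segmentation: a region force (grow the region
    where its mean fits better) plus a curvature regularity force.
    Convention: inside = {phi > 0}."""
    H, W = img.shape
    yy, xx = np.mgrid[:H, :W]
    phi = min(H, W) / 4.0 - np.hypot(yy - H / 2, xx - W / 2)  # init: disk
    for _ in range(iters):
        inside = phi > 0
        c_in = img[inside].mean() if inside.any() else 0.0
        c_out = img[~inside].mean() if (~inside).any() else 0.0
        # positive where the pixel is better explained by the inside mean
        region = (img - c_out) ** 2 - (img - c_in) ** 2
        gy, gx = np.gradient(phi)
        norm = np.hypot(gx, gy) + 1e-8
        kappa = np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)
        phi = phi + dt * (region + mu * kappa)  # explicit Euler descent step
    return phi > 0  # boolean mask of the segmented region
```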
Filters, Random Fields and Maximum Entropy . . .
International Journal of Computer Vision, 1998
"... This article presents a statistical theory for texture modeling. This theory combines filtering theory and Markov random field modeling through the maximum entropy principle, and interprets and clarifies many previous concepts and methods for texture analysis and synthesis from a unified point of vi ..."
Abstract

Cited by 193 (17 self)
This article presents a statistical theory for texture modeling. This theory combines filtering theory and Markov random field modeling through the maximum entropy principle, and interprets and clarifies many previous concepts and methods for texture analysis and synthesis from a unified point of view. Our theory characterizes the ensemble of images I with the same texture appearance by a probability distribution f(I) on a random field, and the objective of texture modeling is to make inference about f(I), given a set of observed texture examples. In our theory, texture modeling consists of two steps. (1) A set of filters is selected from a general filter bank to capture features of the texture; these filters are applied to observed texture images, and the histograms of the filtered images are extracted. These histograms are estimates of the marginal distributions of f(I). This step is called feature extraction. (2) The maximum entropy principle is employed to derive a distribution p(I), which is restricted to have the same marginal distributions as those in (1). This p(I) is considered as an estimate of f(I). This step is called feature fusion. A stepwise algorithm is proposed to choose filters from a general filter bank. The resulting model, called FRAME (Filters, Random fields And Maximum Entropy), is a Markov random field (MRF) model, but with a much enriched vocabulary and hence much stronger descriptive ability than previous MRF models used for texture modeling. A Gibbs sampler is adopted to synthesize texture images by drawing typical samples from p(I); the model is thus verified by checking whether the synthesized texture images have similar visual appearances.
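Step (1), feature extraction, is easy to make concrete: filter the image and histogram the responses. The sketch below does exactly that with a tiny hand-picked bank; the paper selects from a much larger bank of Gabor-like filters, and step (2) additionally fits Lagrange multipliers and runs a Gibbs sampler, which is omitted here. The helper names and the assumption that the image is scaled to roughly [-1, 1] are mine.

```python
import numpy as np
from scipy import ndimage

def filter_bank():
    """A tiny illustrative bank: intensity, two gradients, a Laplacian."""
    delta = np.zeros((3, 3))
    delta[1, 1] = 1.0
    dx = np.array([[0, 0, 0], [-1, 0, 1], [0, 0, 0]], float) / 2.0
    lap = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)
    return [delta, dx, dx.T, lap]

def marginal_histograms(img, filters, bins=15, lim=1.0):
    """Convolve, then histogram each filter response; the histograms are
    the estimates of the marginal distributions of f(I)."""
    edges = np.linspace(-lim, lim, bins + 1)
    hists = []
    for f in filters:
        resp = ndimage.convolve(img, f, mode='wrap')
        h, _ = np.histogram(np.clip(resp, -lim, lim), bins=edges)
        hists.append(h / h.sum())
    return np.array(hists)  # shape: (n_filters, bins)

# Step (2), feature fusion, then fits multipliers so that
# p(I) = (1/Z) exp(-sum_k <lambda_k, H_k(I)>) reproduces these histograms,
# and samples from p(I) with a Gibbs sampler (omitted).
```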
Minimax Entropy Principle and Its Application to Texture Modeling
1997
"... This article proposes a general theory and methodology, called the minimax entropy principle, for building statistical models for images (or signals) in a variety of applications. This principle consists of two parts. The first is the maximum entropy principle for feature binding (or fusion): for a ..."
Abstract

Cited by 193 (39 self)
This article proposes a general theory and methodology, called the minimax entropy principle, for building statistical models for images (or signals) in a variety of applications. This principle consists of two parts. The first is the maximum entropy principle for feature binding (or fusion): for a certain set of feature statistics, a distribution can be built to bind these feature statistics together by maximizing the entropy over all distributions that reproduce them. The second part is the minimum entropy principle for feature selection: among all plausible sets of feature statistics, we choose the set whose maximum entropy distribution has the minimum entropy. Computational and inferential issues in both parts are addressed; in particular, a feature pursuit procedure is proposed for approximately selecting the optimal set of features. The model complexity is restricted because of the sample variation in the observed feature statistics. The minimax entropy principle is applied to texture modeling, where a novel Markov random field (MRF) model, called FRAME (Filter, Random field, And Minimax Entropy), is derived, and encouraging results are obtained in experiments on a variety of texture images. The relationship between our theory and the mechanisms of neural computation is also discussed.
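The feature pursuit step can be sketched as greedy selection: pick the filter whose observed marginal histogram deviates most from the same histogram under the current model's samples, since a large gap signals a large entropy decrease when the filter is added. The fragment below assumes histogram arrays of shape (n_filters, bins), e.g. as produced by a helper like the marginal_histograms sketch above; the L1 gap is a stand-in for the paper's exact criterion.

```python
import numpy as np

def pursue_next_filter(obs_hists, syn_hists, chosen):
    """One greedy feature-pursuit step (sketch): among filters not yet
    chosen, return the index whose observed histogram differs most (L1)
    from its histogram under samples of the current model."""
    gaps = np.abs(obs_hists - syn_hists).sum(axis=1)  # per-filter L1 gap
    gaps[list(chosen)] = -np.inf                      # mask already-chosen
    return int(np.argmax(gaps))

# Usage sketch: start with chosen = set(); after each synthesis round,
# recompute syn_hists and call chosen.add(pursue_next_filter(...)).
```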
Statistical Models for Images: Compression, Restoration and Synthesis
In 31st Asilomar Conference on Signals, Systems and Computers, 1997
"... this paper, we examine the problem of decomposing digitized images, through linear and/or nonlinear transformations, into statistically independent components. The classical approach to such a problem is Principal Components Analysis (PCA), also known as the KarhunenLoeve (KL) or Hotelling transfor ..."
Abstract

Cited by 138 (33 self)
In this paper, we examine the problem of decomposing digitized images, through linear and/or nonlinear transformations, into statistically independent components. The classical approach to such a problem is Principal Components Analysis (PCA), also known as the Karhunen-Loève (KL) or Hotelling transform. This is a linear transform that removes second-order dependencies between input pixels. The most well-known description of image statistics is that their power spectra take the form of a power law [e.g., 20, 11, 24]. Coupled with a constraint of translation invariance, this suggests that the Fourier transform is an appropriate PCA representation. Fourier and related representations are widely used in image processing applications.
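A compact way to see the PCA claim: the principal components of image patches are the eigenvectors of the patch covariance, and for stationary (translation-invariant) statistics that covariance is approximately circulant, so the leading components approach Fourier basis functions. A sketch under those assumptions, with hypothetical names:

```python
import numpy as np

def patch_pca(img, p=8, n_patches=5000, seed=0):
    """PCA of p-x-p patches: eigenvectors of the patch covariance remove
    second-order (pairwise linear) dependencies between pixels. For
    stationary statistics the eigenvalue spectrum follows the power-law
    behavior noted in the text."""
    rng = np.random.default_rng(seed)
    H, W = img.shape
    ys = rng.integers(0, H - p, size=n_patches)
    xs = rng.integers(0, W - p, size=n_patches)
    X = np.stack([img[y:y + p, x:x + p].ravel() for y, x in zip(ys, xs)])
    X = X - X.mean(axis=0)
    cov = X.T @ X / len(X)
    evals, evecs = np.linalg.eigh(cov)  # ascending order
    return evals[::-1], evecs[:, ::-1]  # strongest components first
```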
Modeling the Joint Statistics of Images in the Wavelet Domain
In Proc. SPIE, 44th Annual Meeting, 1999
"... I describe a statistical model for natural photographic images, when decomposed in a multiscale wavelet basis. In particular, I examine both the marginal and pairwise joint histograms of wavelet coefficients at adjacent spatial locations, orientations, and spatial scales. Although the histograms ar ..."
Abstract

Cited by 98 (3 self)
I describe a statistical model for natural photographic images, when decomposed in a multiscale wavelet basis. In particular, I examine both the marginal and pairwise joint histograms of wavelet coefficients at adjacent spatial locations, orientations, and spatial scales. Although the histograms are highly non-Gaussian, they are nevertheless well described using fairly simple parameterized density models.
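One of the simple parameterized densities commonly used for wavelet marginals is the generalized Gaussian p(x) ∝ exp(-|x/s|^p). The sketch below fits it by moment matching, using the closed-form kurtosis Γ(1/p)Γ(5/p)/Γ(3/p)² of that family; the function name is mine, and the root bracket assumes heavy-tailed data (sample kurtosis above roughly 1.9), which wavelet subbands of natural images typically satisfy.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def fit_generalized_gaussian(coeffs):
    """Moment-match p(x) ~ exp(-|x/s|^p): the exponent p comes from the
    sample kurtosis via kurt = gamma(1/p)*gamma(5/p)/gamma(3/p)**2, and
    the scale s from var = s**2 * gamma(3/p)/gamma(1/p)."""
    x = np.asarray(coeffs, float).ravel()
    var = x.var()
    kurt = np.mean(x ** 4) / var ** 2
    f = lambda p: gamma(1 / p) * gamma(5 / p) / gamma(3 / p) ** 2 - kurt
    p = brentq(f, 0.1, 10.0)  # kurtosis is monotone decreasing in p
    s = np.sqrt(var * gamma(1 / p) / gamma(3 / p))
    return p, s

# Example (hypothetical): fit the finest-scale diagonal subband.
# import pywt
# cA, (cH, cV, cD) = pywt.dwt2(img, 'db2')
# print(fit_generalized_gaussian(cD))  # p well below 2 => heavy tails
```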
Texture characterization via joint statistics of wavelet coefficient magnitudes
1998
"... We present a parametric statistical characterization of texture images in the context of an overcomplete complex wavelet frame. The characterization consists of the local autocorrelation of the coefficients in each subband, the local autocorrelation of the cofficent magnitudes, and the crosscorrelat ..."
Abstract

Cited by 93 (7 self)
We present a parametric statistical characterization of texture images in the context of an overcomplete complex wavelet frame. The characterization consists of the local autocorrelation of the coefficients in each subband, the local autocorrelation of the coefficient magnitudes, and the cross-correlation of coefficient magnitudes at all orientations and adjacent spatial scales. We develop an efficient algorithm for sampling from an implicit probability density conforming to these statistics, and demonstrate its effectiveness in synthesizing artificial and natural texture images. Many applications in image processing, computer graphics, and computer vision can benefit from a statistical model for visual images. But the dimensionality of the space of images is overwhelmingly large, and thus density inference is very difficult unless one makes several restrictive assumptions.
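The descriptors can be sketched directly: take subband magnitudes, then measure their correlations across orientations and across spatial shifts. The fragment below substitutes a real separable wavelet (via PyWavelets) for the paper's overcomplete complex frame, so it is illustrative only; names are hypothetical.

```python
import numpy as np
import pywt

def magnitude_statistics(img, wavelet='db2', lags=3):
    """Cross-correlations of subband magnitudes across the three detail
    orientations, plus each magnitude image's local autocorrelation."""
    _, (ch, cv, cd) = pywt.dwt2(np.asarray(img, float), wavelet)
    mags = [np.abs(b) - np.abs(b).mean() for b in (ch, cv, cd)]
    # orientation-pair cross-correlations of the centered magnitudes
    cross = np.array([[(a * b).mean() for b in mags] for a in mags])
    # local autocorrelation of each magnitude image up to `lags` shifts
    auto = np.array([[(m * np.roll(m, (dy, dx), axis=(0, 1))).mean()
                      for dy in range(lags) for dx in range(lags)]
                     for m in mags])
    return cross, auto
```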