Results 1–10 of 46
Deterministic Annealing for Clustering, Compression, Classification, Regression, and Related Optimization Problems
 Proceedings of the IEEE, 1998
Abstract

Cited by 247 (11 self)
this paper. Let us place it within the neural network perspective, and particularly that of learning. The area of neural networks has greatly benefited from its unique position at the crossroads of several diverse scientific and engineering disciplines including statistics and probability theory, physics, biology, control and signal processing, information theory, complexity theory, and psychology (see [45]). Neural networks have provided a fertile soil for the infusion (and occasionally confusion) of ideas, as well as a meeting ground for comparing viewpoints, sharing tools, and renovating approaches. It is within the ill-defined boundaries of the field of neural networks that researchers in traditionally distant fields have come to the realization that they have been attacking fundamentally similar optimization problems.
Theoretical Foundations of Transform Coding
2001
Abstract

Cited by 67 (6 self)
This article explains the fundamental principles of transform coding; these principles apply equally well to images, audio, video, and various other types of data, so abstract formulations are given. Much of the material presented here is adapted from [14, Chap. 2, 4]. The details on wavelet transform-based image compression and the JPEG2000 image compression standard are given in the following two articles of this special issue [38], [37].
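The transform coding pipeline the article describes — an invertible transform, scalar quantization of the coefficients, then (entropy coding aside) reconstruction through the inverse transform — can be sketched in a few lines. This is a generic illustration, not code from the article; the 8×8 block size, orthonormal DCT-II basis, and uniform quantizer step are illustrative choices:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis as an n x n matrix (rows are basis vectors).
    k = np.arange(n)
    T = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    T[0, :] /= np.sqrt(n)
    T[1:, :] *= np.sqrt(2.0 / n)
    return T

def transform_code(block, step):
    # Analysis transform, uniform scalar quantization, synthesis transform.
    T = dct_matrix(block.shape[0])
    coeffs = T @ block @ T.T
    quantized = np.round(coeffs / step) * step
    return T.T @ quantized @ T

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))
xhat = transform_code(x, step=0.5)
```

Because the transform is orthonormal, the squared reconstruction error equals the squared quantization error in the coefficient domain, which is why the coding problem reduces to scalar quantization of transform coefficients.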
Factorial coding of natural images: how effective are linear models in removing higher-order dependencies?
 JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A, 2006
Abstract

Cited by 21 (6 self)
The performance of unsupervised learning models for natural images is evaluated quantitatively by means of information theory. We estimate the gain in statistical independence (the multi-information reduction) achieved with independent component analysis (ICA), principal component analysis (PCA), zero-phase whitening, and predictive coding. Predictive coding is translated into the transform coding framework, where it can be characterized by the constraint of a triangular filter matrix. A randomly sampled whitening basis and the Haar wavelet are included in the comparison as well. The comparison of all these methods is carried out for different patch sizes, ranging from 2×2 to 16×16 pixels. In spite of large differences in the shape of the basis functions, we find only small differences in the multi-information between all decorrelation transforms (5% or less) for all patch sizes. Among the second-order methods, PCA is optimal for small patch sizes and predictive coding performs best for large patch sizes. The extra gain achieved with ICA is always less than 2%. In conclusion, the 'edge filters' found with ICA lead to only a surprisingly small improvement in terms of their actual objective.
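A minimal sketch of the second-order part of this comparison, assuming Gaussian marginals so that multi-information reduces to 0.5·(Σᵢ log var(xᵢ) − log det C). The data here are toy linearly mixed Laplacian sources, not natural image patches, and PCA whitening is the only decorrelation transform shown; capturing the residual higher-order dependencies that ICA targets would require non-Gaussian entropy estimates:

```python
import numpy as np

def gaussian_multi_information(X):
    # Second-order (Gaussian) multi-information in nats:
    # 0.5 * (sum of log marginal variances - log det of the covariance).
    C = np.cov(X, rowvar=False)
    sign, logdet = np.linalg.slogdet(C)
    return 0.5 * (np.sum(np.log(np.diag(C))) - logdet)

rng = np.random.default_rng(1)
# Toy "patches": independent Laplacian sources mixed by a random matrix.
S = rng.laplace(size=(20000, 4))
A = rng.normal(size=(4, 4))
X = S @ A.T

# PCA whitening: rotate to principal axes and equalize variances.
C = np.cov(X, rowvar=False)
evals, evecs = np.linalg.eigh(C)
W = evecs / np.sqrt(evals)   # columns scaled by inverse root eigenvalues
Y = X @ W

before = gaussian_multi_information(X)
after = gaussian_multi_information(Y)   # ~0: all correlations removed
```

Any whitening basis drives the second-order term to zero; the paper's point is that the further (higher-order) reduction ICA buys on natural images is small.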
On Optimal Entropy-Constrained Scalar Quantization
2000
Abstract

Cited by 19 (6 self)
Optimal scalar quantization subject to an entropy constraint is studied. First, the problem of analytically finding an optimal entropy-constrained scalar quantizer (ECSQ) is considered. For a wide class of difference distortion measures, including rth-power distortions with r > 0, it is proved that if the source is uniformly distributed over an interval, then for any entropy constraint R (in bits), an optimal quantizer has N = ⌈2^R⌉ interval cells such that N − 1 cells have equal length d and one cell has length c ≤ d. Based on this result, a parametric representation of the minimum achievable distortion D_h(R) as a function of the entropy constraint R is obtained for a uniform source. Contrary to earlier expectations, the D_h(R) curve turns out to be nonconvex in general. In particular, for the squared error distortion it is shown that D_h(R) is a piecewise concave function. The structural properties of optimal ECSQs for more general source distributions are also investigated. In...
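The optimal cell structure for the uniform source can be evaluated directly: for a source uniform on [0, 1], a quantizer with N − 1 cells of length d and one cell of length c has closed-form entropy and mean squared error (midpoint reproduction levels are the cell centroids here). A sketch, not taken from the paper, that traces one point of the (R, D) trade-off per choice of c:

```python
import numpy as np

def ecsq_uniform(N, c):
    # Source uniform on [0, 1]; N cells: N-1 of equal length d and one of
    # length c <= d, with (N - 1) * d + c = 1.
    d = (1.0 - c) / (N - 1)
    lengths = np.array([d] * (N - 1) + [c])
    p = lengths[lengths > 0]          # cell probabilities for a uniform source
    H = -np.sum(p * np.log2(p))       # output entropy in bits
    D = np.sum(lengths ** 3) / 12.0   # MSE with midpoint reproduction levels
    return H, D

# c = d recovers the ordinary N-level uniform quantizer (H = log2 N).
H_eq, D_eq = ecsq_uniform(4, 0.25)
# Shrinking the odd cell lowers the entropy at the cost of extra distortion.
H_sm, D_sm = ecsq_uniform(4, 0.1)
```

Sweeping c through (0, d] for each N traces the parametric D_h(R) curve whose nonconvexity the paper establishes.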
Asymptotic Analysis of Optimal Fixed-Rate Uniform Scalar Quantization
 IEEE Trans. Inform. Theory, 2000
Abstract

Cited by 18 (4 self)
This paper studies the asymptotic characteristics of uniform scalar quantizers that are optimal with respect to mean squared error. It is shown that when a symmetric source density with infinite support is sufficiently well behaved, the optimal step size Δ_N for symmetric uniform scalar quantization decreases as (2σ/N) V^{-1}(1/(6N²)), where N is the number of quantization levels, σ² is the source variance, and V^{-1} is the inverse of V(y) = y^{-1} ∫_y^∞ P(σ^{-1}X > x) dx. Equivalently, the optimal support length NΔ_N increases as 2σ V^{-1}(1/(6N²)). Granular distortion is asymptotically well approximated by Δ_N²/12, and the ratio of overload to granular distortion converges to a function of the limit t ≡ lim_{y→∞} y^{-1} E[X | X > y], provided, as usually happens, that t exists. When it does, its value is related to the number of finite moments of the source density; an asymptotic formula for the overall distortion D_N is obtained; and t = 1 is both necessary...
Nonlinear Extraction of Independent Components of Natural Images Using Radial Gaussianization
2009
Abstract

Cited by 14 (4 self)
We consider the problem of efficiently encoding a signal by transforming it to a new representation whose components are statistically independent. A widely studied linear solution, known as independent component analysis (ICA), exists for the case when the signal is generated as a linear transformation of independent non-Gaussian sources. Here, we examine a complementary case, in which the source is non-Gaussian and elliptically symmetric. In this case, no invertible linear transform suffices to decompose the signal into independent components, but we show that a simple nonlinear transformation, which we call radial gaussianization (RG), is able to remove all dependencies. We then examine this methodology in the context of natural image statistics. We first show that distributions of spatially proximal bandpass filter responses are better described as elliptical than as linearly transformed independent sources. Consistent with this, we demonstrate that the reduction in dependency achieved by applying RG to either nearby pairs or blocks of bandpass filter responses is significantly greater than that achieved by ICA. Finally, we show that the RG transformation may be closely approximated by divisive normalization, which has been used to model the nonlinear response properties of visual neurons.
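A minimal sketch of the idea: for whitened, spherically symmetric data, RG rescales each sample's radius so that the radius distribution matches that of a standard Gaussian, leaving directions untouched. The empirical-quantile matching used here (against a sampled reference of Gaussian radii) is one simple way to realize the radial map, not the paper's estimator; the Student-t radial profile of the toy data is likewise an arbitrary choice:

```python
import numpy as np

def radial_gaussianize(X, rng):
    # Assumes X is zero-mean and (after whitening) spherically symmetric.
    # Map each sample's radius to the matching quantile of the radius
    # distribution of a standard Gaussian of the same dimension.
    r = np.linalg.norm(X, axis=1)
    ref = np.sort(np.linalg.norm(rng.normal(size=X.shape), axis=1))
    ranks = (np.argsort(np.argsort(r)) + 0.5) / len(r)
    r_new = np.interp(ranks, (np.arange(len(ref)) + 0.5) / len(ref), ref)
    return X * (r_new / r)[:, None]

rng = np.random.default_rng(2)
n, d = 50000, 2
# Elliptical but non-Gaussian toy data: uniform directions, heavy-tailed radius.
u = rng.normal(size=(n, d))
u /= np.linalg.norm(u, axis=1, keepdims=True)
X = u * np.abs(rng.standard_t(df=3, size=n))[:, None]
Y = radial_gaussianize(X, rng)   # marginals of Y are close to Gaussian
```

Because only the radius is modified, the transform is invertible and preserves the elliptical symmetry that makes the factorization possible in the first place.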
Decision-Theoretic Saliency: Computational Principles, Biological Plausibility, and Implications for Neurophysiology and Psychophysics
2009
Abstract

Cited by 13 (8 self)
A decision-theoretic formulation of visual saliency, first proposed for top-down processing (object recognition) (Gao & Vasconcelos, 2005a), is extended to the problem of bottom-up saliency. Under this formulation, optimality is defined in the minimum-probability-of-error sense, under a constraint of computational parsimony. The saliency of the visual features at a given location of the visual field is defined as the power of those features to discriminate between the stimulus at the location and a null hypothesis. For bottom-up saliency, this is the set of visual features that surround the location under consideration. Discrimination is defined in an information-theoretic sense, and the optimal saliency detector is derived for a class of stimuli that complies with known statistical properties of natural images. It is shown that, under the assumption that saliency is driven by linear filtering, the optimal detector consists of what is usually referred to as the standard architecture of V1: a cascade of linear filtering, divisive normalization, rectification, and spatial pooling. The optimal detector is also shown to replicate the fundamental properties of the psychophysics of saliency: stimulus pop-out, saliency asymmetries for stimulus presence versus absence, disregard of feature conjunctions, and Weber's law. Finally, it is shown that the optimal saliency architecture can be applied to the solution of generic inference problems. In particular, for the class of stimuli studied, it performs the three fundamental operations of statistical inference: assessment of probabilities, implementation of the Bayes decision rule, and feature selection.
Low resolution scalar quantization for Gaussian sources and squared error
 IEEE Trans. Info. Theory, 2006
Abstract

Cited by 10 (1 self)
This report considers low-resolution scalar quantization. Specifically, it considers entropy-constrained scalar quantization for memoryless Gaussian and Laplacian sources with both squared and absolute error distortion measures. The slope of the operational rate-distortion functions of scalar quantization for these sources and distortion measures is found. It is shown that in three of the four cases this slope equals the slope of the corresponding Shannon rate-distortion function, which implies that asymptotic low-resolution scalar quantization with entropy coding is an optimal coding technique in these three cases. For the case of a Gaussian source and absolute error distortion measure, however, the slope at rate zero of the operational rate-distortion function of scalar quantization is infinite, and hence does not match the slope of the corresponding Shannon rate-distortion function. Consequently, scalar quantization is not an optimal coding technique for Gaussian sources and the absolute error distortion measure. The results are obtained via analysis of uniform and binary scalar quantizers, which shows that at low resolution their operational rate-distortion functions, in all four cases, are the same as the corresponding operational rate-distortion functions of scalar quantization in general. Lastly, the slope of the Shannon rate-distortion function (the function itself is not known) at rate zero is found for a Laplacian source and squared error distortion measure.
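The Gaussian/squared-error case can be probed with a one-bit quantizer: as the threshold t grows, the output entropy H tends to zero and the slope (σ² − D)/H should approach 2 ln 2 ≈ 1.386, the magnitude of the Shannon rate-distortion slope at R = 0 (since D(R) = σ²2^(−2R)). The sketch below is an illustration under this setup, not the report's proof; the thresholds and integration grid are arbitrary, and convergence in t is slow:

```python
import numpy as np

def binary_quantizer(t, grid):
    # One-bit quantizer for a standard Gaussian: cells (-inf, t] and (t, inf),
    # reproduction points at the conditional means (optimal for squared error).
    dx = grid[1] - grid[0]
    pdf = np.exp(-grid ** 2 / 2) / np.sqrt(2 * np.pi)
    upper = grid > t
    p = np.sum(pdf[upper]) * dx
    a = np.sum((grid * pdf)[upper]) * dx / p
    b = np.sum((grid * pdf)[~upper]) * dx / (1 - p)
    D = 1.0 - p * a ** 2 - (1 - p) * b ** 2              # MSE (variance is 1)
    H = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))     # rate in bits
    return H, D

grid = np.linspace(-10.0, 10.0, 400001)
results = [binary_quantizer(t, grid) for t in (2.0, 3.0, 4.0)]
# Low-rate slope: distortion saved per bit spent, increasing toward 2 ln 2.
slopes = [(1.0 - D) / H for H, D in results]
```

Pushing the threshold further out moves the operating point toward R = 0 while the slope creeps up toward the Shannon value, which is the sense in which low-resolution ECSQ is asymptotically optimal here.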
Statistical model, analysis and approximation of rate-distortion function in MPEG-4 FGS videos
 Proc. of SPIE International Conference on Visual Communication and Image Processing (VCIP'05), 2005
Abstract

Cited by 10 (1 self)
Fine-granular scalability (FGS) has been accepted as the streaming profile of MPEG-4 to provide a flexible foundation for scaling the enhancement layer (EL) to accommodate variable network capacity. To support smooth-quality reconstruction under different rate constraints during transmission, it is important to obtain the actual rate-distortion functions (RDF) or curves (RDC) of each frame in MPEG-4 FGS videos. In this paper, we first use zero-mean generalized Gaussian distributions (GGD) to model the distributions of the 64 (8×8) different discrete cosine transform (DCT) coefficients of the FGS EL in a frame. Second, we decompose and analyze the FGS coding system using quantization theory and rate-distortion theory, and then combine the analyses of the components to form a complete RDF of the EL. Guided by this analysis, we finally introduce a simple and effective rate-distortion (RD) model to approximate the actual RDF of the EL in MPEG-4 FGS videos. Extensive experimental results show that our statistical model, composition, and approximation of the actual RDF are efficient and effective. Moreover, our analysis methods are general, and the RDF model can also be used in related RD areas such as rate control algorithms. Keywords: fine-granular scalability (FGS), rate-distortion function (RDF), source model, quantization theory
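Fitting a zero-mean GGD to a band of DCT coefficients usually comes down to estimating the shape parameter β (β = 2 is Gaussian, β = 1 Laplacian). One standard moment-matching route, shown below on synthetic GGD samples rather than real DCT data, solves E|X|/√E[X²] = Γ(2/β)/√(Γ(1/β)Γ(3/β)) for β by bisection; the paper does not necessarily use this particular estimator:

```python
import math
import numpy as np

def ggd_ratio(beta):
    # For a zero-mean GGD with shape beta:
    # E|X| / sqrt(E[X^2]) = Gamma(2/b) / sqrt(Gamma(1/b) * Gamma(3/b)),
    # a monotonically increasing function of beta.
    return math.gamma(2 / beta) / math.sqrt(
        math.gamma(1 / beta) * math.gamma(3 / beta))

def fit_ggd_shape(x, lo=0.2, hi=4.0):
    # Moment-matching fit of the shape parameter by bisection.
    r = np.mean(np.abs(x)) / np.sqrt(np.mean(x ** 2))
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(3)
# Synthetic "DCT coefficients": GGD samples with known shape beta = 0.8,
# drawn via |X|^beta ~ Gamma(1/beta) with a random sign.
beta_true = 0.8
g = rng.gamma(shape=1 / beta_true, scale=1.0, size=100000)
x = rng.choice([-1.0, 1.0], size=100000) * g ** (1 / beta_true)
beta_hat = fit_ggd_shape(x)
```

Typical reported shapes for DCT enhancement-layer coefficients are well below 2, which is why a GGD rather than a plain Gaussian or Laplacian model is used.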