Results 1–10 of 11
Image Quality Assessment: From Error Visibility to Structural Similarity
 IEEE TRANSACTIONS ON IMAGE PROCESSING
, 2004
Abstract

Cited by 634 (44 self)
Objective methods for assessing perceptual image quality have traditionally attempted to quantify the visibility of errors between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
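The Structural Similarity Index described above compares luminance, contrast, and structure between two images. A minimal single-window sketch in Python follows; note the published method computes these statistics locally over sliding windows and averages the resulting map, whereas this sketch takes statistics over the whole image. The function name and default constants are the commonly used conventions, not code from the paper.

```python
import numpy as np

def ssim(x, y, L=255, K1=0.01, K2=0.03):
    """Single-window SSIM sketch between two grayscale images.

    The full method of the paper computes these statistics over local
    sliding windows and averages; here they are global, for brevity.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2      # small stabilizing constants
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()  # cross-covariance
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
```

Identical images score 1; any distortion that perturbs the local means, variances, or cross-covariance lowers the score, which is what distinguishes this from a pure error-visibility measure.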
Image Quality Assessment: From Error Measurement to Structural Similarity
 IEEE TRANS. IMAGE PROCESSING
, 2004
Abstract

Cited by 100 (14 self)
Objective methods for assessing perceptual image quality traditionally attempt to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MatLab implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
Video Quality Assessment Based on Structural Distortion Measurement
, 2004
Abstract

Cited by 88 (8 self)
Objective image and video quality measures play important roles in a variety of image and video processing applications, such as compression, communication, printing, analysis, registration, restoration, enhancement and watermarking. Most proposed quality assessment approaches in the literature are error sensitivity-based methods. In this paper, we follow a new philosophy in designing image and video quality metrics, which uses structural distortion as an estimate of perceived visual distortion. A computationally efficient approach is developed for full-reference (FR) video quality assessment. The algorithm is tested on the video quality experts group (VQEG) Phase I FR-TV test data set.
Nonlinear Extraction of Independent Components of Natural Images Using Radial Gaussianization
, 2009
Abstract

Cited by 14 (3 self)
We consider the problem of efficiently encoding a signal by transforming it to a new representation whose components are statistically independent. A widely studied linear solution, known as independent component analysis (ICA), exists for the case when the signal is generated as a linear transformation of independent non-Gaussian sources. Here, we examine a complementary case, in which the source is non-Gaussian and elliptically symmetric. In this case, no invertible linear transform suffices to decompose the signal into independent components, but we show that a simple nonlinear transformation, which we call radial gaussianization (RG), is able to remove all dependencies. We then examine this methodology in the context of natural image statistics. We first show that distributions of spatially proximal bandpass filter responses are better described as elliptical than as linearly transformed independent sources. Consistent with this, we demonstrate that the reduction in dependency achieved by applying RG to either nearby pairs or blocks of bandpass filter responses is significantly greater than that achieved by ICA. Finally, we show that the RG transformation may be closely approximated by divisive normalization, which has been used to model the nonlinear response properties of visual neurons.
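The core of RG is a radius remapping: after whitening, an elliptically symmetric sample is spherically symmetric, and only its radial distribution differs from a Gaussian's. A short Python sketch, assuming an empirical-CDF estimate of the radial law (the paper's estimation details differ; the function name is hypothetical):

```python
import numpy as np
from scipy.stats import chi

def radial_gaussianize(X):
    """Sketch of radial gaussianization for an elliptically symmetric source.

    X: (n_samples, d) array. Each whitened sample's radius is remapped
    through the empirical radial CDF composed with the inverse CDF of a
    chi distribution with d degrees of freedom -- the radial law of a
    d-dimensional standard Gaussian.
    """
    X = X - X.mean(axis=0)
    # whiten so elliptical contours become spheres
    W = np.linalg.cholesky(np.linalg.inv(np.cov(X, rowvar=False)))
    Z = X @ W
    r = np.linalg.norm(Z, axis=1)
    # empirical CDF via midpoint ranks (avoids exact 0 and 1)
    ranks = (np.argsort(np.argsort(r)) + 0.5) / len(r)
    r_new = chi.ppf(ranks, df=X.shape[1])
    return Z * (r_new / r)[:, None]   # rescale each sample's radius
```

After the transform the radii follow the chi law of a standard Gaussian, so the components are (approximately) independent Gaussians; no linear map alone can achieve this for a non-Gaussian elliptical source.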
Nonlinear image representation for efficient perceptual coding
 IEEE Trans Image Processing
, 2006
Abstract

Cited by 13 (9 self)
Image compression systems commonly operate by transforming the input signal into a new representation whose elements are independently quantized. The success of such a system depends on two properties of the representation. First, the coding rate is minimized only if the elements of the representation are statistically independent. Second, the perceived coding distortion is minimized only if the errors in a reconstructed image arising from quantization of the different elements of the representation are perceptually independent. We argue that linear transforms cannot achieve either of these goals, and propose instead an adaptive nonlinear image representation in which each coefficient of a linear transform is divided by a weighted sum of coefficient amplitudes in a generalized neighborhood. We then show that the divisive operation greatly reduces both the statistical and the perceptual redundancy amongst representation elements. We develop an efficient method of inverting this transformation, and we demonstrate through simulations that the dual reduction in dependency can greatly improve the visual quality of compressed images.
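The divisive operation itself is simple: each coefficient is divided by a constant plus a weighted sum of neighboring coefficient amplitudes. A 1-D Python sketch follows; the 3-tap weights, the constant sigma, and the purely spatial neighborhood are illustrative assumptions, whereas the paper's neighborhoods span space, orientation, and scale with optimized weights.

```python
import numpy as np

def divisive_normalize(coeffs, weights=None, sigma=0.1):
    """Sketch of divisive normalization of transform coefficients.

    Each coefficient is divided by sigma plus a weighted sum of the
    amplitudes of coefficients in its (here: 1-D spatial) neighborhood.
    Weights and sigma are hypothetical placeholders.
    """
    c = np.asarray(coeffs, dtype=float)
    amp = np.abs(c)
    if weights is None:
        weights = np.array([0.5, 1.0, 0.5])   # hypothetical 3-tap neighborhood
    denom = sigma + np.convolve(amp, weights, mode="same")
    return c / denom
```

Because the divisor depends on the (known) amplitudes of the normalized output only implicitly, inversion is not a simple pointwise division; the paper develops an efficient iterative inversion, which a fixed-point scheme can approximate.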
Regularization operators for natural images based on nonlinear perception models
 IEEE Transactions on Image Processing (to appear)
, 2005
Abstract

Cited by 7 (4 self)
Image restoration requires some a priori knowledge of the solution. Some of the conventional regularization techniques are based on the estimation of the power spectrum density. Simple statistical models for spectral estimation just take into account second-order relations between the pixels of the image. However, natural images exhibit additional features, such as particular relationships between local Fourier or wavelet transform coefficients. Biological visual systems have evolved to capture these relations. We propose the use of this biological behavior to build regularization operators as an alternative to simple statistical models. The results suggest that if the penalty operator takes these additional features in natural images into account, it will be more robust and the choice of the regularization parameter is less critical. Index Terms—Early vision models, image restoration, natural image statistics, regularization.
Statistically and perceptually motivated nonlinear image representation
 in Proc. SPIE, Conf. on Human Vision and Electr. Imag. XII
Abstract

Cited by 4 (2 self)
We describe an invertible nonlinear image transformation that is well-matched to the statistical properties of photographic images, as well as the perceptual sensitivity of the human visual system. Images are first decomposed using a multiscale oriented linear transformation. In this domain, we develop a Markov random field model based on the dependencies within local clusters of transform coefficients associated with basis functions at nearby positions, orientations and scales. In this model, division of each coefficient by a particular linear combination of the amplitudes of others in the cluster produces a new nonlinear representation with marginally Gaussian statistics. We develop a reliable and efficient iterative procedure for inverting the divisive transformation. Finally, we probe the statistical and perceptual advantages of this image representation, examining robustness to added noise, rate-distortion behavior, and artifact-free local contrast enhancement.
Perceptually and Statistically Decorrelated Features for Image Representation: Application to Transform Coding
Abstract

Cited by 1 (1 self)
Transform coding consists of a scalar quantization of the features of an image representation. These features should be independent enough to justify the scalar approach. The coefficients of the commonly used DCT representation still show some dependence that may reduce its efficiency. In this work, a perceptually inspired nonlinear transform is used to map the DCT into a new representation that largely reduces the statistical and perceptual relations between the coefficients, thus improving the compression performance. 1. Introduction Independence between features is a desirable property of a signal representation in many image processing applications (such as indexing, fusion or transform coding) because it allows simple scalar data processing. In particular, in the transform quantization approach to image coding, the original image is represented in a feature space in order to simplify the quantizer design. Assuming that the transform removes the dependencies between coefficie...
PERCEPTUAL REGULARIZATION FUNCTIONALS FOR NATURAL IMAGE RESTORATION
Abstract

Cited by 1 (1 self)
Regularization constraints are necessary in inverse problems such as image restoration, optical flow computation or shape from shading to avoid the singularities in the solution. Conventional regularization techniques are based on some a priori knowledge of the solution: usually, the solution is assumed to be smooth according to simple statistical image or motion models. Using the fact that human visual perception is adapted to the statistics of natural images and sequences, the class of regularization functionals proposed in this work are not based on an image model but on a model of the human visual system. In particular, the current nonlinear model of early human visual processing is used to obtain locally adaptive regularization functionals for image restoration without any a priori assumption on the image or the noise. The results show that these functionals constitute a valid alternative to those based on the local autocorrelation of the image.
unknown title
, 2002
Abstract
Linear transform for simultaneous diagonalization of covariance and perceptual metric matrix in image coding