Results 1 - 10 of 184
Compressive sensing - IEEE Signal Processing Magazine, 2007
"... The Shannon/Nyquist sampling theorem tells us that in order to not lose information when uniformly sampling a signal we must sample at least two times faster than its bandwidth. In many applications, including digital image and video cameras, the Nyquist rate can be so high that we end up with too m ..."
Abstract
-
Cited by 696 (62 self)
- Add to MetaCart
(Show Context)
The Shannon/Nyquist sampling theorem tells us that in order to not lose information when uniformly sampling a signal we must sample at least two times faster than its bandwidth. In many applications, including digital image and video cameras, the Nyquist rate can be so high that we end up with too many samples and must compress in order to store or transmit them. In other applications, including imaging systems (medical scanners, radars) and high-speed analog-to-digital converters, increasing the sampling rate or density beyond the current state of the art is very expensive. In this lecture, we will learn about a new technique that tackles these issues using compressive sensing [1, 2]. We will replace the conventional sampling and reconstruction operations with a more general linear measurement scheme coupled with an optimization in order to acquire certain kinds of signals at a rate significantly below Nyquist.
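For a concrete feel for the measurement-plus-optimization scheme the abstract describes, here is a minimal sketch: a sparse signal is acquired through a random Gaussian measurement matrix and recovered with iterative soft thresholding as a stand-in for the l1-style optimization step. The dimensions, the solver, and the regularization weight are illustrative choices, not prescriptions from the lecture.

```python
# Minimal compressive sensing sketch: random linear measurements + ISTA recovery.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                              # signal length, measurements, sparsity

x = np.zeros(n)                                   # k-sparse signal to be acquired
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # non-adaptive linear measurement operator
y = Phi @ x                                       # m << n measurements

# Iterative soft thresholding for min ||Phi x - y||^2 / 2 + lam * ||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(Phi, 2) ** 2          # 1 / Lipschitz constant of the gradient
x_hat = np.zeros(n)
for _ in range(2000):
    r = x_hat - step * (Phi.T @ (Phi @ x_hat - y))              # gradient step
    x_hat = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # soft threshold

print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```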
Image denoising using a scale mixture of Gaussians in the wavelet domain - IEEE Trans. Image Processing, 2003
"... We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vecto ..."
Abstract
-
Cited by 513 (17 self)
- Add to MetaCart
(Show Context)
We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vector and a hidden positive scalar multiplier. The latter modulates the local variance of the coefficients in the neighborhood, and is thus able to account for the empirically observed correlation between the coefficient amplitudes. Under this model, the Bayesian least squares estimate of each coefficient reduces to a weighted average of the local linear estimates over all possible values of the hidden multiplier variable. We demonstrate through simulations with images contaminated by additive white Gaussian noise that the performance of this method substantially surpasses that of previously published methods, both visually and in terms of mean squared error.
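As a rough illustration of the estimator described above, the sketch below works with a single coefficient rather than a neighborhood vector: the hidden multiplier is discretized, a posterior over it is formed from the noisy observation, and the estimate is the posterior-weighted average of the per-multiplier Wiener estimates. The prior on the multiplier and the variance values are assumptions made for the example, not the authors' exact choices.

```python
# Scalar sketch of Bayesian least squares under a Gaussian scale mixture, x = sqrt(z) * u.
import numpy as np

sigma_u, sigma_n = 1.0, 0.5           # signal and noise std (assumed known)
z_grid = np.logspace(-2, 2, 64)       # discretized hidden positive multiplier
p_z = 1.0 / z_grid                    # Jeffreys-like prior on z (an assumption)
p_z /= p_z.sum()

def bls_gsm_estimate(y):
    """E[x | y]: weighted average of Wiener estimates over all multiplier values."""
    var_y = z_grid * sigma_u**2 + sigma_n**2             # Var(y | z)
    lik = np.exp(-0.5 * y**2 / var_y) / np.sqrt(var_y)   # p(y | z), up to a constant
    post = lik * p_z
    post /= post.sum()                                   # p(z | y)
    wiener = (z_grid * sigma_u**2) / var_y * y           # E[x | y, z], a local linear estimate
    return np.sum(post * wiener)

print("denoised coefficient:", bls_gsm_estimate(1.3))
```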
Image information and visual quality - IEEE Trans. Image Processing, 2006
"... Abstract—Measurement of visual quality is of fundamental importance to numerous image and video processing applica-tions. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Image QA a ..."
Abstract
-
Cited by 283 (41 self)
- Add to MetaCart
Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Image QA algorithms generally interpret image quality as fidelity or similarity with a “reference” or “perfect” image in some perceptual space. Such “full-reference” QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by signal fidelity measures. In this paper, we approach the image QA problem as an information fidelity problem. Specifically, we propose to quantify the loss of image information to the distortion process and explore the relationship between image information and visual quality. QA systems are invariably involved with judging the visual quality of “natural” images and videos that are meant for “human consumption.” Researchers have developed sophisticated models to capture the statistics of such natural signals. Using these models, we previously presented an information fidelity criterion for image QA that related image quality with the amount of information shared between a reference and a distorted image. In this paper, we propose an image information measure that quantifies the information that is present in the reference image and how much of this reference information can be extracted from the distorted image. Combining these two quantities, we propose a visual information fidelity measure for image QA. We validate the performance of our algorithm with an extensive subjective study involving 779 images and show that our method outperforms recent state-of-the-art image QA algorithms by a sizeable margin in our simulations. The code and the data from the subjective study are available at the LIVE website.
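The sketch below conveys the ratio-of-informations idea in a simplified scalar form: per block, a gain-plus-additive-noise channel is fitted between reference and distorted pixels, and the information that survives that channel (plus an assumed HVS noise) is compared with the information available from the reference alone. The block size, the HVS noise variance, and the pixel-domain setting are illustrative simplifications of the paper's vector-GSM, wavelet-domain formulation.

```python
# Simplified scalar sketch of a visual-information-fidelity style measure.
import numpy as np

def vif_scalar(ref, dist, block=8, sigma_hvs2=2.0, eps=1e-10):
    num = den = 0.0
    for i in range(0, ref.shape[0] - block + 1, block):
        for j in range(0, ref.shape[1] - block + 1, block):
            r = ref[i:i+block, j:j+block].ravel()
            d = dist[i:i+block, j:j+block].ravel()
            r = r - r.mean(); d = d - d.mean()
            var_r = r.var() + eps
            g = np.dot(r, d) / (np.dot(r, r) + eps)          # gain of the distortion channel
            var_v = max(d.var() - g * np.dot(r, d) / r.size, eps)  # residual additive noise
            num += np.log2(1.0 + g**2 * var_r / (var_v + sigma_hvs2))   # info from distorted
            den += np.log2(1.0 + var_r / sigma_hvs2)                    # info from reference
    return num / den

rng = np.random.default_rng(1)
ref = rng.standard_normal((64, 64))
dist = ref + 0.5 * rng.standard_normal((64, 64))
print("VIF-like score:", vif_scalar(ref, dist))   # approaches 1.0 when no information is lost
```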
Low-Complexity Image Denoising Based on Statistical Modeling of Wavelet Coefficients, 1999
"... We introduce a simple spatially adaptive statistical model for wavelet image coe#cients and apply it to image denoising. Our model is inspired by a recent wavelet image compression algorithm, the Estimation Quantization coder. We model wavelet image coefficients as zero-mean Gaussian random varia ..."
Abstract
-
Cited by 189 (13 self)
- Add to MetaCart
(Show Context)
We introduce a simple spatially adaptive statistical model for wavelet image coefficients and apply it to image denoising. Our model is inspired by a recent wavelet image compression algorithm, the Estimation Quantization coder. We model wavelet image coefficients as zero-mean Gaussian random variables with high local correlation. We assume a marginal prior distribution on wavelet coefficient variances and estimate them using an approximate Maximum A Posteriori Probability rule. Then we apply an approximate Minimum Mean Squared Error estimation procedure to restore the noisy wavelet image coefficients. Despite the simplicity of our method, both in its concept and implementation, our denoising results are among the best reported in the literature.
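A minimal sketch of the two-step recipe follows: estimate a local variance for every noisy coefficient from a small window, then shrink with the corresponding Wiener-style (approximate MMSE) rule. The plain moment-based variance estimate used here is a simplification of the approximate MAP rule in the paper, and the window size and noise level are assumed.

```python
# Locally adaptive Wiener-style shrinkage of a noisy wavelet subband.
import numpy as np
from scipy.ndimage import uniform_filter

def denoise_subband(y, sigma_n, win=5):
    """y: noisy subband, sigma_n: noise std (assumed known)."""
    local_energy = uniform_filter(y * y, size=win)          # local average of y^2
    sig_var = np.maximum(local_energy - sigma_n**2, 0.0)    # estimated signal variance
    return sig_var / (sig_var + sigma_n**2) * y              # Wiener-like shrinkage

rng = np.random.default_rng(2)
clean = rng.standard_normal((32, 32)) * np.linspace(0.1, 2.0, 32)  # variance varies by column
noisy = clean + 0.5 * rng.standard_normal(clean.shape)
print("MSE before:", np.mean((noisy - clean) ** 2))
print("MSE after :", np.mean((denoise_subband(noisy, 0.5) - clean) ** 2))
```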
Bayesian methods for hidden Markov models: Recursive computing in the 21st century - Journal of the American Statistical Association, 2002
"... ..."
An information fidelity criterion for image quality assessment using natural scene statistics - IEEE Trans. Image Processing, 2005
"... Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Traditionally, imag ..."
Abstract
-
Cited by 117 (22 self)
- Add to MetaCart
Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Traditionally, image QA algorithms interpret image quality as fidelity or similarity with a “reference” or “perfect” image in some perceptual space. Such “full-reference” QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by arbitrary signal fidelity criteria. In this paper, we approach the problem of image QA by proposing a novel information fidelity criterion that is based on natural scene statistics. QA systems are invariably involved with judging the visual quality of images and videos that are meant for “human consumption.” Researchers have developed sophisticated models to capture the statistics of natural signals, that is, pictures and videos of the visual environment. Using these statistical models in an information-theoretic setting, we derive a novel QA algorithm that provides clear advantages over the traditional approaches. In particular, it is parameterless and outperforms current methods in our testing. We validate the performance of our algorithm with an extensive subjective study involving 779 images. We also show that, although our approach distinctly departs from traditional HVS-based methods, it is functionally similar to them under certain conditions, yet it outperforms them due to improved modeling. The code and the data from the subjective study are available at [1].
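For orientation, in a scalar simplification with the reference coefficients modeled as a Gaussian scale mixture C = sqrt(z) U and the distortion treated as a gain-plus-additive-noise channel D = g C + V, such an information fidelity criterion accumulates mutual information over coefficients. The expression below is a sketch under that assumed, simplified notation rather than the paper's exact vector formulation.

```latex
% Scalar sketch (assumed notation): reference C = \sqrt{z}\,U, distorted D = g\,C + V.
\mathrm{IFC} \;\approx\; \sum_{k} \tfrac{1}{2}\,
  \log_{2}\!\left( 1 + \frac{g_{k}^{2}\, z_{k}\, \sigma_{U}^{2}}{\sigma_{V_{k}}^{2}} \right)
```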
Multiscale Image Segmentation using Wavelet-Domain Hidden Markov Models - IEEE Trans. Image Processing, 1999
"... We introduce a new image texture segmentation algorithm, HMTseg, based on wavelets and the hidden Markov tree (HMT) model. The HMT is a tree-structured probabilistic graph that captures the statistical properties of the coefficients of the wavelet transform. Since the HMT is particularly well suited ..."
Abstract
-
Cited by 108 (6 self)
- Add to MetaCart
We introduce a new image texture segmentation algorithm, HMTseg, based on wavelets and the hidden Markov tree (HMT) model. The HMT is a tree-structured probabilistic graph that captures the statistical properties of the coefficients of the wavelet transform. Since the HMT is particularly well suited to images containing singularities (edges and ridges), it provides a good classifier for distinguishing between textures. Utilizing the inherent tree structure of the wavelet HMT and its fast training and likelihood computation algorithms, we perform multiscale texture classification at a range of different scales. We then fuse these multiscale classifications using a Bayesian probabilistic graph to obtain reliable final segmentations. Since HMTseg works on the wavelet transform of the image, it can directly segment wavelet-compressed images without the need for decompression into the space domain. We demonstrate the performance of HMTseg with synthetic, aerial photo, and document image segmentations.
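To make the fusion step concrete, here is a simplified sketch of the coarse-to-fine idea: each dyadic block carries per-class likelihoods (assumed to come from an already-trained wavelet-domain HMT), and a child block's label is chosen from its own likelihood weighted by a transition prior conditioned on its parent's label. The two-class setup, the transition matrix, and the made-up likelihoods are placeholders, not the paper's context model.

```python
# Simplified coarse-to-fine fusion of multiscale classifications.
import numpy as np

n_classes = 2
# transition[p, c]: probability a child block has class c given its parent has class p
transition = np.array([[0.9, 0.1],
                       [0.1, 0.9]])

def fuse(parent_labels, child_likelihoods):
    """parent_labels: (H, W) ints; child_likelihoods: (2H, 2W, n_classes)."""
    H, W = parent_labels.shape
    child_labels = np.zeros((2 * H, 2 * W), dtype=int)
    for i in range(2 * H):
        for j in range(2 * W):
            prior = transition[parent_labels[i // 2, j // 2]]      # context from parent block
            posterior = prior * child_likelihoods[i, j]
            child_labels[i, j] = np.argmax(posterior)
    return child_labels

# Toy example: a coarse 2x2 labelling refined to 4x4 with made-up likelihoods.
coarse = np.array([[0, 1],
                   [0, 1]])
rng = np.random.default_rng(3)
lik = rng.dirichlet(np.ones(n_classes), size=(4, 4))
print(fuse(coarse, lik))
```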
Estimating the probability of the presence of a signal of interest in multiresolution single- and multiband image denoising - IEEE Trans. Image Processing, 2005
"... We develop three novel wavelet domain denoising methods for subband-adaptive, spatially-adaptive and multivalued image denoising. The core of our approach is the estimation of the probability that a given coefficient contains a significant noise-free component, which we call “signal of interest”. In ..."
Abstract
-
Cited by 91 (13 self)
- Add to MetaCart
We develop three novel wavelet domain denoising methods for subband-adaptive, spatially-adaptive and multivalued image denoising. The core of our approach is the estimation of the probability that a given coefficient contains a significant noise-free component, which we call the “signal of interest”. In this respect we analyze cases where the probability of signal presence is (i) fixed per subband, (ii) conditioned on a local spatial context and (iii) conditioned on information from multiple image bands. All the probabilities are estimated assuming a generalized Laplacian prior for noise-free subband data and additive white Gaussian noise. The results demonstrate that the new subband-adaptive shrinkage function outperforms Bayesian thresholding approaches in terms of mean squared error. The spatially adaptive version of the proposed method yields better results than existing spatially adaptive methods of similar or higher complexity. The performance on color and on multispectral images is superior to recent multiband wavelet thresholding methods.
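The shrinkage rule at the heart of this approach can be sketched as "multiply the coefficient by the probability that it contains a signal of interest." The toy below computes that probability by numerical integration under a plain Laplacian prior and Gaussian noise; the prior scale, the significance threshold, and the absence of a spatial or multiband context are simplifying assumptions for the subband-adaptive case.

```python
# Shrink each noisy coefficient y by P(signal of interest | y).
import numpy as np

sigma_n = 0.5          # noise std
lam = 1.0              # scale of the Laplacian prior (an assumption)
T = sigma_n            # "signal of interest": noise-free coefficient exceeds T in magnitude

x = np.linspace(-20, 20, 4001)                     # integration grid for the noise-free value
prior = 0.5 / lam * np.exp(-np.abs(x) / lam)       # Laplacian p(x)

def shrink(y):
    lik = np.exp(-0.5 * (y - x) ** 2 / sigma_n**2)  # Gaussian p(y | x), up to a constant
    joint = lik * prior
    p_h1 = np.sum(joint[np.abs(x) > T]) / np.sum(joint)   # P(|x| > T | y)
    return p_h1 * y

for y in (0.2, 0.8, 3.0):
    print(y, "->", round(shrink(y), 3))
```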
Directional Multiscale Modeling of Images using the Contourlet Transform - IEEE Trans. Image Processing, 2004
"... The contourlet transform is a new extension to the wavelet transform in two dimensions using nonseparable and directional filter banks. The contourlet expansion is composed of basis images oriented at varying directions in multiple scales, with flexible aspect ratios. With this rich set of basis ima ..."
Abstract
-
Cited by 90 (5 self)
- Add to MetaCart
The contourlet transform is a new extension of the wavelet transform in two dimensions using nonseparable and directional filter banks. The contourlet expansion is composed of basis images oriented at varying directions in multiple scales, with flexible aspect ratios. With this rich set of basis images, the contourlet transform can effectively capture the smooth contours that are the dominant features in natural images with only a small number of coefficients. We begin with a detailed study of the statistics of the contourlet coefficients of natural images, using histogram estimates of the marginal and joint distributions, and mutual information measurements to characterize the dependencies between coefficients. The study reveals the non-Gaussian marginal statistics and strong intra-subband, cross-scale, and cross-orientation dependencies of contourlet coefficients. It is also found that, conditioned on the magnitudes of their generalized neighborhood coefficients, contourlet coefficients can approximately be modeled as Gaussian variables. Based on these statistics, we model contourlet coefficients using a hidden Markov tree (HMT) model that can capture all of their inter-scale, inter-orientation, and intra-subband dependencies. We apply this model to image denoising and texture retrieval, where the results are very promising. In denoising, the contourlet HMT outperforms the wavelet HMT and other classical methods in terms of visual quality. In particular, it preserves edges and oriented features better than other existing methods. In texture retrieval, it shows improvements in performance over wavelet methods for various oriented textures.
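One measurement the study relies on, mutual information between coefficient pairs estimated from joint histograms, is easy to sketch; the correlated synthetic pair below merely stands in for actual contourlet coefficients and their cross-scale parents.

```python
# Histogram-based mutual information between two coefficient sequences.
import numpy as np

def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return np.sum(p_ab[nz] * np.log2(p_ab[nz] / (p_a @ p_b)[nz]))

rng = np.random.default_rng(4)
parent = rng.standard_normal(100_000)
child = 0.6 * parent + 0.8 * rng.standard_normal(100_000)   # stand-in coefficient pair
print("estimated MI (bits):", mutual_information(child, parent))
```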
No-reference quality assessment using natural scene statistics: JPEG2000 - IEEE Trans. Image Processing, 2005
"... Abstract—Measurement of image or video quality is crucial for many image-processing algorithms, such as acquisition, compres-sion, restoration, enhancement, and reproduction. Traditionally, image quality assessment (QA) algorithms interpret image quality as similarity with a “reference ” or “perfect ..."
Abstract
-
Cited by 68 (10 self)
- Add to MetaCart
(Show Context)
Measurement of image or video quality is crucial for many image-processing algorithms, such as acquisition, compression, restoration, enhancement, and reproduction. Traditionally, image quality assessment (QA) algorithms interpret image quality as similarity with a “reference” or “perfect” image. The obvious limitation of this approach is that the reference image or video may not be available to the QA algorithm. The field of blind, or no-reference, QA, in which image quality is predicted without the reference image or video, has been largely unexplored, with algorithms focusing mostly on measuring blocking artifacts. Emerging image and video compression technologies can avoid the dreaded blocking artifact by using various mechanisms, but they introduce other types of distortions, specifically blurring and ringing. In this paper, we propose to use natural scene statistics (NSS) to blindly measure the quality of images compressed by JPEG2000 (or any other wavelet-based) image coder. We claim that natural scenes contain nonlinear dependencies that are disturbed by the compression process, and that this disturbance can be quantified and related to human perceptions of quality. We train and test our algorithm with data from human subjects, and show that reasonably comprehensive NSS models can help us in making blind, but accurate, predictions of quality. Our algorithm performs close to the limit imposed on useful prediction by the variability between human subjects.
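Schematically, the pipeline is: extract NSS-motivated features from the wavelet subbands of the compressed image, then map them to a quality score with a regressor fitted to human opinion scores. In the sketch below the single kurtosis-style feature, the linear least-squares fit, and the use of PyWavelets are placeholders for the paper's joint-statistics features and its trained mapping.

```python
# Schematic no-reference quality pipeline: wavelet NSS features -> fitted quality score.
import numpy as np
import pywt

def nss_features(image, wavelet="db2", levels=3):
    feats = []
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    for detail in coeffs[1:]:                        # (cH, cV, cD) per level
        for band in detail:
            c = band.ravel()
            feats.append(np.mean(c**4) / (np.mean(c**2) ** 2 + 1e-12))  # kurtosis-style feature
    return np.asarray(feats)

def fit_quality_model(train_images, human_scores):
    X = np.array([nss_features(im) for im in train_images])
    X = np.hstack([X, np.ones((len(X), 1))])         # bias term
    w, *_ = np.linalg.lstsq(X, np.asarray(human_scores), rcond=None)
    return w

def predict_quality(image, w):
    return float(np.append(nss_features(image), 1.0) @ w)

# Toy usage with random stand-ins for compressed images and subjective scores.
rng = np.random.default_rng(5)
images = [rng.standard_normal((64, 64)) for _ in range(10)]
scores = rng.uniform(20, 80, size=10)
w = fit_quality_model(images, scores)
print("predicted quality:", predict_quality(images[0], w))
```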