Results 1–8 of 8
Color Constancy with Spatio-Spectral Statistics
Abstract

Cited by 12 (1 self)
© 2011 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Translation-invariant shrinkage/thresholding of group sparse signals
Signal Processing, 94:476–489, 2014
Abstract

Cited by 6 (3 self)
This paper addresses signal denoising when large-amplitude coefficients form clusters (groups). The L1-norm and other separable sparsity models do not capture the tendency of coefficients to cluster (group sparsity). This work develops an algorithm, called ‘overlapping group shrinkage’ (OGS), based on the minimization of a convex cost function involving a group-sparsity promoting penalty function. The groups are fully overlapping, so the denoising method is translation-invariant and blocking artifacts are avoided. Based on the principle of majorization-minimization (MM), we derive a simple iterative minimization algorithm that reduces the cost function monotonically. A procedure for setting the regularization parameter, based on attenuating the noise to a specified level, is also described. The proposed approach is illustrated on speech enhancement, wherein the OGS approach is applied in the short-time Fourier transform (STFT) domain. The OGS algorithm produces denoised speech that is relatively free of musical noise.
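The MM iteration described in this abstract has a particularly compact form in 1-D. Below is a small NumPy sketch of the overlapping-group-shrinkage update under assumed notation (signal y, group size K, regularization lam); the variable names and the small eps safeguard are our illustrative choices, not the paper's.

```python
import numpy as np

def ogs(y, lam, K, n_iter=25, eps=1e-10):
    """Overlapping group shrinkage (1-D sketch): minimize
    0.5*||y - x||^2 + lam * sum_n sqrt(sum_{k=0..K-1} x[n+k]^2)
    via the majorization-minimization update x <- y / (1 + lam*r)."""
    x = y.astype(float).copy()
    h = np.ones(K)                               # indicator of one group
    for _ in range(n_iter):
        g = np.convolve(x**2, h)                 # energy of each overlapping group (length N+K-1)
        r = np.convolve(1.0 / np.sqrt(g + eps), h, mode='valid')  # length N
        x = y / (1.0 + lam * r)                  # each update reduces the cost
    return x
```

With lam = 0 the update returns y unchanged; larger lam shrinks whole groups toward zero together, which is what suppresses the isolated surviving coefficients responsible for musical noise.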
Group-Sparse Signal Denoising: Non-Convex Regularization, Convex Optimization
2014
Abstract

Cited by 5 (3 self)
Abstract—Convex optimization with sparsity-promoting convex regularization is a standard approach for estimating sparse signals in noise. In order to promote sparsity more strongly than convex regularization, it is also standard practice to employ non-convex optimization. In this paper, we take a third approach. We utilize a non-convex regularization term chosen such that the total cost function (consisting of data consistency and regularization terms) is convex. Therefore, sparsity is more strongly promoted than in the standard convex formulation, but without sacrificing the attractive aspects of convex optimization (unique minimum, robust algorithms, etc.). We use this idea to improve the recently developed ‘overlapping group shrinkage’ (OGS) algorithm for the denoising of group-sparse signals. The algorithm is applied to the problem of speech enhancement with favorable results in terms of both SNR and perceptual quality. Index Terms—group-sparse model; convex optimization; non-convex optimization; sparse optimization; translation-invariant denoising; denoising; speech enhancement
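A scalar illustration of this "non-convex penalty, convex total cost" idea (our example, not the paper's group formulation) is the minimax-concave penalty: it is non-convex on its own, yet 0.5*(x - y)**2 + mcp(x) remains convex whenever gamma >= 1, and its exact minimizer is the firm-threshold function.

```python
import numpy as np

def mcp(x, lam, gamma):
    """Minimax-concave penalty: non-convex by itself, but chosen so the
    total cost 0.5*(x - y)**2 + mcp(x, lam, gamma) stays convex for gamma >= 1."""
    ax = np.abs(x)
    return np.where(ax <= gamma * lam,
                    lam * ax - x**2 / (2.0 * gamma),
                    0.5 * gamma * lam**2)

def firm_threshold(y, lam, gamma):
    """Exact minimizer of 0.5*(x - y)**2 + mcp(x, lam, gamma) for gamma > 1.
    Unlike soft thresholding, large values pass through without bias."""
    ay = np.abs(y)
    return np.where(ay <= lam,
                    0.0,
                    np.where(ay <= gamma * lam,
                             gamma * (ay - lam) / (gamma - 1.0) * np.sign(y),
                             y))
```

The unique minimum promised by convexity is what keeps the estimator well behaved even though the penalty itself is non-convex.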
DENOISING SPARSE NOISE VIA ONLINE DICTIONARY LEARNING
Abstract

Cited by 3 (2 self)
The idea of learning overcomplete dictionaries based on the paradigm of compressive sensing has found numerous applications, among which image denoising is considered one of the most successful. But many state-of-the-art denoising techniques inherently assume that the signal noise is Gaussian. We instead propose to learn overcomplete dictionaries where the signal is allowed to have both Gaussian and (sparse) Laplacian noise. Dictionary learning in this setting leads to a difficult non-convex optimization problem, which is further exacerbated by large input datasets. We tackle these difficulties by developing an efficient online algorithm that scales to the data size. To assess the efficacy of our model, we apply it to dictionary learning for data that naturally satisfy our noise model, namely, Scale-Invariant Feature Transform (SIFT) descriptors. For these data, we measure the performance of the learned dictionary on the task of nearest-neighbor retrieval: compared to methods that do not explicitly model sparse noise, our method exhibits superior performance. Index Terms—denoising, sparsity, dictionary learning
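A batch (not online) sketch of the underlying noise model may help: with a fixed dictionary D, the code a and the sparse Laplacian-noise term s can be estimated by alternating a closed-form soft-threshold update for s with an ISTA step on a. The penalty weights, names, and iteration counts below are our illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def soft(x, t):
    """Soft threshold (proximal operator of t*||.||_1)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code_with_outliers(y, D, lam_a, lam_s, n_iter=300):
    """Minimize 0.5*||y - D a - s||^2 + lam_a*||a||_1 + lam_s*||s||_1,
    modeling y as D a plus sparse (Laplacian) noise s plus Gaussian noise."""
    m, k = D.shape
    a = np.zeros(k)
    s = np.zeros(m)
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient in a
    for _ in range(n_iter):
        s = soft(y - D @ a, lam_s)             # exact update for the sparse noise term
        a = soft(a + D.T @ (y - s - D @ a) / L, lam_a / L)  # ISTA step on the code
    return a, s
```

The dictionary-update and online aspects are omitted here; the point is only that large outliers are absorbed by s while the small Gaussian part stays in the residual.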
Overlapping Group Shrinkage/Thresholding and Denoising
Polytechnic Institute of New York University, 2012
Abstract

Cited by 2 (0 self)
Abstract—This paper addresses signal denoising when large-amplitude coefficients form clusters (groups). The L1-norm and other separable sparsity models do not capture the tendency of coefficients to cluster (group sparsity). This work describes an approach, ‘overlapping group shrinkage’ (OGS), based on the minimization of a convex cost function incorporating a mixed norm. The groups are fully overlapping so that the denoising method is shift-invariant and blocking artifacts are avoided. A simple minimization algorithm, based on successive substitution, is derived and proven to reduce the cost function monotonically. A procedure for setting the regularization parameter, based on attenuating the noise to a specified level, is also described. The proposed approach is illustrated on speech enhancement, wherein the OGS approach is applied in the short-time Fourier transform (STFT) domain. The denoised speech produced by OGS is free of musical noise. Index Terms—group sparsity, structured sparsity, denoising, speech enhancement, musical noise, three-sigma rule.
Color Constancy with Spatio-Spectral Statistics (accepted for publication in IEEE Transactions on Pattern Analysis and Machine Intelligence; content may change prior to final publication)
Abstract
Abstract—We introduce an efficient maximum-likelihood approach for one part of the color constancy problem: removing from an image the color cast caused by the spectral distribution of the dominant scene illuminant. We do this by developing a statistical model for the spatial distribution of colors in white-balanced images (i.e., those that have no color cast), and then using this model to infer the illumination parameters that are most likely under our model. The key observation is that applying spatial band-pass filters to color images unveils color distributions that are unimodal, symmetric, and well-represented by a simple parametric form. Once these distributions are fit to training data, they enable efficient maximum-likelihood estimation of the dominant illuminant in a new image, and they can be combined with statistical prior information about the illuminant in a very natural manner. Experimental evaluation on standard datasets suggests that the approach performs well.
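A toy sketch of the pipeline (band-pass filter, fit a simple symmetric distribution, maximum-likelihood gains): under a zero-mean Gaussian model for the filtered coefficients of white-balanced images, the ML estimate of a per-channel illuminant gain is the ratio of the observed filtered scale to the trained one. The first-difference filter, the Gaussian assumption, and the `sigma_train` parameter are our simplifications; the paper fits richer unimodal parametric forms and incorporates an illuminant prior.

```python
import numpy as np

def estimate_illuminant(img, sigma_train=(1.0, 1.0, 1.0)):
    """Toy illuminant estimate for an H x W x 3 image.
    Band-pass each channel (horizontal first difference), then pick the
    per-channel gain that makes the filtered scale match the scale
    sigma_train[c] learned from white-balanced images; maximum likelihood
    under a zero-mean Gaussian gives gain = observed_std / trained_std."""
    d = np.diff(img.astype(float), axis=1)       # simple band-pass filter
    scales = d.reshape(-1, 3).std(axis=0)        # per-channel filtered scale
    gains = scales / np.asarray(sigma_train, dtype=float)
    return gains / np.linalg.norm(gains)         # illuminant known up to scale
```

Dividing each channel by its estimated gain (up to a global scale) removes the cast under this model.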
ON THE MAP ESTIMATION IN THE CONTEXT OF ELLIPTICAL DISTRIBUTIONS
Abstract
The purpose of this paper is to study the estimation of a multivariate elliptically symmetric random variable corrupted by multivariate elliptically symmetric noise. In this study, the maximum a posteriori (MAP) approach is presented, extending recent works by Alecu et al. [1] and Selesnick [2, 3]: (i) the estimation is performed in a multivariate context, and (ii) the corrupting noise is not limited to be Gaussian. This paper also extends our previous work that dealt with the minimum mean-square error (MMSE) approach [4]. The MMSE is briefly recalled and the MAP is derived. Then the practical use of the MAP in a general setting is discussed and compared to that of the MMSE and of the Wiener estimator. Several examples illustrate the behavior of these estimators and exhibit their performance.
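In the Gaussian special case of the elliptical family, the MAP, MMSE, and Wiener estimators all coincide, which allows a compact sanity check (a sketch under that Gaussian assumption only; the paper's interest is in the non-Gaussian members, where the three differ):

```python
import numpy as np

def map_gaussian(y, Cx, Cn):
    """MAP estimate of x from y = x + n, with x ~ N(0, Cx) and n ~ N(0, Cn).
    For Gaussians, MAP = MMSE = Wiener: x_hat = Cx (Cx + Cn)^{-1} y."""
    return Cx @ np.linalg.solve(Cx + Cn, y)
```

The estimate satisfies the stationarity condition of the negative log-posterior, Cn^{-1}(x_hat - y) + Cx^{-1} x_hat = 0, which is the usual way to verify it.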
Robust Dictionary Learning by Error Source Decomposition
Abstract
Sparsity models have recently shown great promise in many vision tasks. Using a learned dictionary in sparsity models can in general outperform predefined bases on clean data. In practice, both training and testing data may be corrupted and contain noise and outliers. Although recent studies have attempted to cope with corrupted data and achieved encouraging results in the testing phase, how to handle corruption in the training phase remains a very difficult problem. In contrast to most existing methods that learn the dictionary from clean data, this paper is targeted at handling corruption and outliers in training data for dictionary learning. We propose a general method to decompose the reconstruction residual into two components: a non-sparse component for small universal noise and a sparse component for large outliers. In addition, further analysis reveals the connection between our approach and the “partial” dictionary-learning approach, which updates only part of the prototypes (the informative codewords) with the remaining (noisy) codewords fixed. Experiments on synthetic data as well as real applications show the satisfactory performance of this new robust dictionary-learning approach.