Results 1–10 of 101
A review of image denoising algorithms, with a new one
 Simul
, 2005
Cited by 510 (6 self)
Abstract. The search for efficient image denoising methods is still a valid challenge at the crossing of functional analysis and statistics. In spite of the sophistication of the recently proposed methods, most algorithms have not yet attained a desirable level of applicability. All show an outstanding performance when the image model corresponds to the algorithm assumptions but fail in general and create artifacts or remove image fine structures. The main focus of this paper is, first, to define a general mathematical and experimental methodology to compare and classify classical image denoising algorithms and, second, to propose a nonlocal means (NL-means) algorithm addressing the preservation of structure in a digital image. The mathematical analysis is based on the analysis of the “method noise,” defined as the difference between a digital image and its denoised version. The NL-means algorithm is proven to be asymptotically optimal under a generic statistical image model. The denoising performance of all considered methods is compared in four ways. Mathematical: asymptotic order of magnitude of the method noise under regularity assumptions. Perceptual-mathematical: the algorithms' artifacts and their explanation as a violation of the image model. Quantitative experimental: by tables of L2 distances of the denoised version to the original image. The most powerful evaluation method seems, however, to be the visualization of the method noise on natural images. The more this method noise looks like real white noise, the better the method.
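The core idea of the abstract above, averaging pixels whose surrounding patches look alike, can be sketched in a few lines. This is a minimal, unoptimized illustration of the NL-means principle, not the authors' implementation; the patch size, search window, and filtering parameter `h` are arbitrary choices for the sketch:

```python
import numpy as np

def nl_means(image, patch=3, search=7, h=10.0):
    """Minimal NL-means sketch: each output pixel is a weighted average of
    pixels in a search window, weighted by how similar their surrounding
    patches are to the patch around the pixel being denoised."""
    pad = patch // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    H, W = image.shape
    out = np.zeros((H, W))
    half = search // 2
    for i in range(H):
        for j in range(W):
            p = padded[i:i + patch, j:j + patch]          # reference patch
            weights, values = [], []
            for di in range(-half, half + 1):
                for dj in range(-half, half + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < H and 0 <= jj < W:
                        q = padded[ii:ii + patch, jj:jj + patch]
                        d2 = np.mean((p - q) ** 2)        # patch distance
                        weights.append(np.exp(-d2 / h**2))
                        values.append(image[ii, jj])
            out[i, j] = np.dot(weights, values) / np.sum(weights)
    return out
```

Because the weights depend on patch similarity rather than spatial proximity alone, pixels across an edge receive near-zero weight, which is how the method averages out noise while preserving fine structure.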
Sparse representation for color image restoration
 IEEE Trans. on Image Processing
, 2007
Cited by 217 (31 self)
Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well-adapted dictionaries for images has been a major challenge. The K-SVD has recently been proposed for this task [1] and shown to perform very well for various grayscale image processing tasks. In this paper we address the problem of learning dictionaries for color images and extend the K-SVD-based grayscale image denoising algorithm that appears in [2]. This work puts forward ways of handling non-homogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper. EDICS Category: COL-COLR (Color processing)
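The sparse decomposition over a redundant dictionary that the abstract relies on is typically computed with a greedy pursuit. Below is a minimal Orthogonal Matching Pursuit sketch, a standard companion to K-SVD rather than code from the paper; it assumes the dictionary columns are unit-norm:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit sketch: greedily select up to k atoms
    (columns of D, assumed unit-norm) to approximate signal x sparsely,
    refitting the coefficients on the chosen support at each step."""
    residual = x.astype(float).copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        sub = D[:, support]
        c, *_ = np.linalg.lstsq(sub, x, rcond=None)  # least-squares refit
        residual = x - sub @ c
    coef[support] = c
    return coef
```

In a K-SVD-style denoising pipeline, a pursuit like this is run on every (vectorized) image patch, and the dictionary update step alternates with it.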
Optimal spatial adaptation for patch-based image denoising
 IEEE Trans. Image Process
, 2006
Cited by 114 (10 self)
Abstract—A novel adaptive and patch-based approach is proposed for image denoising and representation. The method is based on a pointwise selection of small image patches of fixed size in the variable neighborhood of each pixel. Our contribution is to associate with each pixel the weighted sum of data points within an adaptive neighborhood, in a manner that balances the accuracy of approximation and the stochastic error at each spatial position. This method is general and can be applied under the assumption that there exist repetitive patterns in a local neighborhood of a point. By introducing spatial adaptivity, we extend the work earlier described by Buades et al., which can be considered an extension of bilateral filtering to image patches. Finally, we propose a nearly parameter-free algorithm for image denoising. The method is applied to both artificially corrupted (white Gaussian noise) and real images, and its performance is very close to, and in some cases even surpasses, that of already published denoising methods.
Unsupervised, information-theoretic, adaptive image filtering for image restoration
 IEEE TRANS. PAMI
, 2006
Cited by 54 (3 self)
Image restoration is an important and widely studied problem in computer vision and image processing. Various image filtering strategies have been effective, but they invariably make strong assumptions about the properties of the signal and/or degradation. Hence, these methods lack the generality to be easily applied to new applications or diverse image collections. This paper describes a novel unsupervised, information-theoretic, adaptive filter (UINTA) that improves the predictability of pixel intensities from their neighborhoods by decreasing their joint entropy. In this way, UINTA automatically discovers the statistical properties of the signal and can thereby restore a wide spectrum of images. The paper describes the formulation to minimize the joint entropy measure and presents several important practical considerations in estimating neighborhood statistics. It presents a series of results on both real and synthetic data, along with comparisons with current state-of-the-art techniques, including novel applications to medical image processing.
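The joint entropy such a filter decreases must be estimated from samples of neighborhood intensities. A common nonparametric choice, shown here as a plausible minimal stand-in for the neighborhood statistics the abstract mentions rather than the paper's actual estimator, is a leave-one-out Parzen-window estimate:

```python
import numpy as np

def entropy_estimate(samples, sigma=1.0):
    """Leave-one-out Parzen (Gaussian kernel) estimate of differential
    entropy. `samples` is (n, d): n neighborhood vectors of dimension d.
    Entropy-minimizing filters push this quantity down over iterations."""
    n, d = samples.shape
    diff = samples[:, None, :] - samples[None, :, :]   # pairwise differences
    d2 = np.sum(diff ** 2, axis=-1)                    # squared distances
    K = np.exp(-d2 / (2 * sigma**2)) / ((2 * np.pi * sigma**2) ** (d / 2))
    np.fill_diagonal(K, 0.0)                           # leave self out
    p = K.sum(axis=1) / (n - 1)                        # density at each sample
    return -np.mean(np.log(p + 1e-300))                # entropy = -E[log p]
```

A tightly clustered sample set (predictable neighborhoods) yields a lower estimate than a dispersed one, which is the signal such a filter exploits.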
Visual Modelling of Complex Business Processes with Trees, Overlays and Distortion-Based Displays
 Proc. VL/HCC'07, IEEE CS
Joint source-channel coding error exponent for discrete communication systems with Markovian memory
 IEEE Trans. Info. Theory
, 2007
Cited by 33 (11 self)
Abstract—We investigate the computation of Csiszár’s bounds for the joint source–channel coding (JSCC) error exponent of a communication system consisting of a discrete memoryless source and a discrete memoryless channel. We provide equivalent expressions for these bounds and derive explicit formulas for the rates where the bounds are attained. These equivalent representations can be readily computed for arbitrary source–channel pairs via Arimoto’s algorithm. When the channel’s distribution satisfies a symmetry property, the bounds admit closed-form parametric expressions. We then use our results to provide a systematic comparison between the JSCC error exponent E_J and the tandem coding error exponent E_T, which applies if the source and channel are separately coded. It is shown that E_J ≤ 2E_T. We establish conditions for which E_J > E_T and for which E_J = 2E_T. Numerical examples indicate that E_J is close to 2E_T for many source–channel pairs. This gain translates into a power saving larger than 2 dB for a binary source transmitted over additive white Gaussian noise (AWGN) channels and Rayleigh-fading channels with finite output quantization. Finally, we study the computation of the lossy JSCC error exponent under the Hamming distortion measure. Index Terms—Discrete memoryless sources and channels, error exponent, Fenchel’s duality, Hamming distortion measure, joint source–channel coding, random-coding exponent, reliability function, sphere-packing exponent, symmetric channels, tandem source and channel coding.
Higher-order image statistics for unsupervised, information-theoretic, adaptive, image filtering
 Proc. IEEE Int. Conf. Computer Vision Pattern Recog., 2005
Cited by 30 (8 self)
The restoration of images is an important and widely studied problem in computer vision and image processing. Various image filtering strategies have been effective, but they invariably make strong assumptions about the properties of the signal and/or degradation. Therefore, these methods typically lack the generality to be easily applied to new applications or diverse image collections. This paper describes a novel unsupervised, information-theoretic, adaptive filter (UINTA) that improves the predictability of pixel intensities from their neighborhoods by decreasing the joint entropy between them. Thus UINTA automatically discovers the statistical properties of the signal and can thereby restore a wide spectrum of images across applications. This paper describes the formulation required to minimize the joint entropy measure, presents several important practical considerations in estimating image-region statistics, and then presents results on both real and synthetic data.
Example-based regularization deployed to super-resolution reconstruction of a single image
 The Computer Journal
, 2007
Cited by 30 (4 self)
In super-resolution (SR) reconstruction of images, regularization becomes crucial when an insufficient number of measured low-resolution images is supplied. Beyond making the problem algebraically well posed, a properly chosen regularization can direct the solution toward a better quality outcome. Even the extreme case—an SR reconstruction from a single measured image—can be made successful with a well-chosen regularization. Much of the progress made in the past two decades on inverse problems in image processing can be attributed to advances in forming or choosing the way to practice the regularization. A Bayesian point of view interprets this as a way of including the prior distribution of images, which sheds some light on the complications involved. This paper reviews an emerging, powerful family of regularization techniques that has drawn attention in recent years—the example-based approach. We describe how examples can be and have been used effectively for regularization of inverse problems, reviewing the main contributions along these lines in the literature and organizing this information into major trends and directions. A description of the state of the art in this field, along with supporting simulation results on the image scale-up problem, is given. The paper concludes with an outline of the outstanding challenges this field faces today.
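As a toy illustration of how a regularizer steers an otherwise ill-posed reconstruction, consider a quadratic penalty pulling the solution toward an example signal. This is a deliberately simplified stand-in for the richer example-based regularizers the paper surveys, not the paper's method; here `A` plays the role of the blur/decimation operator and `x_example` is a hypothetical example-derived prior:

```python
import numpy as np

def example_regularized_solve(A, y, x_example, lam):
    """Solve min_x ||A x - y||^2 + lam ||x - x_example||^2.
    Setting the gradient to zero gives the normal equations
        (A^T A + lam I) x = A^T y + lam x_example,
    so the example biases the solution wherever the data underdetermines it."""
    n = A.shape[1]
    lhs = A.T @ A + lam * np.eye(n)
    rhs = A.T @ y + lam * x_example
    return np.linalg.solve(lhs, rhs)
```

With lam → 0 this reduces to plain least squares; with large lam the reconstruction collapses onto the example, illustrating the bias/fidelity trade-off the regularization weight controls.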
A Tour of Modern Image Filtering: New insights and methods, both practical and theoretical
 IEEE Signal Processing Magazine
, 2013
Cited by 27 (2 self)
Recent developments in computational imaging and restoration have heralded the arrival and convergence of several powerful methods for adaptive processing of multidimensional data. Examples include moving least squares (from graphics); the bilateral filter (BF) and anisotropic diffusion (from computer vision); boosting, kernel, and spectral methods (from machine learning); nonlocal means (NLM) and its variants (from signal processing); Bregman iterations (from applied math); and kernel regression and iterative scaling (from statistics). While these approaches found their inspirations in diverse fields, they are deeply connected. In this article, I present a practical and accessible framework to understand some of the basic underpinnings of these methods, with the intention of leading the reader to a broad understanding of how they interrelate. I also illustrate connections between these techniques and more classical (empirical) Bayesian approaches. The proposed framework is used to arrive at new insights and methods, both practical and theoretical. In particular, several novel optimality properties of algorithms in wide use, such as block-matching and three-dimensional (3D) filtering (BM3D), and methods for their iterative improvement (or the nonexistence thereof) are discussed. A general approach is laid out to enable the performance analysis and subsequent improvement of many existing filtering algorithms. While much of the material discussed is applicable to the wider class of linear degradation models beyond noise (e.g., blur), to keep matters focused we consider the problem of denoising here. Digital Object Identifier: 10.1109/MSP.2011.2179329. Date of publication: 5 December 2012.
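Many of the filters this article relates can be written as multiplication by a row-stochastic matrix W = D⁻¹K built from a data-dependent kernel. A minimal 1-D bilateral-style sketch of that common form (illustrative parameter values, not taken from the article):

```python
import numpy as np

def kernel_filter_1d(z, h_x=2.0, h_r=0.3):
    """Filter a 1-D signal z as W @ z, where the affinity K_ij depends on
    both coordinate distance (bandwidth h_x) and value distance (bandwidth
    h_r), and W = D^{-1} K normalizes each row to sum to one."""
    n = len(z)
    i = np.arange(n)
    Kx = np.exp(-((i[:, None] - i[None, :]) ** 2) / (2 * h_x**2))  # spatial
    Kr = np.exp(-((z[:, None] - z[None, :]) ** 2) / (2 * h_r**2))  # range
    K = Kx * Kr
    W = K / K.sum(axis=1, keepdims=True)   # row-stochastic: D^{-1} K
    return W @ z
```

Because W is row-stochastic, constants pass through unchanged, and the range kernel suppresses averaging across large value jumps; swapping patch-based affinities into K recovers NLM-type filters within the same W @ z form.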
Real-time pattern matching using projection kernels
 IEEE Trans. Pattern Anal. Mach. Intell
, 2005
Cited by 26 (2 self)
Abstract—A novel approach to pattern matching is presented in which time complexity is reduced by two orders of magnitude compared to traditional approaches. The suggested approach uses an efficient projection scheme which bounds the distance between a pattern and an image window using very few operations on average. The projection framework is combined with a rejection scheme which allows rapid rejection of image windows that are distant from the pattern. Experiments show that the approach is effective even under very noisy conditions. The approach described here can also be used in classification schemes where the projection values serve as input features that are informative and fast to extract. Index Terms—Pattern matching, template matching, pattern detection, feature extraction, Walsh-Hadamard.
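The rejection idea can be illustrated with any orthonormal basis: by Parseval, partial sums of squared projection differences lower-bound the true squared distance, so a window can be discarded as soon as its running bound exceeds a rejection threshold. A small sketch using a Sylvester-construction Hadamard basis in natural order (a simplification; the paper uses sequency-ordered Walsh-Hadamard kernels computed incrementally):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def lower_bounds(pattern, window):
    """Cumulative lower bounds on ||pattern - window||^2 from successive
    orthonormal projections. Each prefix under-estimates the true squared
    distance, and after all n projections the bound is exact (Parseval)."""
    n = len(pattern)
    U = hadamard(n) / np.sqrt(n)             # orthonormal rows
    d = (U @ (pattern - window)) ** 2        # per-projection contributions
    return np.cumsum(d)                      # nondecreasing bounds
```

In practice only the first few projections are needed to reject most distant windows, which is the source of the claimed speedup.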