Results 11–20 of 366
Image Deblurring with Blurred/Noisy Image Pairs
"... Taking satisfactory photos under dim lighting conditions using a handheld camera is challenging. If the camera is set to a long exposure time, the image is blurred due to camera shake. On the other hand, the image is dark and noisy if it is taken with a short exposure time but with a high camera g ..."
Abstract

Cited by 129 (4 self)
Taking satisfactory photos under dim lighting conditions using a handheld camera is challenging. If the camera is set to a long exposure time, the image is blurred due to camera shake. On the other hand, the image is dark and noisy if it is taken with a short exposure time but with a high camera gain. By combining information extracted from both blurred and noisy images, however, we show in this paper how to produce a high quality image that cannot be obtained by simply denoising the noisy image, or deblurring the blurred image alone. Our approach is image deblurring with the help of the noisy image. First, both images are used to estimate an accurate blur kernel, which otherwise is difficult to obtain from a single blurred image. Second, and again using both images, a residual deconvolution is proposed to significantly reduce ringing artifacts inherent to image deconvolution. Third, the remaining ringing artifacts in smooth image regions are further suppressed by a gain-controlled deconvolution process. We demonstrate the effectiveness of our approach using a number of indoor and outdoor images taken by off-the-shelf handheld cameras in poor lighting environments.
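The residual-deconvolution idea above can be sketched in a few lines. The toy below is 1-D and substitutes generic Wiener deconvolution for the paper's deconvolution solver; the signals, kernel, and parameters are all illustrative, and the blur is taken noise-free for simplicity:

```python
import numpy as np

def wiener_deconv(b, k, snr=100.0):
    # Wiener deconvolution in the Fourier domain (circular convolution model).
    K = np.fft.fft(k, n=len(b))
    return np.real(np.fft.ifft(np.fft.fft(b) * np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)))

n = 256
x = np.zeros(n)
x[100:150] = 1.0                               # ground-truth sharp signal
k = np.zeros(n)
k[:9] = 1.0 / 9.0                              # 9-tap box blur ("camera shake")
blurred = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))  # long exposure

rng = np.random.default_rng(3)
noisy = x + 0.1 * rng.standard_normal(n)       # short exposure: sharp but noisy
denoised = np.convolve(noisy, np.ones(5) / 5, mode="same")     # crude denoiser

# Residual deconvolution: deconvolve only blurred - k*denoised, then add back.
residual = blurred - np.real(np.fft.ifft(np.fft.fft(denoised) * np.fft.fft(k)))
restored = denoised + wiener_deconv(residual, k)
```

In this toy setup the per-frequency error of `restored` is a strict shrinkage of the error of `denoised`: only the small residual is deconvolved, so deconvolution artifacts scale with the residual rather than with the full signal.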
Dense Estimation and Object-Based Segmentation of the Optical Flow with Robust Techniques
, 1998
"... In this paper we address the issue of recovering and segmenting the apparent velocity field in sequences of images. As for motion estimation, we minimize an objective function involving two robust terms. The first one cautiously captures the optical flow constraint, while the second (a priori) term ..."
Abstract

Cited by 114 (20 self)
In this paper we address the issue of recovering and segmenting the apparent velocity field in sequences of images. As for motion estimation, we minimize an objective function involving two robust terms. The first one cautiously captures the optical flow constraint, while the second (a priori) term incorporates a discontinuity-preserving smoothness constraint. To cope with the nonconvex minimization problem thus defined, we design an efficient deterministic multigrid procedure. It converges fast toward estimates of good quality, while revealing the large discontinuity structures of flow fields. We then propose an extension of the model by attaching to it a flexible object-based segmentation device based on deformable closed curves (different families of curves equipped with different kinds of priors can be easily supported). Experimental results on synthetic and natural sequences are presented, including an analysis of sensitivity to parameter tuning. Index Terms: Closed segmenting cu...
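As a minimal stand-in for the robust terms in such an objective (the paper's actual minimization is a deterministic multigrid scheme over a flow field), the sketch below uses iteratively reweighted least squares to compute a Huber M-estimate of a constant, showing why a robust penalty resists gross outliers where a quadratic one does not; all values are illustrative:

```python
import numpy as np

def irls_location(y, delta=1.0, n_iter=50):
    # Iteratively reweighted least squares for the Huber M-estimate of a
    # constant: the penalty is quadratic for small residuals and linear for
    # large ones, so gross outliers are down-weighted instead of dominating.
    c = np.mean(y)
    for _ in range(n_iter):
        r = y - c
        w = np.where(np.abs(r) <= delta, 1.0,
                     delta / np.maximum(np.abs(r), 1e-12))
        c = np.sum(w * y) / np.sum(w)
    return c

y = np.concatenate([np.full(20, 1.0), np.full(2, 50.0)])  # 2 gross outliers
c_robust = irls_location(y)       # stays near the inlier value 1.0
c_quadratic = np.mean(y)          # the least-squares fit is dragged upward
```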
Fast image deconvolution using hyper-Laplacian priors, supplementary material
, 2009
"... The heavytailed distribution of gradients in natural scenes have proven effective priors for a range of problems such as denoising, deblurring and superresolution. These distributions are well modeled by a hyperLaplacian p(x) ∝ e−kxα), typically with 0.5 ≤ α ≤ 0.8. However, the use of sparse ..."
Abstract

Cited by 106 (1 self)
The heavy-tailed distributions of gradients in natural scenes have proven effective priors for a range of problems such as denoising, deblurring and super-resolution. These distributions are well modeled by a hyper-Laplacian, p(x) ∝ exp(−k|x|^α), typically with 0.5 ≤ α ≤ 0.8. However, the use of sparse distributions makes the problem nonconvex and impractically slow to solve for multi-megapixel images. In this paper we describe a deconvolution approach that is several orders of magnitude faster than existing techniques that use hyper-Laplacian priors. We adopt an alternating minimization scheme where one of the two phases is a nonconvex problem that is separable over pixels. This per-pixel subproblem may be solved with a lookup table (LUT). Alternatively, for two specific values of α, 1/2 and 2/3, an analytic solution can be found by finding the roots of a cubic and quartic polynomial, respectively. Our approach (using either LUTs or analytic formulae) is able to deconvolve a 1 megapixel image in less than ∼3 seconds, achieving comparable quality to existing methods such as iteratively reweighted least squares (IRLS) that take ∼20 minutes. Furthermore, our method is quite general and can easily be extended to related image processing problems, beyond the deconvolution application demonstrated.
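The per-pixel LUT idea can be illustrated directly. The sketch below brute-forces the separable subproblem min_w |w|^α + (β/2)(w − v)² on a grid and tabulates the minimizer against v; the grids and the value β = 8 are illustrative choices, not the paper's:

```python
import numpy as np

def shrinkage_lut(alpha, beta, v_grid, w_grid):
    # For each value v, solve the separable per-pixel subproblem
    #   w*(v) = argmin_w |w|^alpha + (beta/2) * (w - v)^2
    # by brute-force search over candidate w values, and tabulate w*(v).
    cost = (np.abs(w_grid[None, :]) ** alpha
            + 0.5 * beta * (w_grid[None, :] - v_grid[:, None]) ** 2)
    return w_grid[np.argmin(cost, axis=1)]

v_grid = np.linspace(-4, 4, 801)
w_grid = np.linspace(-4, 4, 1601)
lut = shrinkage_lut(alpha=2 / 3, beta=8.0, v_grid=v_grid, w_grid=w_grid)

# Applying the LUT: small inputs are shrunk to zero (sparsity), large survive.
v = np.array([-2.0, -0.05, 0.0, 0.05, 2.0])
w = np.interp(v, v_grid, lut)
```

The same tabulated map is what gets applied to every gradient pixel in the fast phase of the alternating scheme; for α = 1/2 or 2/3 the LUT can be replaced by the analytic root-finding the abstract mentions.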
MINIMIZERS OF COST-FUNCTIONS INVOLVING NONSMOOTH DATA-FIDELITY TERMS. APPLICATION TO THE PROCESSING OF OUTLIERS
, 2002
"... We present a theoretical study of the recovery of an unknown vector x ∈ Rp (such as a signal or an image) from noisy data y ∈ Rq by minimizing with respect to x a regularized costfunction F(x, y) = Ψ(x, y) + αΦ(x), where Ψ is a datafidelity term, Φ is a smooth regularization term, and α> 0 i ..."
Abstract

Cited by 103 (19 self)
We present a theoretical study of the recovery of an unknown vector x ∈ R^p (such as a signal or an image) from noisy data y ∈ R^q by minimizing with respect to x a regularized cost-function F(x, y) = Ψ(x, y) + αΦ(x), where Ψ is a data-fidelity term, Φ is a smooth regularization term, and α > 0 is a parameter. Typically, Ψ(x, y) = ‖Ax − y‖², where A is a linear operator. The data-fidelity terms Ψ involved in regularized cost-functions are generally smooth functions; only a few papers make an exception to this and they consider restricted situations. Nonsmooth data-fidelity terms are avoided in image processing. In spite of this, we consider both smooth and nonsmooth data-fidelity terms. Our goal is to capture essential features exhibited by the local minimizers of regularized cost-functions in relation to the smoothness of the data-fidelity term. In order to fix the context of our study, we consider Ψ(x, y) = Σᵢ ψ(aᵢᵀx − yᵢ), where the aᵢᵀ are the rows of A and ψ is C^m on R \ {0}. We show that if ψ′(0−) < ψ′(0+), then typical data y give rise to local minimizers x̂ of F(., y) which fit exactly a certain number of the data entries: there is a possibly large set ĥ of indexes such that aᵢᵀx̂ = yᵢ for every i ∈ ĥ. In contrast, if ψ is
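A hypothetical one-dimensional instance makes the exact-fit property concrete: take A with rows aᵢ = 1, ψ(t) = |t| (so ψ′(0−) = −1 < 1 = ψ′(0+)) and Φ(x) = x²:

```python
import numpy as np

# Minimize F(x, y) = sum_i |x - y_i| + alpha * x**2 over scalar x.
# With a non-smooth |.| data-fidelity, the minimizer sits exactly on one
# of the data entries, i.e. it fits that entry exactly.
y = np.array([1.0, 2.0, 5.0])
alpha = 0.01

def F(x):
    return np.sum(np.abs(x - y)) + alpha * x ** 2

# F is piecewise smooth, so the minimizer is either a kink (a data entry)
# or a stationary point of a smooth piece; a dense scan covers the latter.
xs = np.linspace(-2.0, 8.0, 10001)
x_scan = xs[np.argmin([F(x) for x in xs])]
x_star = min(list(y) + [x_scan], key=F)
# x_star equals the data entry 2.0 exactly: alpha is too small to pull the
# minimizer off the kink, matching the exact-fit phenomenon of the paper.
```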
A Review of Nonlinear Diffusion Filtering
, 1997
"... . This paper gives an overview of scalespace and image enhancement techniques which are based on parabolic partial differential equations in divergence form. In the nonlinear setting this filter class allows to integrate apriori knowledge into the evolution. We sketch basic ideas behind the differ ..."
Abstract

Cited by 100 (10 self)
This paper gives an overview of scale-space and image enhancement techniques which are based on parabolic partial differential equations in divergence form. In the nonlinear setting, this filter class allows a-priori knowledge to be integrated into the evolution. We sketch the basic ideas behind the different filter models, and discuss their theoretical foundations and scale-space properties, discrete aspects, suitable algorithms, generalizations, and applications. During the last decade, nonlinear diffusion filters have become a powerful and well-founded tool in multiscale image analysis. These models allow a-priori knowledge to be included in the scale-space evolution, and they lead to an image simplification which simultaneously preserves or even enhances semantically important information such as edges, lines, or flow-like structures. Many papers have appeared proposing different models, investigating their theoretical foundations, and describing interesting applications. For a n...
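A minimal sketch of one member of this filter class, assuming the Perona-Malik diffusivity g(s) = 1/(1 + (s/κ)²) and an explicit time-stepping scheme with periodic boundaries (a simplification of the usual Neumann boundaries):

```python
import numpy as np

def perona_malik(u, n_iter=50, kappa=0.1, dt=0.2):
    # Explicit scheme for the nonlinear diffusion u_t = div(g(|grad u|) grad u)
    # with diffusivity g(s) = 1 / (1 + (s/kappa)^2): diffusion is strong in
    # flat regions and nearly stops across large jumps (edges).
    u = u.astype(float).copy()
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
    for _ in range(n_iter):
        # Differences to the four neighbours; np.roll wraps around,
        # i.e. periodic boundaries.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# A noisy vertical step edge: diffusion smooths both flat halves while the
# jump at column 16 (and the periodic seam) is preserved.
rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[:, 16:] = 1.0
noisy = img + 0.05 * rng.standard_normal(img.shape)
out = perona_malik(noisy)
```

This is exactly the edge-preserving simplification the abstract describes: the noise (small gradients) is diffused away while the semantically important edge (large gradient, tiny diffusivity) survives the evolution.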
Hermeneutics: Interpretation Theory
 in Schleiermacher, Dilthey, Heidegger and Gadamer, Northwestern University Studies in Phenomenology & Existential Philosophy
, 1969
"... Report on proposed doctoral thesis: ..."
Dense Disparity Map Estimation Respecting Image Discontinuities: A PDE and Scale-Space Based Approach
 JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION
, 2000
"... We present an energy based approach to estimate a dense disparity map between two images while preserving its discontinuities resulting from image boundaries. We first derive a simplied expression for the disparity that allows us to easily estimate it from a stereo pair of images using an energy min ..."
Abstract

Cited by 85 (11 self)
We present an energy-based approach to estimate a dense disparity map between two images while preserving its discontinuities resulting from image boundaries. We first derive a simplified expression for the disparity that allows us to easily estimate it from a stereo pair of images using an energy minimization approach. We assume that the epipolar geometry is known, and we include this information in the energy model. Discontinuities are preserved by means of a regularization term based on the Nagel-Enkelmann operator. We investigate the associated Euler-Lagrange equation of the energy functional, and we approach the solution of the underlying partial differential equation (PDE) using a gradient descent method. In order to reduce the risk of being trapped in irrelevant local minima during the iterations, we use a focusing strategy based on a linear scale-space. We prove the existence and uniqueness of a solution to the underlying parabolic partial differential equation. Experimental results on bot...
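A hypothetical 1-D reduction of the energy-minimization step: estimate a single constant disparity d between two rectified scanlines by gradient descent on the data term, using the linearization I₁(x − d) ≈ I₁(x) − d·I₁′(x). The signals and step size are illustrative, and the regularization and scale-space focusing of the paper are omitted:

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 200)
true_d = 0.3
I1 = np.sin(x)                     # left scanline
I2 = np.sin(x - true_d)            # right scanline, shifted by the disparity

Ix = np.gradient(I1, x)            # spatial derivative of I1
d = 0.0
step = 0.05
for _ in range(500):
    # Linearized brightness constancy: I1(x - d) ~ I1(x) - d * Ix(x), so the
    # residual of the data energy E(d) = mean((I2 - I1(x - d))^2) is:
    r = I2 - (I1 - d * Ix)
    d -= step * np.mean(2.0 * r * Ix)   # gradient step on the mean energy
# d is now close to true_d, up to linearization and discretization error.
```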
Recovering Edges in Ill-Posed Inverse Problems: Optimality of Curvelet Frames
, 2000
"... We consider a model problem of recovering a function f(x1,x2) from noisy Radon data. The function f to be recovered is assumed smooth apart from a discontinuity along a C2 curve – i.e. an edge. We use the continuum white noise model, with noise level ɛ. Traditional linear methods for solving such in ..."
Abstract

Cited by 78 (14 self)
We consider a model problem of recovering a function f(x₁, x₂) from noisy Radon data. The function f to be recovered is assumed smooth apart from a discontinuity along a C² curve – i.e. an edge. We use the continuum white noise model, with noise level ε. Traditional linear methods for solving such inverse problems behave poorly in the presence of edges. Qualitatively, the reconstructions are blurred near the edges; quantitatively, they give in our model Mean Squared Errors (MSEs) that tend to zero with noise level ε only as O(ε^(1/2)) as ε → 0. A recent innovation – nonlinear shrinkage in the wavelet domain – visually improves edge sharpness and improves MSE convergence to O(ε^(2/3)). However, as we show here, this rate is not optimal. In fact, essentially optimal performance is obtained by deploying the recently-introduced tight frames of curvelets in this setting. Curvelets are smooth, highly anisotropic elements ideally suited for detecting and synthesizing curved edges. To deploy them in the Radon setting, we construct a curvelet-based biorthogonal decomposition
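The wavelet-shrinkage baseline mentioned above is easy to sketch. The code below is a generic 1-D Haar soft-thresholding denoiser (not the paper's curvelet construction), with an illustrative signal and threshold, showing how nonlinear shrinkage suppresses noise while the few large coefficients carrying an edge survive:

```python
import numpy as np

def haar(x):
    # One level of the orthonormal Haar transform: averages and details.
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def ihaar(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def haar_shrink(x, thresh, levels=4):
    # Multi-level decomposition, soft-threshold the detail coefficients,
    # reconstruct.  Small (noise) coefficients are killed; the sparse large
    # coefficients encoding the edge pass through nearly intact.
    details = []
    a = x
    for _ in range(levels):
        a, d = haar(a)
        details.append(np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0))
    for d in reversed(details):
        a = ihaar(a, d)
    return a

n = 256
x = np.where(np.arange(n) < n // 2, 0.0, 1.0)   # step edge
rng = np.random.default_rng(4)
noisy = x + 0.1 * rng.standard_normal(n)
denoised = haar_shrink(noisy, thresh=0.3)       # thresh ~ 3 * noise std
```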
Conjugate-Gradient Preconditioning Methods for Shift-Variant PET Image Reconstruction
 IEEE Transactions on Image Processing
, 2002
"... Gradientbased iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian mat ..."
Abstract

Cited by 76 (31 self)
Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e. for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as arises from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on nonquadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
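The role of a preconditioner in CG can be sketched generically. Below, `M_inv` applies a diagonal (Jacobi) preconditioner, the simplest of the options the abstract compares; the test matrix is an illustrative ill-conditioned SPD system, not a PET Hessian:

```python
import numpy as np

def pcg(A, b, M_inv, n_iter=400, tol=1e-10):
    # Preconditioned conjugate gradients for A x = b; M_inv(r) applies an
    # approximate inverse of A to the residual.  A good preconditioner
    # clusters the eigenvalues of M^{-1} A and accelerates convergence.
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(1)
n = 100
D = np.diag(np.logspace(0, 3, n))          # widely spread eigenvalues
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ D @ Q.T
A = 0.5 * (A + A.T)                        # enforce exact symmetry
b = rng.standard_normal(n)
x = pcg(A, b, M_inv=lambda r: r / np.diag(A))
```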
A Bayesian Approach to Introducing Anatomo-Functional Priors in the EEG/MEG Inverse Problem
, 1997
"... In this paper, we present a new approach to the recovering of dipole magnitudes in a distributed source model for magnetoencephalographic (MEG) and electroencephalographic (EEG) imaging. This method consists in introducing spatial and temporal a priori information as a cure to this illposed inverse ..."
Abstract

Cited by 74 (2 self)
In this paper, we present a new approach to the recovery of dipole magnitudes in a distributed source model for magnetoencephalographic (MEG) and electroencephalographic (EEG) imaging. This method consists of introducing spatial and temporal a priori information as a cure to this ill-posed inverse problem. A nonlinear spatial regularization scheme allows the preservation of dipole moment discontinuities between some a priori non-correlated sources, for instance, when considering dipoles located on both sides of a sulcus. Moreover, we introduce temporal smoothness constraints on dipole magnitude evolution at time scales smaller than those of cognitive processes. These priors are easily integrated into a Bayesian formalism, yielding a maximum a posteriori (MAP) estimator of brain electrical activity. Results from EEG simulations of our method are presented and compared with those of classical quadratic regularization and a now popular generalized minimum-norm technique called low-resolution electromagnetic tomography (LORETA).
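The quadratic-regularization baseline the authors compare against has a closed form. The sketch below is a generic minimum-norm MAP estimate under a Gaussian prior; the lead field G, the dimensions, and λ are all made-up illustrative values:

```python
import numpy as np

# MAP estimate with Gaussian noise and a quadratic (Gaussian) prior: given
# measurements y = G s + noise, minimize ||y - G s||^2 + lam * ||L s||^2,
# whose closed form is s_hat = (G^T G + lam * L^T L)^{-1} G^T y.
rng = np.random.default_rng(2)
n_sensors, n_sources = 20, 60                     # under-determined, as in EEG/MEG
G = rng.standard_normal((n_sensors, n_sources))   # hypothetical lead field
s_true = np.zeros(n_sources)
s_true[10:15] = 1.0                               # a small active patch
y = G @ s_true + 0.01 * rng.standard_normal(n_sensors)

lam = 0.1
L = np.eye(n_sources)                             # minimum-norm prior (L = I)
s_hat = np.linalg.solve(G.T @ G + lam * L.T @ L, G.T @ y)
```

Replacing the quadratic term λ‖Ls‖² with the paper's nonlinear spatial regularizer is what removes the oversmoothing of discontinuities across a sulcus, at the cost of losing this closed-form solution.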