Results 1–10 of 213
Deterministic edge-preserving regularization in computed imaging
IEEE Trans. Image Processing, 1997
Abstract

Cited by 301 (27 self)
Abstract—Many image processing problems are ill posed and must be regularized. Usually, a roughness penalty is imposed on the solution. The difficulty is to avoid the smoothing of edges, which are very important attributes of the image. In this paper, we first give conditions for the design of such an edge-preserving regularization. Under these conditions, we show that it is possible to introduce an auxiliary variable whose role is twofold. First, it marks the discontinuities and ensures their preservation from smoothing. Second, it makes the criterion half-quadratic. The optimization is then easier. We propose a deterministic strategy, based on alternate minimizations on the image and the auxiliary variable. This leads to the definition of an original reconstruction algorithm, called ARTUR. Some theoretical properties of ARTUR are discussed. Experimental results illustrate the behavior of the algorithm. These results are shown in the field of tomography, but this method can be applied in a large number of applications in image processing.
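The alternating scheme described in the abstract can be sketched for a 1-D signal. This is a minimal illustration of half-quadratic alternate minimization, not the paper's ARTUR implementation; the potential φ(t) = √(ε² + t²), the function name, and all parameter values are our own illustrative choices:

```python
import math

def half_quadratic_denoise(y, lam=2.0, n_outer=30, eps=1e-3):
    """Edge-preserving 1-D denoising via half-quadratic alternate
    minimization (ARTUR-style sketch, illustrative only).

    Minimizes  sum_i (u_i - y_i)^2 + lam * sum_i phi(u_{i+1} - u_i)
    with phi(t) = sqrt(eps^2 + t^2) by alternating:
      b-step: b_i = phi'(t_i) / (2 t_i)  (closed form),
      u-step: exact solve of the resulting tridiagonal quadratic problem.
    """
    n = len(y)
    u = list(y)
    for _ in range(n_outer):
        # b-step: the auxiliary variable marks discontinuities
        # (small b at large gradients -> little smoothing across edges).
        b = [0.5 / math.sqrt(eps ** 2 + (u[i + 1] - u[i]) ** 2)
             for i in range(n - 1)]
        # u-step: symmetric tridiagonal system, solved by the
        # Thomas algorithm.
        lower = [-lam * b[i] for i in range(n - 1)]
        upper = lower[:]
        diag = [1.0 + lam * ((b[i - 1] if i > 0 else 0.0)
                             + (b[i] if i < n - 1 else 0.0))
                for i in range(n)]
        rhs = list(y)
        for i in range(1, n):          # forward elimination
            w = lower[i - 1] / diag[i - 1]
            diag[i] -= w * upper[i - 1]
            rhs[i] -= w * rhs[i - 1]
        u[n - 1] = rhs[n - 1] / diag[n - 1]
        for i in range(n - 2, -1, -1):  # back substitution
            u[i] = (rhs[i] - upper[i] * u[i + 1]) / u.__class__([diag[i]])[0]
    return u
```

Run on a clean step signal, the two plateaus are smoothed toward their means while the jump itself is preserved rather than blurred away.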
A new alternating minimization algorithm for total variation image reconstruction
SIAM J. Imaging Sci., 2008
Abstract

Cited by 211 (24 self)
We propose, analyze and test an alternating minimization algorithm for recovering images from blurry and noisy observations with total variation (TV) regularization. This algorithm arises from a new half-quadratic model applicable to not only the anisotropic but also isotropic forms of total variation discretizations. The per-iteration computational complexity of the algorithm is three Fast Fourier Transforms (FFTs). We establish strong convergence properties for the algorithm, including finite convergence for some variables and relatively fast exponential (or q-linear in optimization terminology) convergence for the others. Furthermore, we propose a continuation scheme to accelerate the practical convergence of the algorithm. Extensive numerical results show that our algorithm performs favorably in comparison to several state-of-the-art algorithms. In particular, it runs orders of magnitude faster than the Lagged Diffusivity algorithm for total-variation-based deblurring. Some extensions of our algorithm are also discussed.
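One building block of such half-quadratic TV schemes is that the auxiliary (per-pixel gradient) subproblem has a closed-form solution: isotropic two-dimensional shrinkage. A minimal sketch under our own naming (not code from the paper), for a single gradient vector g = (gx, gy):

```python
import math

def isotropic_shrink(gx, gy, beta):
    """Closed-form solution of the per-pixel w-subproblem
         min_w |w| + (beta/2) * ||w - g||^2
    for a 2-vector g = (gx, gy): shrink the magnitude of g by 1/beta
    while keeping its direction (a 2-D soft threshold). Vectors with
    magnitude below 1/beta collapse to zero."""
    mag = math.hypot(gx, gy)
    if mag <= 1.0 / beta:
        return (0.0, 0.0)
    scale = (mag - 1.0 / beta) / mag
    return (gx * scale, gy * scale)
```

In a full alternating scheme this shrinkage step alternates with a quadratic image update, which is where the FFT solves mentioned in the abstract come in.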
Robust Solutions to Least-Squares Problems with Uncertain Data
1997
Abstract

Cited by 189 (14 self)
We consider least-squares problems where the coefficient matrices A, b are unknown but bounded. We minimize the worst-case residual error using (convex) second-order cone programming, yielding an algorithm with complexity similar to one singular value decomposition of A. The method can be interpreted as a Tikhonov regularization procedure, with the advantage that it provides an exact bound on the robustness of the solution, and a rigorous way to compute the regularization parameter. When the perturbation has a known (e.g., Toeplitz) structure, the same problem can be solved in polynomial time using semidefinite programming (SDP). We also consider the case when A, b are rational functions of an unknown-but-bounded perturbation vector. We show how to minimize (via SDP) upper bounds on the optimal worst-case residual. We provide numerical examples, including one from robust identification and one from robust interpolation. Key Words. Least-squares, uncertainty, robustness, second-order cone...
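For an unstructured spectral-norm bound ‖ΔA‖ ≤ ρ (and exact b), the worst-case residual in this setting has the closed form ‖Ax − b‖ + ρ‖x‖, attained by a rank-one perturbation aligned with the residual. A small pure-Python check of that identity (the helper names are ours, purely illustrative):

```python
import math

def worst_case_residual(A, x, b, rho):
    """Closed form:  max_{||dA|| <= rho} ||(A + dA)x - b||_2
                   = ||Ax - b||_2 + rho * ||x||_2.
    A is given as a list of rows. Returns the bound plus the residual
    data needed to build the worst-case perturbation."""
    r = [sum(aij * xj for aij, xj in zip(row, x)) - bi
         for row, bi in zip(A, b)]
    nr = math.sqrt(sum(ri * ri for ri in r))
    nx = math.sqrt(sum(xi * xi for xi in x))
    return nr + rho * nx, r, nr, nx

def adversarial_perturbation(r, nr, x, nx, rho):
    """Rank-one dA = rho * r x^T / (||r|| ||x||) attains the bound."""
    return [[rho * ri * xj / (nr * nx) for xj in x] for ri in r]
```

Evaluating ‖(A + ΔA)x − b‖ at this rank-one ΔA reproduces the closed-form bound exactly, which is what makes the worst case tractable.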
MINIMIZERS OF COST-FUNCTIONS INVOLVING NONSMOOTH DATA-FIDELITY TERMS. APPLICATION TO THE PROCESSING OF OUTLIERS
2002
Abstract

Cited by 102 (19 self)
We present a theoretical study of the recovery of an unknown vector x ∈ ℝ^p (such as a signal or an image) from noisy data y ∈ ℝ^q by minimizing with respect to x a regularized cost-function F(x, y) = Ψ(x, y) + αΦ(x), where Ψ is a data-fidelity term, Φ is a smooth regularization term, and α > 0 is a parameter. Typically, Ψ(x, y) = ‖Ax − y‖², where A is a linear operator. The data-fidelity terms Ψ involved in regularized cost-functions are generally smooth functions; only a few papers make an exception to this, and they consider restricted situations. Nonsmooth data-fidelity terms are avoided in image processing. In spite of this, we consider both smooth and nonsmooth data-fidelity terms. Our goal is to capture essential features exhibited by the local minimizers of regularized cost-functions in relation to the smoothness of the data-fidelity term. In order to fix the context of our study, we consider Ψ(x, y) = ∑_i ψ(a_i^T x − y_i), where the a_i^T are the rows of A and ψ is C^m on ℝ \ {0}. We show that if ψ′(0−) < ψ′(0+), then typical data y give rise to local minimizers x̂ of F(·, y) which fit exactly a certain number of the data entries: there is a possibly large set ĥ of indexes such that a_i^T x̂ = y_i for every i ∈ ĥ. In contrast, if ψ is ...
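The exact-fit phenomenon is easy to reproduce in one dimension. The sketch below minimizes F(x) = ∑|x_i − y_i| + α∑(x_{i+1} − x_i)² by exact coordinate descent, an illustrative choice of ours rather than anything from the paper: each scalar subproblem min_t |t − y_i| + (c/2)(t − m)² has a closed form, and the entry is fitted exactly (x_i = y_i) whenever |c(y_i − m)| ≤ 1.

```python
def l1_fidelity_cd(y, alpha=0.5, n_iter=200):
    """Coordinate descent on
         F(x) = sum_i |x_i - y_i| + alpha * sum_i (x_{i+1} - x_i)^2
    (nonsmooth l1 data fidelity, smooth quadratic regularization).
    The quadratic part seen from x_i is (c/2)(t - m)^2 + const, where
    m is the neighbor average and c the curvature; the 1-D minimizer
    fits the datum exactly when the subgradient condition holds."""
    n = len(y)
    x = list(y)
    for _ in range(n_iter):
        for i in range(n):
            nbrs = [x[j] for j in (i - 1, i + 1) if 0 <= j < n]
            m = sum(nbrs) / len(nbrs)
            c = 2.0 * alpha * len(nbrs)
            if abs(c * (y[i] - m)) <= 1.0:
                x[i] = y[i]                 # exact fit of this data entry
            elif y[i] > m:
                x[i] = m + 1.0 / c          # datum too far above: clip
            else:
                x[i] = m - 1.0 / c          # datum too far below: clip
    return x
```

On data with one outlier, every regular entry is reproduced exactly while the outlier is rejected, which is the behavior the theory above predicts for ψ′(0−) < ψ′(0+).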
Bayesian and Regularization Methods for Hyperparameter Estimation in Image Restoration
IEEE Trans. Image Processing, 1999
Abstract

Cited by 76 (27 self)
In this paper, we propose the application of the hierarchical Bayesian paradigm to the image restoration problem. We derive expressions for the iterative evaluation of the two hyperparameters by applying the evidence and maximum a posteriori (MAP) analyses within the hierarchical Bayesian paradigm. We show analytically that the analysis provided by the evidence approach is more realistic and appropriate than the MAP approach for the image restoration problem. We furthermore study the relationship between the evidence approach and an iterative approach resulting from the set-theoretic regularization approach for estimating the two hyperparameters, or their ratio, defined as the regularization parameter. Finally, the proposed algorithms are tested experimentally.
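The flavor of such iterative hyperparameter updates can be seen on a toy scalar model y_i = a_i·x + noise, with noise ~ N(0, 1/β) and prior x ~ N(0, 1/α). The fixed-point updates below are the standard MacKay-style evidence (type-II maximum likelihood) updates, shown only to illustrate the idea; the data, names, and model are ours, not the paper's restoration setting.

```python
def evidence_hyperparams(a, y, n_iter=100):
    """Evidence-based iterative estimation of the two hyperparameters
    alpha (prior precision) and beta (noise precision) for the toy
    model y_i = a_i * x + noise with a scalar unknown x."""
    n = len(y)
    alpha, beta = 1.0, 1.0                 # arbitrary initial guesses
    saa = sum(ai * ai for ai in a)
    say = sum(ai * yi for ai, yi in zip(a, y))
    for _ in range(n_iter):
        p = alpha + beta * saa             # posterior precision of x
        m = beta * say / p                 # posterior mean of x
        gamma = beta * saa / p             # effective number of parameters
        alpha = gamma / (m * m)            # prior-precision update
        rss = sum((yi - ai * m) ** 2 for ai, yi in zip(a, y))
        beta = (n - gamma) / rss           # noise-precision update
    return alpha, beta, m
```

With nearly noise-free data the estimated noise precision β grows much larger than α, and the posterior mean approaches the least-squares fit.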
A Bayesian Approach to Introducing Anatomo-Functional Priors in the EEG/MEG Inverse Problem
1997
Abstract

Cited by 71 (2 self)
In this paper, we present a new approach to the recovery of dipole magnitudes in a distributed source model for magnetoencephalographic (MEG) and electroencephalographic (EEG) imaging. The method introduces spatial and temporal a priori information as a cure to this ill-posed inverse problem. A nonlinear spatial regularization scheme allows the preservation of dipole moment discontinuities between some a priori non-correlated sources, for instance, when considering dipoles located on both sides of a sulcus. Moreover, we introduce temporal smoothness constraints on dipole magnitude evolution at time scales smaller than those of cognitive processes. These priors are easily integrated into a Bayesian formalism, yielding a maximum a posteriori (MAP) estimator of brain electrical activity. Results from EEG simulations of our method are presented and compared with those of classical quadratic regularization and a now-popular generalized minimum-norm technique called low-resolution electromagnetic tomography (LORETA).
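In the quadratic baseline case the MAP estimate reduces to a regularized minimum-norm solution x = Gᵀ(GGᵀ + λI)⁻¹y. A deliberately tiny sketch of that baseline with one sensor and two sources, so that GGᵀ is a scalar; the function name and numbers are ours, and this is the quadratic comparator, not the paper's nonlinear spatially regularized estimator:

```python
def map_min_norm(G_row, y, lam):
    """Regularized minimum-norm (MAP with an isotropic Gaussian prior)
    estimate  x = G^T (G G^T + lam*I)^{-1} y  for a single sensor:
    G is one row of the gain matrix, y a scalar measurement, so the
    matrix to invert degenerates to the scalar sum(g^2) + lam."""
    s = sum(g * g for g in G_row) + lam    # G G^T + lam (scalar here)
    c = y / s                              # (G G^T + lam I)^{-1} y
    return [g * c for g in G_row]          # apply G^T
```

With λ = 0 this returns the classical minimum-norm solution that reproduces the measurement exactly; λ > 0 shrinks the estimate, trading data fit for prior plausibility.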
Image restoration subject to a total variation constraint
IEEE Transactions on Image Processing, 2004
Abstract

Cited by 52 (6 self)
Abstract—Total variation has proven to be a valuable concept in connection with the recovery of images featuring piecewise smooth components. So far, however, it has been used exclusively as an objective to be minimized under constraints. In this paper, we propose an alternative formulation in which total variation is used as a constraint in a general convex programming framework. This approach places no limitation on the incorporation of additional constraints in the restoration process, and the resulting optimization problem can be solved efficiently via block-iterative methods. Image denoising and deconvolution applications are demonstrated.
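Enforcing a convex constraint inside such a framework comes down to computing projections onto the constraint set. As a simple prototype (our choice of example, not the paper's TV projection), here is the exact Euclidean projection onto an ℓ1 ball by the standard sort-and-threshold rule; an anisotropic TV bound on pixel differences is handled analogously:

```python
def project_l1_ball(v, tau):
    """Euclidean projection of v onto {x : ||x||_1 <= tau}.
    If v is already inside the ball it is returned unchanged;
    otherwise soft-threshold by the value theta that makes the
    projected l1 norm equal tau (sort-and-threshold rule)."""
    if sum(abs(vi) for vi in v) <= tau:
        return list(v)
    u = sorted((abs(vi) for vi in v), reverse=True)
    csum, theta = 0.0, 0.0
    for k, uk in enumerate(u, start=1):
        csum += uk
        t = (csum - tau) / k
        if uk - t <= 0:        # first index failing the support test
            break
        theta = t
    return [max(abs(vi) - theta, 0.0) * (1.0 if vi >= 0 else -1.0)
            for vi in v]
```

Block-iterative schemes of the kind the abstract mentions then alternate projections like this one with projections onto the other constraint sets (data fidelity, pixel bounds, and so on).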
Analysis of Half-Quadratic Minimization Methods for Signal and Image Recovery
2003
Abstract

Cited by 47 (8 self)
Abstract. We address the minimization of regularized convex cost-functions which are customarily used for edge-preserving restoration and reconstruction of signals and images. In order to accelerate computation, the multiplicative and the additive half-quadratic reformulations of the original cost-function were pioneered in Geman and Reynolds [IEEE Trans. Pattern Anal. Machine Intelligence, 14 (1992), pp. 367–383] and Geman and Yang [IEEE Trans. Image Process., 4 (1995), pp. 932–946]. The alternate minimization of the resultant (augmented) cost-functions has a simple explicit form. The goal of this paper is to provide a systematic analysis of the convergence rate achieved by these methods. For the multiplicative and additive half-quadratic regularizations, we determine upper bounds on their root-convergence factors. The bound for the multiplicative form is seen to be always smaller than the bound for the additive form. Experiments show that the number of iterations required for convergence for the multiplicative form is always less than that for the additive form. However, the computational cost of each iteration is much higher for the multiplicative form than for the additive form. The global assessment is that minimization using the additive form of half-quadratic regularization is faster than using the multiplicative form. When the additive form is applicable, it is hence recommended. Extensive experiments demonstrate that in our MATLAB implementation, both methods are substantially faster (in terms of computational times) than the standard MATLAB Optimization Toolbox routines used in our comparison study.
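The additive (Geman and Yang) construction rewrites a non-quadratic potential as an infimum of a quadratic term plus a separable term in an auxiliary variable s. For the Huber potential this decomposition is exact: huber(t, δ) = min_s [½(t − s)² + δ|s|], with minimizer s* equal to the soft threshold of t. A small brute-force check (the δ and grid values are our own illustrative choices):

```python
def huber(t, delta):
    """Huber potential: quadratic near 0, linear in the tails."""
    if abs(t) <= delta:
        return 0.5 * t * t
    return delta * abs(t) - 0.5 * delta * delta

def additive_hq_value(t, delta, grid_step=1e-4, span=10.0):
    """Additive half-quadratic decomposition evaluated by brute force:
       min_s [ 0.5 * (t - s)^2 + delta * |s| ]
    over a fine grid of s; the closed-form minimizer is the soft
    threshold s* = sign(t) * max(|t| - delta, 0)."""
    s, best = -span, float("inf")
    while s <= span:
        best = min(best, 0.5 * (t - s) ** 2 + delta * abs(s))
        s += grid_step
    return best
```

Minimizing over s for fixed t is the cheap explicit step of the additive scheme; the multiplicative form replaces it with a weight update but requires a harder linear solve per iteration, which is the trade-off the abstract quantifies.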
Weakly constrained minimization. Application to the estimation of images and signals involving constant regions
2004
Abstract

Cited by 41 (13 self)
Abstract. We focus on the question of how the shape of a cost-function determines the features manifested by its local (and hence global) minimizers. Our goal is to check the possibility that the local minimizers of an unconstrained cost-function satisfy different subsets of affine constraints dependent on the data, hence the word “weak”. A typical example is the estimation of images and signals which are constant on some regions. We provide general conditions on cost-functions which ensure that their minimizers can satisfy weak constraints when noisy data range over an open subset. These cost-functions are nonsmooth at all points satisfying the weak constraints. In contrast, the local minimizers of smooth cost-functions can almost never satisfy weak constraints. These results, obtained in a general setting, are applied to analyze the minimizers of cost-functions composed of a data-fidelity term and a regularization term. We thus consider the effect produced by nonsmooth regularization, in comparison with smooth regularization. In particular, these results explain the staircasing effect, well known in total-variation methods. Theoretical results are illustrated using analytical examples and numerical experiments.
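The staircasing claim can be reproduced directly: minimizers of a cost with a nonsmooth (TV) regularizer contain neighbors that are exactly equal. The sketch below minimizes F(x) = ∑(x_i − y_i)² + λ∑|x_{i+1} − x_i| by exact coordinate descent, our own illustrative choice; coordinate descent may stall short of the global minimizer on this nonsmooth coupled objective, but the exactly constant regions it produces are precisely the weak-constraint phenomenon being illustrated.

```python
def tv_denoise_cd(y, lam, n_iter=100):
    """Coordinate descent on
         F(x) = sum_i (x_i - y_i)^2 + lam * sum_i |x_{i+1} - x_i|.
    Each 1-D subproblem is convex and piecewise quadratic: its
    minimizer lies among the neighbor values (the kinks) and the
    stationary points of the quadratic pieces, so an exact candidate
    search solves it. Setting x_i to a neighbor's value creates an
    exactly constant pair: the staircasing effect."""
    n = len(y)
    x = list(y)
    for _ in range(n_iter):
        for i in range(n):
            nbrs = [x[j] for j in (i - 1, i + 1) if 0 <= j < n]
            cands = nbrs + [y[i] + lam * s / 2.0
                            for s in range(-len(nbrs), len(nbrs) + 1)]

            def f(t):
                return (t - y[i]) ** 2 + lam * sum(abs(t - a) for a in nbrs)

            x[i] = min(cands, key=f)
    return x
```

On noisy near-piecewise-constant data the result contains runs of bitwise-identical values separated by a preserved jump, whereas a smooth (quadratic) regularizer would make all entries distinct.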