Results 1–10 of 19
Recovery algorithms for vector valued data with joint sparsity constraints
2006
Cited by 71 (21 self)
Vector valued data appearing in concrete applications often possess sparse expansions with respect to a preassigned frame for each vector component individually. Additionally, different components may also exhibit common sparsity patterns. Recently, sparsity measures that take such joint sparsity patterns into account, promoting coupling of nonvanishing components, have been introduced. These measures are typically constructed as weighted ℓ1 norms of componentwise ℓq norms of frame coefficients. We show how to compute solutions of linear inverse problems with such joint sparsity regularization constraints by fast thresholded Landweber algorithms. Next we discuss the adaptive choice of suitable weights appearing in the definition of the sparsity measures. The weights are interpreted as indicators of the sparsity pattern and are iteratively updated after each application of the thresholded Landweber algorithm. The resulting two-step algorithm is interpreted as a double-minimization scheme for a suitable target functional. We show its ℓ2-norm convergence. An implementable version of the algorithm is also formulated, and its norm convergence is proven. Numerical experiments in color image restoration are presented.
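As a concrete illustration of a thresholded Landweber iteration with a joint sparsity penalty, the sketch below applies group soft-thresholding to the rows of a multichannel coefficient array (the ℓ1-of-ℓ2 case, q = 2). All names (A, Y, tau) and the finite dimensional setting are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def joint_soft_threshold(X, tau):
    """Group soft-thresholding: shrink each row of X (one coefficient
    across all channels) by tau in its Euclidean norm. A coefficient
    survives only if it is large across the channels jointly."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * X

def thresholded_landweber(A, Y, tau, n_iter=200):
    """Thresholded Landweber iteration for
    min ||A X - Y||_F^2 + 2*tau * sum_i ||row_i(X)||_2.
    A: (m, n) operator applied channel-wise; Y: (m, c) multichannel data."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # keep step * A^T A a contraction
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        X = joint_soft_threshold(X + step * A.T @ (Y - A @ X), step * tau)
    return X
```

When A is the identity, the fixed point is exactly the group soft-thresholding of the data, which provides a quick sanity check.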
Accelerated Projected Gradient Method for Linear Inverse Problems with Sparsity Constraints
The Journal of Fourier Analysis and Applications, 2004
Cited by 58 (10 self)
Regularization of ill-posed linear inverse problems via ℓ1 penalization has been proposed for cases where the solution is known to be (almost) sparse. One way to obtain the minimizer of such an ℓ1 penalized functional is via an iterative soft-thresholding algorithm. We propose an alternative implementation of the ℓ1 constraints, using a gradient method with projection onto ℓ1 balls. The corresponding algorithm again uses iterative soft-thresholding, now with a variable thresholding parameter. We also propose accelerated versions of this iterative method, using ingredients of the (linear) steepest descent method. We prove convergence in norm for one of these projected gradient methods, with and without acceleration.
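A minimal sketch of such a projected gradient method, assuming a finite dimensional matrix A and the standard sort-based Euclidean projection onto the ℓ1 ball (which reduces to soft-thresholding with a data-dependent threshold); all names and the step-size choice are illustrative:

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto {x : ||x||_1 <= radius},
    computed as soft-thresholding with a data-dependent threshold theta."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                 # sorted magnitudes, descending
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(u) + 1) > css - radius)[0][-1]
    theta = (css[k] - radius) / (k + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def projected_gradient(A, y, radius, n_iter=500):
    """Projected gradient for min ||A x - y||^2 subject to ||x||_1 <= radius."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / ||A||^2 step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project_l1_ball(x + step * A.T @ (y - A @ x), radius)
    return x
```

For A equal to the identity, the iteration converges in one step to the projection of y itself, a convenient sanity check.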
Restoration of chopped and nodded images by framelets
SIAM J. Sci. Comput.
Cited by 17 (12 self)
Abstract. In infrared astronomy, an observed image from a chop and nod process can be considered as the result of passing the original image through a high-pass filter. Here we propose a restoration algorithm which builds up a tight framelet system that has the high-pass filter as one of the framelet filters. Our approach reduces the solution of the restoration problem to that of recovering the missing coefficients of the original image in the tight framelet decomposition. The framelet approach provides a natural setting to apply various sophisticated framelet denoising schemes to remove the noise without reducing the intensity of major stars in the image. A proof of the convergence of the algorithm based on convex analysis is also provided. Simulated and real images are tested to illustrate the efficiency of our method over the projected Landweber method. Key words. Tight frame, chopped and nodded image, projected Landweber method, convex analysis. AMS subject classifications. 42C40, 65T60, 68U10, 94A08.
Nonlinear estimation for linear inverse problems with error in the operator
Annals of Statistics
Cited by 13 (1 self)
We study two nonlinear methods for statistical linear inverse problems when the operator is not known. The two constructions combine Galerkin regularization and wavelet thresholding. Their performances depend on the underlying structure of the operator, quantified by an index of sparsity. We prove their rate-optimality and adaptivity properties over Besov classes. The model is to recover f ∈ L²(D), where D is a domain in R^d, from noisy data g_ε = Kf + ε Ẇ.
Needlet Algorithms for Estimation in Inverse Problems
2007
Cited by 8 (4 self)
We provide a new algorithm for the treatment of inverse problems which combines the traditional SVD inversion with an appropriate thresholding technique in a well chosen new basis. Our goal is to devise an inversion procedure which has the advantages of the localization and multiscale analysis of wavelet representations without losing the stability and computability of SVD decompositions. To this end we utilize the construction of localized frames (termed “needlets”) built upon the SVD bases. We consider two different situations: the “wavelet” scenario, where the needlets are assumed to behave similarly to true wavelets, and the “Jacobi-type” scenario, where we assume that the properties of the frame truly depend on the SVD basis at hand (hence on the operator). To illustrate each situation, we apply the estimation algorithm to the deconvolution problem and to the Wicksell problem, respectively. In the latter case, where the SVD basis is a Jacobi polynomial basis, we show that our scheme achieves rates of convergence which are optimal in the L2 case, and we obtain rates of convergence for other Lp norms which are, to the best of our knowledge, new in the literature. We also give a simulation study showing that the NEEDD estimator outperforms other standard algorithms in almost all situations.
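The classical baseline that the needlet construction refines, SVD inversion followed by thresholding of the estimated coefficients, can be sketched as follows. This is not the needlet frame itself, and the names K, g, tau are illustrative assumptions:

```python
import numpy as np

def svd_threshold_estimator(K, g, tau):
    """Naive SVD inversion with hard thresholding: invert the operator
    on each singular direction, then keep only coefficients whose
    magnitude exceeds tau (directions dominated by noise are discarded)."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    coeffs = (U.T @ g) / s            # unbiased coefficient estimates
    coeffs[np.abs(coeffs) < tau] = 0  # hard threshold small coefficients
    return Vt.T @ coeffs
```

The instability of plain SVD inversion shows up in the division by the small singular values; thresholding suppresses the directions where that division only amplifies noise.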
Framelet based deconvolution
J. Comp. Math.
Cited by 5 (2 self)
Abstract. In this paper, two framelet based deconvolution algorithms are proposed. The basic idea of the framelet based approach is to convert the deconvolution problem to a problem of inpainting in the frame domain by constructing a framelet system with one of the masks being the given (discrete) convolution kernel via the unitary extension principle of [26], as introduced in [6, 7, 8, 9]. The first algorithm unifies our previous works in high resolution image reconstruction and infrared chopped and nodded image restoration, and the second one is a combination of our previous frame-based deconvolution algorithm and the iterative thresholding algorithm given by [14, 16]. The strong convergence of the algorithms in infinite dimensional settings is given by employing the proximal forward-backward splitting (PFBS) method. Consequently, this unifies the iterative algorithms in the infinite and finite dimensional settings and simplifies the proof of the convergence of the algorithms of [6].
Thresholding projection estimators in functional linear models
Cited by 4 (1 self)
We consider the problem of estimating the regression function in functional linear regression models by proposing a new type of projection estimators which combine dimension reduction and thresholding. The introduction of a threshold rule allows us to obtain consistency under broad assumptions as well as minimax rates of convergence under additional regularity hypotheses. We also consider the particular case of Sobolev spaces generated by the trigonometric basis, which permits us to easily obtain estimators of the derivatives of the regression function and to prove that they are minimax. Rates of convergence are given for some particular cases.
Regularization by Fractional Filter Methods and Data Smoothing
2007
Cited by 3 (0 self)
This paper is concerned with the regularization of linear ill-posed problems by a combination of data smoothing and fractional filter methods. For the data smoothing, a wavelet shrinkage denoising is applied to the noisy data with known error level δ. For the reconstruction, an approximation to the solution of the operator equation is computed from the data estimate by fractional filter methods. These fractional methods are based on the classical Tikhonov and Landweber methods but at least partially avoid the well-known drawback of oversmoothing. Convergence rates as well as numerical examples are presented.
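One common form of a fractional Tikhonov filter (stated here as an assumption; the paper's fractional filters may differ in detail) replaces the classical Tikhonov filter factors s²/(s² + α) by their γ-th power, which damps the large-coefficient directions less aggressively and thereby reduces oversmoothing:

```python
import numpy as np

def fractional_tikhonov(K, g, alpha, gamma=0.5):
    """Fractional Tikhonov reconstruction in the SVD basis.
    Filter factors (s^2 / (s^2 + alpha))**gamma applied to the
    inverted coefficients; gamma = 1 recovers classical Tikhonov,
    smaller gamma filters less strongly (less oversmoothing)."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    filt = (s**2 / (s**2 + alpha)) ** gamma   # fractional filter factors
    return Vt.T @ (filt * (U.T @ g) / s)
```

Because the filter factors are applied per singular direction, the only change relative to classical Tikhonov regularization is the exponent γ on each factor.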
A General Framework for SoftShrinkage with Applications to Blind Deconvolution and Wavelet
2007
Cited by 1 (1 self)
We consider the abstract problem of approximating a function ψ0 ∈ L^1(R^d) ∩ L^2(R^d) given only noisy data ψδ ∈ L^2(R^d). We recall that minimization of the corresponding Tikhonov functional leads to continuous soft-shrinkage and prove convergence results. If the noise-free data ψ0 belongs to the source space L^{1−u}(R^d) ∩ L^2(R^d) for some 0 < u < 1, we show convergence rates which are order-optimal. We consider a priori parameter choice rules as well as the discrepancy principle, which is shown to be order-optimal as well. We then introduce a framework combining soft-shrinkage with a linear invertible isometry and show that the results obtained for the abstract minimization problem can be transferred to applications such as blind deconvolution and wavelet denoising.
Compressive Algorithms. Adaptive Solutions of PDEs and Variational Problems
Abstract. This paper is concerned with an overview of the main concepts and a few significant applications of a class of adaptive iterative algorithms which allow for dimensionality reduction when used to solve large scale problems. We call this class of numerical methods Compressive Algorithms. The introduction of this paper presents a historical excursus on the development of the main ideas behind compressive algorithms and stresses the common features of diverse applications. The first part of the paper addresses the optimal performance of such algorithms when compared with known benchmarks in the numerical solution of elliptic partial differential equations. In the second part we address the solution of inverse problems with both sparsity and compressibility constraints. We stress how compressive algorithms can stem from variational principles. We illustrate the main results and applications by a few significant numerical examples. We conclude by pointing out future developments.