Group-sparse signal denoising: Nonconvex regularization, convex optimization (2014)

by P.-Y. Chen, I. W. Selesnick
Venue: IEEE Transactions on Signal Processing
Results 1 - 6 of 6

Sparse Signal Estimation by Maximally Sparse Convex Optimization

by Ivan W. Selesnick, Ilker Bayram - IEEE Transactions on Signal Processing, 2014
Abstract - Cited by 8 (4 self)
This paper addresses the problem of sparsity penalized least squares for applications in sparse signal processing, e.g. sparse deconvolution. This paper aims to induce sparsity more strongly than L1 norm regularization, while avoiding non-convex optimization. For this purpose, this paper describes the design and use of non-convex penalty functions (regularizers) constrained so as to ensure the convexity of the total cost function, F, to be minimized. The method is based on parametric penalty functions, the parameters of which are constrained to ensure convexity of F. It is shown that optimal parameters can be obtained by semidefinite programming (SDP). This maximally sparse convex (MSC) approach yields maximally non-convex sparsity-inducing penalty functions constrained such that the total cost function, F, is convex. It is demonstrated that iterative MSC (IMSC) can yield solutions substantially more sparse than the standard convex sparsity-inducing approach, i.e., L1 norm minimization.
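The abstract describes parametric non-convex penalties whose parameters are constrained so that the total cost function stays convex, with the parameters obtained by SDP; the SDP step is not reproduced here. As a minimal, self-contained sketch of the underlying idea in the scalar denoising case, the following Python snippet uses the logarithmic penalty as an assumed example and compares its convexity-preserving threshold function with standard soft thresholding:

```python
import numpy as np

def log_penalty_threshold(y, lam, a):
    """Elementwise minimizer of 0.5*(x - y)**2 + lam*phi(x; a), where
    phi(x; a) = (1/a) * log(1 + a*|x|) is a non-convex (log) penalty.

    Choosing 0 < a <= 1/lam keeps this scalar cost convex, which is the kind
    of convexity-constrained non-convex penalty the abstract describes
    (the paper's SDP-based parameter selection is not reproduced here).
    """
    assert 0 < a <= 1.0 / lam, "need 0 < a <= 1/lam for convexity"
    y = np.asarray(y, dtype=float)
    m = np.abs(y)
    x = (m - 1.0 / a) / 2 + np.sqrt(((m - 1.0 / a) / 2) ** 2
                                    + np.maximum(m - lam, 0.0) / a)
    return np.where(m <= lam, 0.0, np.sign(y) * x)

def soft_threshold(y, lam):
    """Standard soft threshold (the L1-norm counterpart), for comparison."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

y = np.array([-4.0, -1.5, 0.5, 2.0, 6.0])
lam = 1.0
print(log_penalty_threshold(y, lam, a=1.0 / lam))   # attenuates large values less
print(soft_threshold(y, lam))                       # subtracts lam from all survivors
```

With a = 1/λ (the largest convexity-preserving value for this penalty), large values are shrunk far less than under soft thresholding, illustrating how sparsity can be induced more strongly than by the L1 norm while keeping the cost convex.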

Convex 1-D total variation denoising with non-convex regularization

by Ivan W. Selesnick, Ankit Parekh - IEEE Signal Processing Letters, 2015
Abstract - Cited by 2 (1 self)
Abstract—Total variation (TV) denoising is an effective noise suppression method when the derivative of the underlying signal is known to be sparse. TV denoising is defined in terms of a convex optimization problem involving a quadratic data fidelity term and a convex regularization term. A non-convex regularizer can promote sparsity more strongly, but generally leads to a non-convex optimization problem with non-optimal local minima. This letter proposes the use of a non-convex regularizer constrained so that the total objective function to be minimized maintains its convexity. Conditions for a non-convex regularizer are given that ensure the total TV denoising objective function is convex. An efficient algorithm is given for the resulting problem.

Citation Context

...4], [17]. This approach has recently been considered in [22] where the convexity condition is cast as a semidefinite program (SDP), in [1] which considers a nonconvex extension of fused-lasso, and in [3] which addresses translation-invariant group-sparse denoising. II. PROBLEM FORMULATION Let y ∈ R^N be a piecewise constant signal observed in additive noise. Consider the objective function F : R^N → R, ...
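The snippet is cut off before the objective function is stated. As a hedged reconstruction of the kind of cost function the abstract above describes (quadratic data fidelity plus a parametric non-convex penalty applied to first differences; the letter's exact notation and constants may differ):

```latex
% Sketch only: D is the (N-1) x N first-difference matrix and \phi(\cdot\,;a)
% a parametric non-convex penalty satisfying \phi''(t;a) \ge -a.
F(x) = \frac{1}{2}\,\lVert y - x \rVert_2^2
     + \lambda \sum_{n=1}^{N-1} \phi\bigl([Dx]_n;\,a\bigr)
```

Because the first-difference matrix satisfies ||D^T D||_2 ≤ 4, a condition of the form 0 ≤ a ≤ 1/(4λ) is sufficient to keep the Hessian I + λ D^T diag(φ'') D positive semidefinite, so F stays convex even though φ itself is not; the exact condition stated in the letter should be taken from the paper.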

Recovery of Discontinuous Signals Using Group Sparse Higher Degree Total Variation

by Greg Ongie, Mathews Jacob
Abstract
Abstract—We introduce a family of novel regularization penalties to enable the recovery of discrete discontinuous piecewise polynomial signals from undersampled or degraded linear measurements. The penalties promote the group sparsity of the signal analyzed under an nth order derivative. We introduce an efficient alternating minimization algorithm to solve linear inverse problems regularized with the proposed penalties. Our experiments show that promoting group sparsity of derivatives enhances the compressed sensing recovery of discontinuous piecewise linear signals compared with an unstructured sparse prior. We also propose an extension to 2-D, which can be viewed as a group sparse version of higher degree total variation, and illustrate its effectiveness in denoising experiments.

Citation Context

...wed as group sparse versions of our recently introduced higher degree total variation (HDTV) penalties [5] and their generalizations [9]. Note that neither [5] nor [9] addresses the case of group sparse derivatives or non-convex penalties. Our experiments show the group sparse HDTV penalty improves over regular HDTV and other related higher order TV penalties in denoising natural images. A. Related work Many other researchers have proposed non-convex penalties for promoting group sparsity, showing substantial improvements over convex formulations, which motivates their use in this work. Notably, [10] investigates an overlapping group sparsity prior for denoising, and proposes using certain non-convex penalty functions such that the overall cost function is convex. While this approach has advantages over the fully non-convex formulation that we pursue, the theory in [10] is not easily extended to the general linear inverse problem setting we consider, nor to a general analysis prior. See also [11] for a non-convex group sparse analysis approach to compressive color imaging. This work also has similarities to [12] which introduces a group sparse version of 1-D TV to allow for the recovery of sm...
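To make the "group sparsity of derivatives" idea concrete, here is a minimal 1-D Python sketch of an overlapping group-sparse penalty on an nth-order finite difference. The group size K, the derivative order, and the toy piecewise-linear signal are assumptions chosen for illustration; the paper's HDTV penalties and its alternating minimization algorithm are more general and are not reproduced here.

```python
import numpy as np

def group_sparse_deriv_penalty(x, order=2, K=3):
    """Overlapping group-sparse penalty on the nth-order finite difference.

    Sums the l2 norms of every length-K sliding window of the difference
    signal, so the derivative is encouraged to vanish in groups rather than
    entry by entry (a group-sparse analogue of higher-order TV).
    """
    d = np.diff(x, n=order)
    return sum(np.linalg.norm(d[i:i + K]) for i in range(len(d) - K + 1))

# A piecewise-linear signal has second differences that are zero except in a
# small cluster at the kink, so its penalty is far smaller than a noisy copy's.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
clean = np.where(t < 0.5, t, 3.0 * t - 1.0)        # one slope change at t = 0.5
noisy = clean + 0.05 * rng.standard_normal(t.size)
print(group_sparse_deriv_penalty(clean), group_sparse_deriv_penalty(noisy))
```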

Artifact-free Wavelet Denoising: Non-convex Sparse Regularization, Convex Optimization

by Yin Ding, Ivan W. Selesnick
Abstract
Abstract—Algorithms for signal denoising that combine wavelet-domain sparsity and total variation (TV) regularization are relatively free of artifacts, such as pseudo-Gibbs oscillations, normally introduced by pure wavelet thresholding. This paper formulates wavelet-TV (WATV) denoising as a unified problem. To strongly induce wavelet sparsity, the proposed approach uses non-convex penalty functions. At the same time, in order to draw on the advantages of convex optimization (unique minimum, reliable algorithms, simplified regularization parameter selection), the non-convex penalties are chosen so as to ensure the convexity of the total objective function. A computationally efficient, fast converging algorithm is derived.
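A hedged sketch of the kind of unified WATV objective the abstract describes, written over the wavelet coefficients w; the synthesis operator Ψ, the per-subband parameters λ_j and a_j, and the TV weight β are notational assumptions here, and the paper should be consulted for the exact formulation and the convexity conditions on a_j:

```latex
% Sketch only: \Psi is the wavelet synthesis (inverse transform) operator,
% D the first-difference matrix, and \phi(\cdot\,;a) a parametric non-convex
% penalty with \phi''(t;a) \ge -a, restricted so that F remains convex.
F(w) = \frac{1}{2}\,\lVert y - \Psi w \rVert_2^2
     + \sum_{j,k} \lambda_j\,\phi\bigl(w_{j,k};\,a_j\bigr)
     + \beta\,\lVert D\,\Psi w \rVert_1
```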

Citation Context

...an [4] and Nikolova [27], [28], [31]. The use of semidefinite programming (SDP) to attain such a regularizer has been considered [35]. Recently, the concept has been applied to group-sparse denoising [12], TV denoising [36], and non-convex fused-lasso [3]. II. PROBLEM FORMULATION We consider the estimation of a signal x ∈ R^N observed in additive white Gaussian noise (AWGN), yn = xn + vn, n = 0, 1, ...

unknown title

by unknown authors
Abstract not found

Study of Computed Tomographic Image Reconstruction employing different Norms for Regularization

by Mahipal Singh Parmar, Vinith Rejathalal, V. K. Govindan
Abstract
Computed Tomography (CT) is one of the most widely used medical imaging techniques; it generates images of the internal structure of the body from projections. The input for reconstruction is a set of projections captured by the CT scanner from different angles, and a sufficient number of projections is required to compute a high-quality image. However, acquiring many projections requires a heavy x-ray dose, which is harmful to patients. It is therefore necessary to look for methods that provide images of acceptable quality from only a few projections, thereby reducing the harmful effect of a high x-ray dose. Among the many reconstruction algorithms available in the literature, p-norm based minimization is a popular approach, and the quality of the reconstructed image depends strongly on the value of p. In this paper, we study reconstruction performance for different values of p, reconstructing images under L0-norm, L1/2-norm, L1-norm and L2-norm regularization and comparing the reconstructed images on the same set of projection data. It is observed that L0-norm based reconstruction provides the sparsest image and L1/2-norm based reconstruction provides the most accurate image.

Citation Context

...zer reports these multiple minimum solutions as feasible solutions but still can't predict the optimum solution, and it is hard for the optimizer to give the optimum solution from these feasible solutions [5]. Figure 1: (a) A non-convex graph; it has local minima and sharp points at many places. (b) A convex graph; it is smooth and has only one minimum. If ...
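As a minimal sketch of the p-norm comparison described in the abstract, the snippet below solves a small regularized least-squares problem by iteratively reweighted least squares (IRLS) with an epsilon-smoothed |x|^p penalty. The toy system, the regularization weight lam, and the reweighting used as an L0 surrogate are assumptions for illustration only and do not reproduce the paper's CT reconstruction pipeline.

```python
import numpy as np

def irls_pnorm(A, b, p, lam=0.1, eps=1e-6, iters=50):
    """Sketch: approximately minimize 0.5*||A x - b||^2 + lam * sum |x_i|^p.

    Each iteration solves a reweighted ridge system with weights
    w_i = p * (x_i**2 + eps)**(p/2 - 1) (epsilon-smoothed); for p = 0 the
    common surrogate w_i = 1 / (x_i**2 + eps) is used instead.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # least-squares initialization
    for _ in range(iters):
        w = 1.0 / (x**2 + eps) if p == 0 else p * (x**2 + eps) ** (p / 2 - 1)
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    return x

# Toy comparison: underdetermined system with a 5-sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, size=5, replace=False)] = rng.standard_normal(5)
b = A @ x_true
for p in (0.0, 0.5, 1.0, 2.0):
    x_hat = irls_pnorm(A, b, p)
    print(f"p={p}: error={np.linalg.norm(x_hat - x_true):.3f}, "
          f"nonzeros={(np.abs(x_hat) > 1e-3).sum()}")
```

On toy problems like this, p < 1 typically recovers the sparse support more accurately than p = 1 or p = 2, which mirrors the trend the abstract reports for L1/2 versus L1 and L2.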
