Results 1–10 of 17
Iterative hard thresholding for compressed sensing
 Appl. Comp. Harm. Anal.
, 2009
Abstract

Cited by 324 (18 self)
Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near-optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper):
• It gives near-optimal error guarantees.
• It is robust to observation noise.
• It succeeds with a minimum number of observations.
• It can be used with any sampling operator for which the operator and its adjoint can be computed.
• The memory requirement is linear in the problem size.
• Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint.
• It requires a fixed number of iterations depending only on the logarithm of a form of signal-to-noise ratio of the signal.
• Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.
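The algorithm the abstract analyses is short enough to sketch directly. The following is a minimal illustration, not the paper's reference implementation; the problem sizes and the rescaling of the operator so its spectral norm is below one (the regime in which the residual is guaranteed non-increasing) are choices made here for the demo:

```python
import numpy as np

def iht(y, Phi, k, n_iters=100):
    """Iterative hard thresholding: estimate a k-sparse x from y ~= Phi @ x."""
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iters):
        # gradient step -- uses only the operator and its adjoint
        x = x + Phi.T @ (y - Phi @ x)
        # hard threshold -- keep only the k largest-magnitude entries
        small = np.argsort(np.abs(x))[:-k]
        x[small] = 0.0
    return x

rng = np.random.default_rng(0)
m, n, k = 40, 100, 3
Phi = rng.standard_normal((m, n))
Phi /= 1.01 * np.linalg.norm(Phi, 2)       # rescale: spectral norm < 1
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = Phi @ x_true
x_hat = iht(y, Phi, k)
```

Note how the sketch reflects the listed properties: memory is linear in the problem size (one vector of length n), and each iteration costs one application of the operator and one of its adjoint.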
A simple, efficient and near optimal algorithm for compressed sensing
 in Proceedings of the Int. Conf. on Acoustics, Speech and Signal Processing
, 2009
Abstract

Cited by 8 (4 self)
When sampling signals below the Nyquist rate, efficient and accurate reconstruction is nevertheless possible whenever the sampling system is well behaved and the signal is well approximated by a sparse vector. This statement has been formalised in the recently developed theory of compressed sensing, which establishes conditions on the sampling system and proves performance guarantees for several efficient signal reconstruction algorithms under these conditions. In this paper, we prove that a very simple and efficient algorithm, known as Iterative Hard Thresholding, has near-optimal performance guarantees rivalling those derived for other state-of-the-art approaches.
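Because Iterative Hard Thresholding touches the sampling system only through the operator and its adjoint, it can be run matrix-free. The sketch below uses a subsampled, unitary-normalised FFT as the operator; the helper names `A`/`At` and the problem sizes are illustrative assumptions, not from the paper, and the adjoint pairing is checked numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 128, 32
rows = np.sort(rng.choice(n, size=m, replace=False))   # kept frequencies

def A(x):
    # subsampled, unitary-normalised DFT: keep m of the n frequencies
    return np.fft.fft(x)[rows] / np.sqrt(n)

def At(z):
    # adjoint: zero-fill the kept frequencies, then inverse DFT
    z_full = np.zeros(n, dtype=complex)
    z_full[rows] = z
    return np.sqrt(n) * np.fft.ifft(z_full)

def iht_matrix_free(y, A, At, n, k, n_iters=50):
    x = np.zeros(n, dtype=complex)
    for _ in range(n_iters):
        x = x + At(y - A(x))
        small = np.argsort(np.abs(x))[:-k]
        x[small] = 0.0
    return x

# adjoint check: <A x, z> should equal <x, At z>
x = rng.standard_normal(n)
z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
lhs = np.vdot(A(x), z)
rhs = np.vdot(x, At(z))

x_true = np.zeros(n)
x_true[[3, 40, 77, 100]] = [1.0, -2.0, 1.5, 0.5]
x_hat = iht_matrix_free(A(x_true), A, At, n, k=4)
```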
S.: Image Coding with Iterated Contourlet and Wavelet Transforms
 Proc. IEEE International Conf. on Image Processing, Singapore
, 2004
Abstract

Cited by 7 (0 self)
This paper presents a new coding technique based on a mixed contourlet and wavelet transform. The redundancy of the transform is controlled by using the contourlet at fine scales and by switching to a separable wavelet transform at coarse scales. The transform is then optimized through an iterative projection process in the transform domain in order to minimize the quantization error in the image domain. Gains of up to 0.5 dB and 1 dB over contourlet-based and wavelet-based coding, respectively, have been observed for images with directional features.
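The iterative projection idea (requantize coefficients of a redundant transform so that the error in the image domain, not the coefficient domain, is minimized) can be sketched with a toy tight frame; the frame below (identity stacked with a random orthonormal basis) and the uniform scalar quantizer stand in for the paper's contourlet/wavelet construction and are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 32
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
S = np.hstack([np.eye(n), U]) / np.sqrt(2.0)   # tight frame: S @ S.T == I

step = 0.25
def quantize(c):
    # uniform scalar quantizer in the transform domain
    return step * np.round(c / step)

x = rng.standard_normal(n)          # the "image" to be coded
c = quantize(S.T @ x)               # initial redundant coefficients
for _ in range(20):
    # project back toward the image, then requantize in the transform domain
    c = quantize(c + S.T @ (x - S @ c))

err = np.linalg.norm(x - S @ c)     # image-domain quantization error
```

Each pass reconstructs, measures the image-domain residual, and folds it back into the coefficients before requantizing, so the quantized representation is chosen for image-domain fidelity rather than coefficient-domain fidelity.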
HOW TO USE THE ITERATIVE HARD THRESHOLDING ALGORITHM
Abstract

Cited by 5 (2 self)
Several computationally efficient algorithms have been shown to offer near-optimal recovery of sparse signals from a small number of linear measurements. However, whilst many of the methods have similar guarantees whenever the measurements satisfy the so-called restricted isometry property, the empirical performance of the methods can vary significantly in regimes in which this condition is not satisfied. We here modify the Iterative Hard Thresholding algorithm by including an automatic step-size calculation. This makes the method independent of an arbitrary scaling of the measurement system and leads to a method with state-of-the-art empirical performance. What is more, the theoretical guarantees derived for the unmodified algorithm carry over to the new method with only minor changes.
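The automatic step size can be sketched as follows: pick the step that exactly minimises the residual along the gradient restricted to the current support. This is a simplified sketch (the published method additionally safeguards the step when the support changes between iterations, which is omitted here), and the demo at the end illustrates the scaling independence claimed in the abstract:

```python
import numpy as np

def niht(y, Phi, k, n_iters=30):
    """IHT with automatic step size (simplified: no support-change safeguard)."""
    n = Phi.shape[1]
    x = np.zeros(n)
    support = np.argsort(np.abs(Phi.T @ y))[-k:]   # initial support guess
    for _ in range(n_iters):
        g = Phi.T @ (y - Phi @ x)
        g_s = np.zeros(n)
        g_s[support] = g[support]
        denom = np.linalg.norm(Phi @ g_s) ** 2
        # exact line-search step on the current support
        mu = np.linalg.norm(g_s) ** 2 / denom if denom > 0 else 1.0
        x = x + mu * g
        small = np.argsort(np.abs(x))[:n - k]
        x[small] = 0.0
        support = np.flatnonzero(x)
    return x

rng = np.random.default_rng(3)
m, n, k = 50, 100, 2
Phi = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = Phi @ x_true
x1 = niht(y, Phi, k)
x2 = niht(4.0 * y, 4.0 * Phi, k)   # rescaled measurement system, same problem
```

Scaling the measurement system by a constant scales the gradient and shrinks the automatic step by the same factor squared, so the iterates (and hence `x1` and `x2`) coincide.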
Embedded polar quantization
 IEEE Signal Processing Letters
, 2007
Abstract

Cited by 4 (0 self)
Embedded polar quantization can be useful for progressive transmission of circularly symmetric data, e.g., for fine-grain scalable coding of parametric audio. Sets of constrained-resolution embedded quantizers are built recursively by successive refinement processes, which are detailed for strict polar quantization and unrestricted polar quantization. The quadratic error minimization problem is solved using equations similar to those of Max, and the refinement algorithm can, in the unrestricted case, be simplified using a high-rate approximation. For Gaussian data, comparisons with reference non-embedded quantizers show that the embedding property comes at an often negligible cost in terms of rate-distortion performance. Index Terms—Embedded quantization, quantizer design, scalable audio coding.
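The embedding (successive-refinement) property can be shown on the phase component alone. The paper optimizes the quantizer levels via Max-like equations; the sketch below instead uses a plain uniform phase quantizer, which makes the nesting explicit: the coarse cell index is a bit-prefix of the fine cell index, so the coarse bitstream is literally a prefix of the refined one:

```python
import numpy as np

def phase_index(theta, bits):
    """Uniform phase quantizer on [0, 2*pi) with 2**bits cells."""
    n_cells = 2 ** bits
    return int((theta % (2 * np.pi)) / (2 * np.pi) * n_cells) % n_cells

def phase_decode(idx, bits):
    n_cells = 2 ** bits
    return (idx + 0.5) * 2 * np.pi / n_cells   # cell midpoint

rng = np.random.default_rng(4)
thetas = rng.uniform(0, 2 * np.pi, size=1000)
# embedding property: dropping the last bit of the fine index gives the coarse index
nested = all(phase_index(t, 3) == phase_index(t, 4) >> 1 for t in thetas)
```

Refinement halves each cell, so every extra bit halves the worst-case phase error (at most half a cell width).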
Coding overcomplete representations of audio using the MCLT
 in Proceedings of Data Compression Conference
, 2008
Abstract

Cited by 1 (0 self)
We propose a system for audio coding using the modulated complex lapped transform (MCLT). In general, it is difficult to encode signals using overcomplete representations without incurring a penalty in rate-distortion performance. We show that the penalty can be significantly reduced for MCLT-based representations, without the need for iterative methods of sparsity reduction. We achieve this via a magnitude-phase polar quantization and the use of magnitude and phase prediction. Compared to systems based on quantization of orthogonal representations such as the modulated lapped transform (MLT), the new system allows for reduced warbling artifacts and more precise computation of frequency-domain auditory masking functions.
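The abstract does not specify the predictors, so as a generic illustration of predictive quantization of a magnitude track, here is a closed-loop DPCM sketch: the previous *decoded* magnitude predicts the next one, and only the prediction residual is quantized, so quantization errors never accumulate:

```python
def dpcm_encode(values, step):
    """Closed-loop DPCM: quantize the difference from the previous
    *decoded* value, mirroring the decoder inside the encoder."""
    indices, prev = [], 0.0
    for v in values:
        q = round((v - prev) / step)
        indices.append(q)
        prev += q * step          # decoder-side reconstruction
    return indices

def dpcm_decode(indices, step):
    out, prev = [], 0.0
    for q in indices:
        prev += q * step
        out.append(prev)
    return out

mags = [0.0, 0.4, 0.9, 1.1, 1.0, 0.7]   # e.g. one bin's magnitude over frames
idx = dpcm_encode(mags, step=0.1)
rec = dpcm_decode(idx, step=0.1)
```

Because the encoder tracks the decoder's state, each reconstructed magnitude is within half a quantizer step of the original regardless of sequence length.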
Analysis of Multi Resolution Image Denoising Scheme using Fractal Transform
Abstract
For communication and storage efficiency, image data should be substantially compressed. The compression ratio is limited by noise, which degrades the correlation between pixels. Noise can occur during image capture, transmission or processing, and may be dependent on or independent of image content. This work proposes a novel algorithm which reduces noise in colour images. Simulation results show that 86% efficiency has been achieved when considering 415 pixels.
Sparse Inverse Problem
Abstract
When sampling signals below the Nyquist rate, efficient and accurate reconstruction is nevertheless possible whenever the sampling system is well behaved and the signal is well approximated by a sparse vector. This statement has been formalised in the recently developed theory of compressed sensing, which establishes conditions on the sampling system and proves performance guarantees for several efficient signal reconstruction algorithms under these conditions. In this paper, we prove that a very simple and efficient algorithm, known as Iterative Hard Thresholding, has near-optimal performance guarantees rivalling those derived for other state-of-the-art approaches.
Nonconvexly constrained linear inverse problems
Abstract
This paper considers the inversion of ill-posed linear operators. To regularise the problem, the solution is constrained to lie in a nonconvex subset. Theoretical properties for the stable inversion are derived and an iterative algorithm akin to the projected Landweber algorithm is studied. This work extends recent progress made on the efficient inversion of finite-dimensional linear systems under a sparsity constraint to the Hilbert space setting and to more general nonconvex constraints.
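A finite-dimensional sketch of the projected Landweber iteration: a gradient step on the data misfit followed by a nearest-point projection onto the constraint set, which may be nonconvex. The sparsity constraint below (projection = hard thresholding) is one instance of such a set, recovering iterative hard thresholding as a special case; the step size, problem sizes, and function names are choices made for this illustration:

```python
import numpy as np

def projected_landweber(y, A, project, n_iters=200):
    """Projected Landweber: gradient step on ||y - A x||^2, then projection."""
    mu = 0.99 / np.linalg.norm(A, 2) ** 2   # step size small enough for stability
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = project(x + mu * A.T @ (y - A @ x))
    return x

def sparsity_projection(k):
    # nearest point in the (nonconvex) set of k-sparse vectors
    def P(v):
        out = np.zeros_like(v)
        keep = np.argsort(np.abs(v))[-k:]
        out[keep] = v[keep]
        return out
    return P

rng = np.random.default_rng(5)
A = rng.standard_normal((30, 80))
x_true = sparsity_projection(3)(rng.standard_normal(80))
y = A @ x_true
x_hat = projected_landweber(y, A, sparsity_projection(3))
```

With the step size below the reciprocal of the squared operator norm, the data misfit is non-increasing even though the projection is nonconvex, which is the finite-dimensional shadow of the stability results the paper develops.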