## Compressed sensing with quantized measurements (2010)

Citations: 19 (0 self)

### BibTeX

@MISC{Zymnis10compressedsensing,
  author = {Argyrios Zymnis and Stephen Boyd and Emmanuel J. Candès},
  title = {Compressed sensing with quantized measurements},
  year = {2010}
}

### Abstract

We consider the problem of estimating a sparse signal from a set of quantized, Gaussian-noise-corrupted measurements, where each measurement corresponds to an interval of values. We give two methods for (approximately) solving this problem, each based on minimizing a differentiable convex function plus an ℓ1 regularization term. Using a first order method developed by Hale et al., we demonstrate the performance of the methods through numerical simulation. We find that, using these methods, compressed sensing can be carried out even when the quantization is very coarse, e.g., 1 or 2 bits per measurement.
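The recovery setup the abstract describes can be sketched as ℓ1-regularized least squares applied to dequantized (bin-midpoint) measurements. This is a minimal sketch, not the paper's exact method: it uses a generic iterative soft-thresholding (ISTA) loop, and the problem sizes, quantizer bin width, and regularization weight are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, step, iters=500):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft thresholding."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * (A.T @ (A @ x - y)), step * lam)
    return x

rng = np.random.default_rng(0)
m, n, k = 120, 256, 5                      # measurements, dimension, sparsity (assumed)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

delta = 0.5                                # coarse quantizer bin width (assumed)
# Replace each quantized measurement by the midpoint of its bin:
y_quant = delta * (np.floor(A @ x_true / delta) + 0.5)

step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the quadratic part
x_hat = ista(A, y_quant, lam=0.05, step=step)
```

The paper's two methods differ from this sketch in the smooth term: one uses the interval-based (maximum-likelihood) penalty and the other the midpoint least-squares approximation, but both are of the form "differentiable convex function plus ℓ1 regularization" shown here.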

### Citations

3701 | Convex Optimization
- Boyd, Vandenberghe
- 2004
Citation Context: ...od as an approximation of the true maximum-likelihood penalty function. IV. A FIRST ORDER METHOD Problems of the form (1) can be solved using a variety of algorithms, including interior point methods [18], [20], projected gradient methods [21], Bregman iterative regularization algorithms...

1858 | Regression shrinkage and selection via the Lasso
- Tibshirani
- 1996
Citation Context: ...sensing [1]–[4]. The earliest documented use of ℓ1-based signal recovery is in deconvolution of seismic data [5], [6]. In statistics, the idea of ℓ1 regularization is used in the well known Lasso algorithm [7] for feature selection. Other uses of ℓ1-based methods include total variation denoising in image processing [8], [9], circuit design [10], [11], sparse portfolio optimization [12], and trend filtering [...

1745 | Compressed sensing
- Donoho
Citation Context: ...measurements, known as compressed (or compressive) sensing [1]–[4]. The earliest documented use of ℓ1-based signal recovery is in deconvolution of seismic data [5], [6]. In statistics, the idea of ℓ1 regularization is used in the well known Lasso algorithm [7] for feature...

1392 | Nonlinear total variation based noise removal algorithms
- Rudin, Osher, et al.
- 1992
Citation Context: ..., [6]. In statistics, the idea of ℓ1 regularization is used in the well known Lasso algorithm [7] for feature selection. Other uses of ℓ1-based methods include total variation denoising in image processing [8], [9], circuit design [10], [11], sparse portfolio optimization [12], and trend filtering [13]. Several recent papers address the problem of quantized compressed sensing. In [14], the authors consider...

1318 | Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information
- Candès, Romberg, et al.
- 2006

750 | Stable signal recovery from incomplete and inaccurate measurements
- Candès, Romberg, et al.
Citation Context: ...measurements, known as compressed (or compressive) sensing [1]–[4]. The earliest documented use of ℓ1-based signal recovery is in deconvolution of seismic data [5], [6]. In statistics, the idea of ℓ1 regularization is used in the well known Lasso algorithm [7] for fea...

750 | Least angle regression
- Efron, Hastie, et al.
- 2004
Citation Context: ...algorithms [22], [23], homotopy methods [24], [25], and a first order method based on Nesterov’s work [26]. Some of these methods use a homotopy or continuation algorithm, and so efficiently compute a good approximation of the regularization path, i....

302 | Just relax: Convex programming methods for identifying sparse signals
- Tropp

294 | Gradient Projection for Sparse Reconstruction: Application to Compressed Sensing and Other Inverse Problems
- Figueiredo, Nowak, et al.
- 2007
Citation Context: ...mum-likelihood penalty function. IV. A FIRST ORDER METHOD Problems of the form (1) can be solved using a variety of algorithms, including interior point methods [18], [20], projected gradient methods [21], Bregman iterative regularization algorithms...

148 | The entire regularization path for the support vector machine
- Hastie, Rosset, et al.
Citation Context: ...algorithms [22], [23], homotopy methods [24], [25], and a first order method based on Nesterov’s work [26]. Some of these methods use a homotopy or continuation algorithm, and so efficiently compute a good approximation of the regularization path...

111 | Color TV: total variation methods for restoration of vector-valued images
- Blomgren, Chan
- 1998
Citation Context: ...In statistics, the idea of ℓ1 regularization is used in the well known Lasso algorithm [7] for feature selection. Other uses of ℓ1-based methods include total variation denoising in image processing [8], [9], circuit design [10], [11], sparse portfolio optimization [12], and trend filtering [13]. Several recent papers address the problem of quantized compressed sensing. In [14], the authors consider the ...

92 | An Iterative Regularization Method for Total Variation-Based Image Restoration (Multiscale Modeling and Simulation)
- Osher, Burger, et al.
Citation Context: ...algorithms [22], [23], homotopy methods [24], [25], and a first order method based on Nesterov’s work [26]. Some of these methods use a homotopy or continuation algorithm, and so efficiently compute a good approximation...

74 | NESTA: a fast and accurate first-order method for sparse recovery
- Becker, Bobin, et al.
Citation Context: ...algorithms [22], [23], homotopy methods [24], [25], and a first order method based on Nesterov’s work [26]. Some of these methods use a homotopy or continuation algorithm, and so efficiently compute a good approximation of the regularization path, i.e., the solution of problem (1) as the regularization parameter varies. We describe...

64 | Robust modeling with erratic data
- Claerbout, Muir
- 1973
Citation Context: ...measurements, known as compressed (or compressive) sensing [1]–[4]. The earliest documented use of ℓ1-based signal recovery is in deconvolution of seismic data [5], [6]. In statistics, the idea of ℓ1 regularization is used in the well known Lasso algorithm [7] for feature selection. Other uses of ℓ1-based methods include total variation denoising in image processing ...

63 | Applications of a splitting algorithm to decomposition in convex programming and variational inequalities
- Tseng
- 1991
Citation Context: ...e solution of problem (1) as the regularization parameter varies. We describe here a simple first order method due to Hale et al. [27], which is a special case of a forward-backward splitting algorithm for solving convex problems [28], [29]. We start from the optimality conditions for (1). Using subdifferential calculus, we obtain the following necessary and sufficient conditions for a point to be optimal for (1). These optimality conditi...

48 | A Modified Forward-Backward Splitting Method for Maximal Monotone Mappings
- Tseng
Citation Context: ...tion of problem (1) as the regularization parameter varies. We describe here a simple first order method due to Hale et al. [27], which is a special case of a forward-backward splitting algorithm for solving convex problems [28], [29]. We start from the optimality conditions for (1). Using subdifferential calculus, we obtain the following necessary and sufficient conditions for a point to be optimal for (1). These optimality conditions te...

46 | Portfolio optimization with linear and fixed transaction costs
- Lobo, Fazel, et al.
- 2007
Citation Context: ...l known Lasso algorithm [7] for feature selection. Other uses of ℓ1-based methods include total variation denoising in image processing [8], [9], circuit design [10], [11], sparse portfolio optimization [12], and trend filtering [13]. Several recent papers address the problem of quantized compressed sensing. In [14], the authors consider the extreme case of sign (i.e., 1-bit) measurements, and propose an...

33 | 1-bit compressive sensing
- Boufounos, Baraniuk
Citation Context: ...ng in image processing [8], [9], circuit design [10], [11], sparse portfolio optimization [12], and trend filtering [13]. Several recent papers address the problem of quantized compressed sensing. In [14], the authors consider the extreme case of sign (i.e., 1-bit) measurements, and propose an algorithm based on minimizing an ℓ1-regularized one-sided quadratic function. Quantized compressed sensing, whe...

28 | Optimal wire and transistor sizing for circuits with non-tree topology
- Vandenberghe, Boyd, et al.
- 1997
Citation Context: ...idea of ℓ1 regularization is used in the well known Lasso algorithm [7] for feature selection. Other uses of ℓ1-based methods include total variation denoising in image processing [8], [9], circuit design [10], [11], sparse portfolio optimization [12], and trend filtering [13]. Several recent papers address the problem of quantized compressed sensing. In [14], the authors consider the extreme case of sign ...

27 | Dequantizing compressed sensing: When oversampling and non-Gaussian constraints combine
- Jacques, Hammond, et al.
- 2011
Citation Context: ...surements, and propose an algorithm based on minimizing an ℓ1-regularized one-sided quadratic function. Quantized compressed sensing, where quantization effects dominate noise effects, is considered in [15]; the authors propose a variant of basis pursuit denoising, based on using an ℓp norm rather than an ℓ2 norm, and prove that the algorithm performance improves with larger p. In [16], an adaptation of basis ...

21 | ℓ1 trend filtering
- Kim, Koh, et al.
Citation Context: ...] for feature selection. Other uses of ℓ1-based methods include total variation denoising in image processing [8], [9], circuit design [10], [11], sparse portfolio optimization [12], and trend filtering [13]. Several recent papers address the problem of quantized compressed sensing. In [14], the authors consider the extreme case of sign (i.e., 1-bit) measurements, and propose an algorithm based on minimi...

21 | Distortion-rate functions for quantized compressive sensing
- Dai, Pham, et al.
- 2009
Citation Context: ...ffects, is considered in [15]; the authors propose a variant of basis pursuit denoising, based on using an ℓp norm rather than an ℓ2 norm, and prove that the algorithm performance improves with larger p. In [16], an adaptation of basis pursuit denoising and subspace sampling is proposed for dealing with quantized measurements. In all of this work, the focus is on the effect of quantization; in this paper, we...

18 | A study of rough amplitude quantization by means of Nyquist sampling theory
- Widrow
- 1956
Citation Context: ...ch quantization interval, and assume that the real value is the unquantized, but noise-corrupted measurement. For the case of a uniform (assumed) distribution on the interval, the variance is one-twelfth of the squared interval width; see, e.g., [19]. Now we take the approximation one step further, and pretend that the error is Gaussian. Under this approximation, we can use least-squares to estimate the signal, by minimizing the (convex quadrati...

15 | Optimizing dominant time constant in RC circuits
- Vandenberghe, Boyd, et al.
- 1996
Citation Context: ...of ℓ1 regularization is used in the well known Lasso algorithm [7] for feature selection. Other uses of ℓ1-based methods include total variation denoising in image processing [8], [9], circuit design [10], [11], sparse portfolio optimization [12], and trend filtering [13]. Several recent papers address the problem of quantized compressed sensing. In [14], the authors consider the extreme case of sign (i.e.,...

12 | Relaxed maximum a posteriori fault identification
- Zymnis, Boyd, et al.
Citation Context: ...e can have an infinite lower or upper limit when the quantization interval is unbounded. Thus, our measurements tell us that each noise-corrupted value lies between the lower and upper limits for the observed codeword. This model is very similar to the one used in [17] for quantized measurements in the context of fault estimation.

7 | Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing
- Yin, Osher, et al.
- 2008
Citation Context: ...algorithms [22], [23], homotopy methods [24], [25], and a first order method based on Nesterov’s work [26]. Some of these methods use a homotopy or continuation algorithm, and so efficiently compute a good approximation of...

3 | A method for large-scale ℓ1-regularized least-squares problems with applications in signal processing and statistics
- Kim, Koh, et al.
- 2007, submitted for publication
Citation Context: ...least-squares to estimate the signal, by minimizing a (convex quadratic) function. To obtain a sparse estimate, we add ℓ1 regularization. This problem is the same as the one considered in [20]. C. Penalty Comparison: Fig. 1 shows a comparison of the two different penalty functions used in our two methods, for a single measurement. We assume that the distribution of the unqua...

3 | Fixed-point continuation for ℓ1-minimization: Methodology and convergence
- Hale, Yin, et al.
- 2008
Citation Context: ...tion algorithm, and so efficiently compute a good approximation of the regularization path, i.e., the solution of problem (1) as the regularization parameter varies. We describe here a simple first order method due to Hale et al. [27], which is a special case of a forward-backward splitting algorithm for solving convex problems [28], [29]. We start from the optimality conditions for (1). Using subdifferential calculus, we obtain t...

2 | Deconvolution with the ℓ1 norm
- Taylor, Banks, et al.
- 1979
Citation Context: ...measurements, known as compressed (or compressive) sensing [1]–[4]. The earliest documented use of ℓ1-based signal recovery is in deconvolution of seismic data [5], [6]. In statistics, the idea of ℓ1 regularization is used in the well known Lasso algorithm [7] for feature selection. Other uses of ℓ1-based methods include total variation denoising in image processing ...