Results 1–10 of 16
Probing the Pareto frontier for basis pursuit solutions
, 2008
"... The basis pursuit problem seeks a minimum onenorm solution of an underdetermined leastsquares problem. Basis pursuit denoise (BPDN) fits the leastsquares problem only approximately, and a single parameter determines a curve that traces the optimal tradeoff between the leastsquares fit and the ..."
Abstract

Cited by 157 (2 self)
The basis pursuit problem seeks a minimum one-norm solution of an underdetermined least-squares problem. Basis pursuit denoise (BPDN) fits the least-squares problem only approximately, and a single parameter determines a curve that traces the optimal trade-off between the least-squares fit and the one-norm of the solution. We prove that this curve is convex and continuously differentiable over all points of interest, and show that it gives an explicit relationship to two other optimization problems closely related to BPDN. We describe a root-finding algorithm for finding arbitrary points on this curve; the algorithm is suitable for problems that are large scale and for those that are in the complex domain. At each iteration, a spectral gradient-projection method approximately minimizes a least-squares problem with an explicit one-norm constraint. Only matrix-vector operations are required. The primal-dual solution of this problem gives the function and derivative information needed for the root-finding method. Numerical experiments on a comprehensive set of test problems demonstrate that the method scales well to large problems.
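The two-level structure described in the abstract can be sketched in a few dozen lines of numpy. This is not the authors' SPGL1 code, only an illustration under simplifying assumptions: the inner solve uses plain projected gradient rather than a spectral step, and the outer Newton update uses the curve derivative phi'(tau) = -||A^T r||_inf / ||r||_2 obtained from the dual solution.

```python
import numpy as np

def project_l1(v, tau):
    # Euclidean projection onto the l1-ball of radius tau
    if tau <= 0:
        return np.zeros_like(v)
    if np.abs(v).sum() <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - tau) / np.arange(1, len(u) + 1) > 0)[0][-1]
    theta = (css[rho] - tau) / (rho + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def lasso_tau(A, b, tau, iters=500):
    # inner solve: projected gradient for  min ||Ax - b||_2  s.t.  ||x||_1 <= tau
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = project_l1(x + A.T @ (b - A @ x) / L, tau)
    return x

def pareto_root(A, b, sigma, newton_iters=20):
    # outer solve: Newton root-finding on  phi(tau) = ||A x_tau - b||_2 = sigma,
    # with the derivative  phi'(tau) = -||A^T r||_inf / ||r||_2
    tau = 0.0
    x = np.zeros(A.shape[1])
    for _ in range(newton_iters):
        x = lasso_tau(A, b, tau)
        r = b - A @ x
        phi = np.linalg.norm(r)
        if phi <= sigma * (1 + 1e-3):
            break
        dphi = -np.linalg.norm(A.T @ r, np.inf) / phi
        tau -= (phi - sigma) / dphi
    return x
```

Only matrix-vector products with A and A^T appear inside the loops, which is what makes this style of solver attractive at large scale.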
Sensing by Random Convolution
 IEEE Int. Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)
, 2007
"... Abstract. This paper outlines a new framework for compressive sensing: convolution with a random waveform followed by random time domain subsampling. We show that sensing by random convolution is a universally efficient data acquisition strategy in that an ndimensional signal which is S sparse in a ..."
Abstract

Cited by 65 (4 self)
This paper outlines a new framework for compressive sensing: convolution with a random waveform followed by random time-domain subsampling. We show that sensing by random convolution is a universally efficient data acquisition strategy, in that an n-dimensional signal which is S-sparse in any fixed representation can be recovered from m ≳ S log n measurements. We discuss two imaging scenarios — radar and Fourier optics — where convolution with a random pulse allows us to seemingly super-resolve fine-scale features, allowing us to recover high-resolution signals from low-resolution measurements. The new field of compressive sensing (CS) has given us a fresh look at data acquisition, one of the fundamental tasks in signal processing. The message of this theory can be summarized succinctly [7, 8, 10, 15, 32]: the number of measurements we need to reconstruct a signal depends on its sparsity rather than its bandwidth. These measurements, however, are different from the samples that ...
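The acquisition model from the abstract, circular convolution with a random pulse followed by random time-domain subsampling, takes only a few lines of numpy. The dimensions and the ±1 pulse below are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, S = 256, 64, 5                       # illustrative sizes: signal, measurements, sparsity

h = rng.choice([-1.0, 1.0], size=n)        # random waveform (pulse)
x = np.zeros(n)
x[rng.choice(n, S, replace=False)] = rng.standard_normal(S)   # S-sparse signal

# circular convolution h * x, computed in the Fourier domain
conv = np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)).real

keep = np.sort(rng.choice(n, m, replace=False))   # random time-domain subsampling
y = conv[keep]                                    # m compressive measurements
```

Recovery would then proceed by any sparse solver (e.g. basis pursuit) applied to this measurement operator.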
Information-Theoretically Optimal Compressed Sensing via Spatial Coupling and Approximate Message Passing
, 1112
"... We study the compressed sensing reconstruction problem for a broad class of random, banddiagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by Krzakala et al. [KMS+ 11], message passing algorithms ca ..."
Abstract

Cited by 8 (3 self)
We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by Krzakala et al. [KMS+ 11], message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of nonzero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate δ exceeds the (upper) Rényi information dimension of the signal, d(pX). More precisely, for a sequence of signals of diverging dimension n whose empirical distribution converges to pX, reconstruction is with high probability successful from d(pX) n + o(n) measurements taken according to a band-diagonal matrix. For sparse signals, i.e., sequences of dimension n with k(n) nonzero entries, this implies reconstruction from k(n) + o(n) measurements. For 'discrete' signals, i.e., signals whose coordinates take values in a fixed finite set, this implies reconstruction from o(n) measurements. The result ...
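A bare-bones AMP iteration with soft thresholding is easy to sketch. The version below assumes a plain i.i.d. Gaussian matrix rather than the spatially coupled, band-diagonal construction the paper analyzes, and the threshold rule (a multiple of the estimated noise level) is a common heuristic, not the paper's tuning:

```python
import numpy as np

def amp_recover(A, y, iters=50, alpha=2.0):
    # approximate message passing with soft thresholding, for an i.i.d. matrix A
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        theta = alpha * np.linalg.norm(z) / np.sqrt(m)   # heuristic threshold level
        r = x + A.T @ z                                  # effective (denoising) observation
        x = np.sign(r) * np.maximum(np.abs(r) - theta, 0.0)
        # residual with the Onsager correction term (z on the right is the previous z)
        z = y - A @ x + (z / m) * np.count_nonzero(x)
    return x
```

The Onsager term `(z / m) * count_nonzero(x)` is what distinguishes AMP from plain iterative thresholding; it is what makes the state evolution analysis mentioned in the abstract exact in the large-system limit.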
Sparse representations, compressive sensing and dictionaries for pattern recognition
 in Asian Conference on Pattern Recognition (ACPR)
, 2011
"... Abstract—In recent years, the theories of Compressive Sensing ..."
Abstract

Cited by 6 (5 self)
In recent years, the theories of Compressive Sensing ...
Thresholded basis pursuit: Support recovery for sparse and approximately sparse signals, arXiv:0809.4883v2
"... In this paper we present a linear programming solution for support recovery. Support recovery involves the estimation of sign pattern of a sparse signal from a set of randomly projected noisy measurements. Our solution of the problem amounts to solving min ‖Z‖1 s.t. Y = GZ, and quantizing/thresholdi ..."
Abstract

Cited by 4 (2 self)
In this paper we present a linear programming solution for support recovery. Support recovery involves estimating the sign pattern of a sparse signal from a set of randomly projected noisy measurements. Our solution amounts to solving min ‖Z‖1 s.t. Y = GZ, and quantizing/thresholding the resulting solution Z. We show that this scheme is guaranteed to perfectly reconstruct a discrete signal, or to control the elementwise reconstruction error for a continuous signal, for specific values of sparsity. We show that the sign pattern of X can be recovered with SNR = O(log n) and m = O(k log(n/k)) measurements, where k is the sparsity level and satisfies 0 < k ≤ αn for some nonzero constant α. Our proof technique is based on perturbation of the noiseless ℓ1 problem. Consequently, the maximum achievable sparsity level in the noisy problem is comparable to that of the noiseless problem. Our result offers a sharp characterization in that neither the SNR nor the sparsity ratio can be significantly improved. In contrast, previous results based on LASSO and max-correlation techniques either assume significantly larger SNR or sublinear sparsity. Our result has implications for approximately sparse problems: we show that the k largest coefficients of a non-sparse signal X can be recovered from m = O(k log(n/k)) random projections for certain classes of signals.
Distributed Sensor Perception via Sparse Representation
 Proceedings of the IEEE
"... Sensor network scenarios are considered where the underlying signals of interest exhibit a degree of sparsity, which means that in an appropriate basis, they can be expressed in terms of a small number of nonzero coefficients. Following the emerging theory of compressive sensing, an overall architec ..."
Abstract

Cited by 3 (0 self)
Sensor network scenarios are considered where the underlying signals of interest exhibit a degree of sparsity, meaning that in an appropriate basis they can be expressed in terms of a small number of nonzero coefficients. Following the emerging theory of compressive sensing, an overall architecture is considered where the sensors acquire potentially noisy projections of the data, and the underlying sparsity is exploited to recover useful information about the signals of interest; this is referred to as distributed sensor perception. First, we discuss which projections of the data should be acquired, and how many of them. Then, we discuss how to take advantage of possible joint sparsity of the signals acquired by multiple sensors, and show how this can further improve the inference of events from the sensor network. Two practical sensor applications are demonstrated: distributed wearable action recognition using low-power motion sensors, and distributed object recognition using high-power camera sensors. Experimental data support the utility of the compressive sensing framework in distributed sensor perception.
On the Performance of Compressive Video Streaming for Wireless Multimedia Sensor Networks
"... Abstract—This paper investigates the potential of the compressed sensing (CS) paradigm for video streaming in Wireless Multimedia Sensor Networks. The objective is to study performance limits and outline key design principles that will be the basis for crosslayer protocol stacks for efficient trans ..."
Abstract

Cited by 1 (0 self)
This paper investigates the potential of the compressed sensing (CS) paradigm for video streaming in Wireless Multimedia Sensor Networks. The objective is to study performance limits and outline key design principles that will be the basis for cross-layer protocol stacks for efficient transport of compressive video streams. Hence, this paper investigates the effect of key video parameters (i.e., quantization, CS samples per frame, and channel encoding rate) on the received video quality of CS images transmitted through a wireless channel. It is shown that, unlike JPEG-encoded images, CS-encoded images exhibit an inherent resiliency to channel errors, owing to the unstructured image representation; this leads to essentially zero loss in image quality for random channel bit error rates as high as 10^-4, and low degradation up to 10^-3. Furthermore, it is shown that, unlike in traditional wireless imaging systems, forward error correction is not beneficial for wireless transmission of CS images. Instead, an adaptive parity scheme that drops samples in error is proposed and shown to improve image quality. Finally, a low-complexity adaptive video encoder is proposed that performs low-complexity motion estimation on the sensors, greatly reducing the amount of data to be transmitted.
EXACT REGULARIZATION OF LINEAR PROGRAMS
, 2005
"... Abstract. We show that linear programs (LPs) admit regularizations that either contract the original (primal) solution set or leave it unchanged. Any regularization function that is convex and has compact level sets is allowed; differentiability is not required. This is an extension of the result fi ..."
Abstract

Cited by 1 (0 self)
We show that linear programs (LPs) admit regularizations that either contract the original (primal) solution set or leave it unchanged. Any regularization function that is convex and has compact level sets is allowed; differentiability is not required. This is an extension of the result first described by Mangasarian and Meyer (SIAM J. Control Optim., 17(6), pp. 745–752, 1979). We show that there always exist positive values of the regularization parameter such that a solution of the regularized problem simultaneously minimizes the original LP and minimizes the regularization function over the original solution set. We illustrate the main result using the nondifferentiable ℓ1 regularization function on a set of degenerate LPs. Numerical results demonstrate how such an approach yields sparse solutions from the application of an interior-point method.
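In symbols, the statement in the abstract reads as follows. Let $X^\star$ be the solution set of the LP $\min_x \{ c^\top x : Ax = b,\; x \ge 0 \}$, and let $\phi$ be any convex regularizer with compact level sets. The regularized problem is

```latex
\min_x \; c^\top x + \delta\,\phi(x)
\quad \text{s.t.} \quad Ax = b,\; x \ge 0 .
```

The result asserts that there exists $\bar\delta > 0$ such that for every $\delta \in (0, \bar\delta]$, any solution $x_\delta$ of the regularized problem satisfies $x_\delta \in X^\star$ and $\phi(x_\delta) = \min_{x \in X^\star} \phi(x)$. Taking $\phi(x) = \|x\|_1$ gives the sparse LP solutions mentioned in the numerical results.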
WynerZiv Image Coding from Random Projections
"... Abstract — In this paper, we present a WynerZiv coding based on random projections for image compression with side information at the decoder. The proposed coder consists of random projections (RPs), nested scalar quantization (NSQ), and SlepianWolf coding (SWC). Most of natural images are compres ..."
Abstract
In this paper, we present a Wyner-Ziv coder based on random projections for image compression with side information at the decoder. The proposed coder consists of random projections (RPs), nested scalar quantization (NSQ), and Slepian-Wolf coding (SWC). Most natural images are compressible, or sparse, in the sense that they are well-approximated by a linear combination of a few coefficients taken from a known basis, e.g., an FFT or wavelet basis. Recent results show that it is surprisingly possible to reconstruct a compressible signal to within very high accuracy from limited random projections by solving a simple convex optimization program. Nested quantization provides a practical scheme for lossy source coding with side information at the decoder to achieve further compression. SWC is lossless source coding with side information at the decoder. In this paper, ideal SWC is assumed, so rates are the conditional entropies of the NSQ quantization indices. Recent theoretical analysis shows that, for the quadratic Gaussian case and at high rate, NSQ with ideal SWC performs the same as conventional entropy-coded quantization with side information available at both the encoder and decoder. We note that the random-projection measurements of a large natural image behave like Gaussian random variables, because most random measurement matrices behave like Gaussian ones when their sizes are large. Hence, by combining random projections with NSQ and SWC, the trade-off between compression rate and distortion is improved. Simulation results support the proposed joint codec design and demonstrate the performance of the proposed compression systems.
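The nested-quantization step can be illustrated with a toy uniform scalar quantizer whose index is transmitted modulo N: the decoder resolves the ambiguity (the coset) using its side information. The step size and nesting factor below are illustrative, not taken from the paper:

```python
import numpy as np

def nsq_encode(x, delta, N):
    # uniform quantization, then transmit only the index modulo N (the coset label)
    return np.round(x / delta).astype(int) % N

def nsq_decode(idx, side, delta, N):
    # within the received coset, pick the reconstruction level closest to the side info;
    # this is correct whenever |side - q*delta| < N*delta/2 for the true index q
    base = np.round(side / delta).astype(int)
    cand = base - ((base - idx) % N)     # nearest coset member at or below base
    cand2 = cand + N                     # next coset member above
    best = np.where(np.abs(cand * delta - side) <= np.abs(cand2 * delta - side),
                    cand, cand2)
    return best * delta
```

Transmitting log2(N) bits per sample instead of the full quantizer index is where the rate saving comes from; the side information must be close enough to the source for the coset to be resolved unambiguously.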
Rank Smoothness
"... • When A has less rows than columns, there are an infinite number of solutions. • Which one should be selected? OR: Mining for Biomarkers • npatients << npeaks • If very few are needed for diagnosis, search for a sparse set of markers • l1, LASSO, etc. Recommender SystemsNetflix Prize • One million ..."
Abstract
• When A has fewer rows than columns, there are infinitely many solutions. Which one should be selected?
• Mining for biomarkers: n_patients << n_peaks. If very few peaks are needed for diagnosis, search for a sparse set of markers (ℓ1, LASSO, etc.).
• Recommender systems (Netflix Prize, one million dollars): given 100 million ratings on a scale of 1 to 5, predict 3 million ratings to highest accuracy; 17,770 movies × 480,189 users, over 8 billion possible ratings. How to fill in the blanks?
• Abstract setup (matrix completion): X_ij is known for the observed cells and unknown for the rest; rows index movies, columns index users. How do you fill in the missing data?
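For the fill-in-the-blanks question above, a standard baseline is singular value thresholding (SVT): repeatedly shrink the singular values and re-impose the observed entries. This is a generic sketch with illustrative parameters, not code from these slides:

```python
import numpy as np

def svt_complete(M_obs, mask, tau, step, iters=1000):
    # SVT iteration: X_k = shrink_tau(Y_k);  Y_{k+1} = Y_k + step * P_Omega(M - X_k)
    # mask is 1 on observed entries, 0 elsewhere; M_obs is zero off the mask
    Y = np.zeros_like(M_obs)
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt      # singular value shrinkage
        Y = Y + step * mask * (M_obs - X)            # gradient step on observed entries
    return X
```

The shrinkage step promotes low rank, which is the structural assumption that makes completing the unobserved ratings possible at all.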