Results 1–10 of 35
Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization
, 2007
"... The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative ..."
Cited by 219 (14 self)
The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard, because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum rank solution can be recovered by solving a convex optimization problem, namely the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this pre-existing concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to solving the norm minimization relaxations, and illustrate our results with numerical examples.
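A minimal sketch of the convex relaxation described above, assuming a random Gaussian measurement ensemble and the cvxpy modeling package (not the authors' code); the nuclear norm is minimized over the affine space defined by the measurements, and all problem sizes are illustrative:

# Sketch: low-rank matrix recovery by nuclear norm minimization (assumed setup,
# not the paper's implementation).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r, m = 12, 2, 100                     # matrix size, true rank, number of measurements

X0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank-r ground truth
A = [rng.standard_normal((n, n)) for _ in range(m)]              # Gaussian sensing matrices
b = np.array([np.sum(Ai * X0) for Ai in A])                      # b_i = <A_i, X0>

X = cp.Variable((n, n))
constraints = [cp.sum(cp.multiply(Ai, X)) == bi for Ai, bi in zip(A, b)]
cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()

print("relative error:", np.linalg.norm(X.value - X0) / np.linalg.norm(X0))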
Distributed compressed sensing
, 2005
"... Compressed sensing is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. In this paper we introduce a new theory for distributed compressed sensing (DCS) that enables new distributed coding algori ..."
Cited by 85 (21 self)
Compressed sensing is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. In this paper we introduce a new theory for distributed compressed sensing (DCS) that enables new distributed coding algorithms for multi-signal ensembles that exploit both intra- and inter-signal correlation structures. The DCS theory rests on a new concept that we term the joint sparsity of a signal ensemble. We study in detail three simple models for jointly sparse signals, propose algorithms for joint recovery of multiple signals from incoherent projections, and characterize theoretically and empirically the number of measurements per sensor required for accurate reconstruction. We establish a parallel with the Slepian–Wolf theorem from information theory and establish upper and lower bounds on the measurement rates required for encoding jointly sparse signals. In two of our three models, the results are asymptotically best possible, meaning that both the upper and lower bounds match the performance of our practical algorithms. Moreover, simulations indicate that the asymptotics take effect with just a moderate number of signals. In some sense DCS is a framework for distributed compression of sources with memory, which has remained a challenging problem for some time. DCS is immediately applicable to a range of problems in sensor networks and arrays.
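As an illustration of joint recovery under a common-support model (one flavor of joint sparsity), the sketch below uses an l2,1 mixed-norm relaxation rather than the paper's own algorithms; the shared-support signal model, problem sizes, and use of cvxpy are assumptions made for the example:

# Sketch: joint recovery of signals sharing a support, each sensor observing its own
# random projections (not the DCS algorithms from the paper).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, k, J, m = 50, 5, 4, 20                # length, sparsity, number of signals, measurements per sensor

support = rng.choice(n, k, replace=False)
U0 = np.zeros((n, J))
U0[support, :] = rng.standard_normal((k, J))          # shared support, distinct values

Phi = [rng.standard_normal((m, n)) / np.sqrt(m) for _ in range(J)]
Y = [Phi[j] @ U0[:, j] for j in range(J)]

U = cp.Variable((n, J))
objective = cp.Minimize(cp.sum(cp.norm(U, 2, axis=1)))   # l2,1 norm: sum of row norms
constraints = [Phi[j] @ U[:, j] == Y[j] for j in range(J)]
cp.Problem(objective, constraints).solve()
print("relative error:", np.linalg.norm(U.value - U0) / np.linalg.norm(U0))

Whether recovery is exact depends on the number of measurements per sensor; the point of the sketch is only that the signals are reconstructed jointly rather than one at a time.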
A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers
"... ..."
An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems
, 2009
"... ..."
Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing
 SIAM J. Imaging Sci
, 2008
"... Abstract. We propose simple and extremely efficient methods for solving the basis pursuit problem min{‖u‖1: Au = f,u ∈ R n}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number o ..."
Cited by 62 (14 self)
Abstract. We propose simple and extremely efficient methods for solving the basis pursuit problem min{‖u‖_1 : Au = f, u ∈ R^n}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem min_{u∈R^n} μ‖u‖_1 + (1/2)‖Au − f^k‖_2^2 for a given matrix A and vector f^k. We show analytically that this iterative approach yields exact solutions in a finite number of steps and present numerical results that demonstrate that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix–vector operations involving A and A^T can be computed by fast transforms. Utilizing a fast fixed-point continuation solver that is based solely on such operations for solving the above unconstrained subproblem, we were able to quickly solve huge instances of compressed sensing problems on a standard PC.
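A small sketch of the "add back the residual" Bregman iteration described above, with the inner unconstrained ℓ1 problem solved here by plain ISTA rather than the paper's fixed-point continuation solver; μ, the iteration counts, and the problem sizes are illustrative choices:

# Sketch: Bregman iteration for basis pursuit min ||u||_1 s.t. Au = f
# (inner solver swapped for ISTA; not the paper's code).
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, f, mu, iters=500):
    """Approximately solve min_u mu*||u||_1 + 0.5*||Au - f||_2^2 by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part's gradient
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ u - f)
        u = soft_threshold(u - grad / L, mu / L)
    return u

def bregman_basis_pursuit(A, f, mu=1.0, outer_iters=6):
    """'Add back the residual' Bregman iteration."""
    u = np.zeros(A.shape[1])
    fk = np.zeros_like(f)
    for _ in range(outer_iters):
        fk = f + (fk - A @ u)              # residual added back to the data
        u = ista(A, fk, mu)
    return u

rng = np.random.default_rng(0)
m, n, k = 60, 200, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
u0 = np.zeros(n); u0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
f = A @ u0
u_hat = bregman_basis_pursuit(A, f)
print("relative error:", np.linalg.norm(u_hat - u0) / np.linalg.norm(u0))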
Fast Linearized Bregman Iteration for Compressed Sensing and Sparse Denoising
 UCLA CAM Reports
, 2008
"... Abstract. Finding a solution of a linear equation Au = f with various minimization properties arises from many applications. One of such applications is compressed sensing, where an efficient and robusttonoise algorithm to find a minimal ℓ1 norm solution is needed. This means that the algorithm sh ..."
Cited by 58 (16 self)
Abstract. Finding a solution of a linear equation Au = f with various minimization properties arises from many applications. One such application is compressed sensing, where an efficient and robust-to-noise algorithm to find a minimal ℓ1-norm solution is needed. This means that the algorithm should be tailored for large-scale and completely dense matrices A, while Au and A^T u can be computed by fast transforms and the solution sought is sparse. Recently, a simple and fast algorithm based on linearized Bregman iteration was proposed in [28, 32] for this purpose. This paper analyzes the convergence of linearized Bregman iterations and the minimization properties of their limit. Based on our analysis, we also derive a new algorithm that is proven to be convergent with a rate. Furthermore, the new algorithm is as simple and fast as the algorithm given in [28, 32] in approximating a minimal ℓ1-norm solution of Au = f, as shown by numerical simulations. Hence, it can be used as another choice of an efficient tool in compressed sensing. 1. Introduction. Let A ∈ R^{m×n} with n > m and f ∈ R^m be given. The aim of a basis pursuit problem is to find u ∈ R^n by solving the constrained minimization problem min_{u∈R^n} ‖u‖_1 subject to Au = f.
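A sketch of one common parametrization of the plain linearized Bregman iteration for the basis pursuit problem above (not the accelerated variant analyzed in the paper); the step size, threshold, and iteration count are illustrative and chosen conservatively so the iteration converges on this small random instance:

# Sketch: plain linearized Bregman iteration for min ||u||_1 s.t. Au = f.
# Its limit solves a quadratically regularized basis pursuit problem; for large
# mu*delta it approximates the minimal l1-norm solution. Parameters are illustrative,
# and the non-accelerated form below can need many iterations.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def linearized_bregman(A, f, mu, delta, iters=20000):
    u = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])
    for _ in range(iters):
        v = v + A.T @ (f - A @ u)          # accumulate the residual in the dual variable
        u = delta * soft_threshold(v, mu)  # shrinkage step
    return u

rng = np.random.default_rng(0)
m, n, k = 60, 200, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
u0 = np.zeros(n); u0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
f = A @ u0

delta = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)   # conservative step size
mu = 5.0 / delta                                # large mu*delta -> close to the l1 solution
u_hat = linearized_bregman(A, f, mu, delta)
print("relative error:", np.linalg.norm(u_hat - u0) / np.linalg.norm(u0))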
Bayesian Compressed Sensing via Belief Propagation
, 2010
"... Compressive sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable, subNyquist signal acquisition. When a statistical characterization of the signal is available, Bayesian inference can comple ..."
Cited by 56 (13 self)
Compressive sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable, sub-Nyquist signal acquisition. When a statistical characterization of the signal is available, Bayesian inference can complement conventional CS methods based on linear programming or greedy algorithms. We perform asymptotically optimal Bayesian inference using belief propagation (BP) decoding, which represents the CS encoding matrix as a graphical model. Fast computation is obtained by reducing the size of the graphical model with sparse encoding matrices. To decode a length-N signal containing K large coefficients, our CS-BP decoding algorithm uses O(K log(N)) measurements and O(N log²(N)) computation. Finally, although we focus on a two-state mixture Gaussian model, CS-BP is easily adapted to other signal models.
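A small sketch of the problem setup only (the encoder side; the CS-BP decoder itself, which runs belief propagation on the factor graph induced by the nonzeros of the encoding matrix, is not reproduced here). The column weight of the sparse matrix, the mixture parameters, and all sizes are assumptions chosen for illustration:

# Sketch: two-state mixture Gaussian signal and a sparse encoding matrix, as assumed
# by CS-BP-style decoding (encoder only; not the authors' code).
import numpy as np

rng = np.random.default_rng(0)
n, m, L = 1000, 300, 10          # signal length, measurements, nonzeros per column
p_large = 0.05                   # probability of a "large" coefficient
sigma_large, sigma_small = 10.0, 0.1

# Two-state mixture Gaussian signal: mostly near-zero entries, a few large ones.
state = rng.random(n) < p_large
x = np.where(state, rng.normal(0, sigma_large, n), rng.normal(0, sigma_small, n))

# Sparse LDPC-like encoding matrix: each column has L random +/-1 entries, so the
# bipartite graph between signal nodes and measurement nodes stays sparse.
Phi = np.zeros((m, n))
for j in range(n):
    rows = rng.choice(m, L, replace=False)
    Phi[rows, j] = rng.choice([-1.0, 1.0], L)

y = Phi @ x                      # measurements; BP decoding would operate on (Phi, y)
print("nonzeros in Phi:", np.count_nonzero(Phi), "of", m * n)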
Compressed Sensing Reconstruction via Belief Propagation
, 2006
"... Compressed sensing is an emerging field that enables to reconstruct sparse or compressible signals from a small number of linear projections. We describe a specific measurement scheme using an LDPClike measurement matrix, which is a realvalued analogue to LDPC techniques over a finite alphabet. We ..."
Cited by 40 (8 self)
Compressed sensing is an emerging field that enables the reconstruction of sparse or compressible signals from a small number of linear projections. We describe a specific measurement scheme using an LDPC-like measurement matrix, which is a real-valued analogue to LDPC techniques over a finite alphabet. We then describe the reconstruction details for mixture Gaussian signals. The technique can be extended to additional compressible signal models.
A Single-letter Characterization of Optimal Noisy Compressed Sensing
"... Abstract—Compressed sensing deals with the reconstruction of a highdimensional signal from far fewer linear measurements, where the signal is known to admit a sparse representation in a certain linear space. The asymptotic scaling of the number of measurements needed for reconstruction as the dimen ..."
Cited by 25 (9 self)
Abstract—Compressed sensing deals with the reconstruction of a high-dimensional signal from far fewer linear measurements, where the signal is known to admit a sparse representation in a certain linear space. The asymptotic scaling of the number of measurements needed for reconstruction as the dimension of the signal increases has been studied extensively. This work takes a fundamental perspective on the problem of inferring about individual elements of the sparse signal given the measurements, where the dimensions of the system become increasingly large. Using the replica method, the outcome of inferring about any fixed collection of signal elements is shown to be asymptotically decoupled, i.e., those elements become independent conditioned on the measurements. Furthermore, the problem of inferring about each signal element admits a single-letter characterization in the sense that the posterior distribution of the element, which is a sufficient statistic, becomes asymptotically identical to the posterior of inferring about the same element in scalar Gaussian noise. The result leads to a simple characterization of all other elemental metrics of the compressed sensing problem, such as the mean squared error and the error probability for reconstructing the support set of the sparse signal. Finally, the single-letter characterization is rigorously justified in the special case of sparse measurement matrices, where belief propagation becomes asymptotically optimal.
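In generic notation (the symbols here are not the paper's own), the single-letter characterization above can be summarized as follows: writing y = Φx + w for the noisy measurements and x_j for one element of the sparse signal, the claim is that, as the system dimensions grow,

p(x_j | y, Φ) → p(x_j | v_j), where v_j = x_j + σ_eff · z_j, z_j ~ N(0, 1),

with the effective noise level σ_eff determined by the replica fixed-point analysis. Elemental metrics then follow from this scalar channel, e.g. the per-element mean squared error is E[(x_j − E[x_j | v_j])²].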
Identification of matrices having a sparse representation
, 2007
"... We consider the problem of recovering a matrix from its action on a known vector in the setting where the matrix can be represented efficiently in a known matrix dictionary. Connections with sparse signal recovery allows for the use of efficient reconstruction techniques such as Basis Pursuit (BP). ..."
Cited by 25 (7 self)
We consider the problem of recovering a matrix from its action on a known vector in the setting where the matrix can be represented efficiently in a known matrix dictionary. Connections with sparse signal recovery allow for the use of efficient reconstruction techniques such as Basis Pursuit (BP). Of particular interest is the dictionary of time-frequency shift matrices and its role for channel estimation and identification in communications engineering. We present recovery results for BP with the time-frequency shift dictionary and various dictionaries of random matrices.
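A toy sketch of the identification setup described above, using a dictionary of random matrices rather than the time-frequency shift dictionary analyzed in the paper, and basis pursuit via cvxpy; dictionary size, sparsity, and solver are illustrative assumptions:

# Sketch: identify a matrix with a sparse representation in a matrix dictionary from
# its action on one known probe vector (assumed random dictionary, not the paper's).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d, k = 20, 150, 4                       # matrix size, dictionary size, sparsity

B = [rng.standard_normal((n, n)) / n for _ in range(d)]    # matrix dictionary
x0 = np.zeros(d); x0[rng.choice(d, k, replace=False)] = rng.standard_normal(k)
M = sum(x0[i] * B[i] for i in range(d))                    # matrix with sparse representation

g = rng.standard_normal(n)                 # known probe vector
y = M @ g                                  # observed action of M on g

# Since M g = sum_i x_i (B_i g), identification reduces to sparse recovery in the
# effective dictionary Psi whose columns are B_i g.
Psi = np.column_stack([Bi @ g for Bi in B])
x = cp.Variable(d)
cp.Problem(cp.Minimize(cp.norm1(x)), [Psi @ x == y]).solve()
print("relative error:", np.linalg.norm(x.value - x0) / np.linalg.norm(x0))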