A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers
The Convex Geometry of Linear Inverse Problems
, 2010
Abstract

Cited by 43 (11 self)
In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However, in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered are those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases such as sparse vectors (e.g., signal processing, statistics) and low-rank matrices (e.g., control, statistics), as well as several others including sums of a few permutation matrices (e.g., ranked elections, multi-object tracking), low-rank tensors (e.g., computer vision, neuroscience), orthogonal matrices (e.g., machine learning), and atomic measures (e.g., system identification). The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial ...
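The atomic norm specializes to familiar penalties for the two most common atomic sets named above. A minimal numerical sketch (our illustration, not code from the paper; the function names are ours): for the signed standard basis atoms {±e_i} the atomic norm is the ℓ1 norm, and for unit-norm rank-one atoms it is the nuclear norm.

```python
import numpy as np

# Atomic norms for two classic atomic sets: signed standard basis
# vectors (sparse models) and unit-norm rank-one matrices (low-rank
# models). These reduce to the l1 and nuclear norms, respectively.
def atomic_norm_sparse(x):
    return np.sum(np.abs(x))

def atomic_norm_lowrank(M):
    return np.sum(np.linalg.svd(M, compute_uv=False))

x = np.array([3.0, 0.0, -4.0])
assert atomic_norm_sparse(x) == 7.0

M = np.outer([1.0, 0.0], [2.0, 0.0])   # rank one, singular value 2
assert np.isclose(atomic_norm_lowrank(M), 2.0)
```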
Estimation of (near) low-rank matrices with noise and high-dimensional scaling
Abstract

Cited by 35 (11 self)
We study an instance of high-dimensional statistical inference in which the goal is to use N noisy observations to estimate a matrix Θ* ∈ R^{k×p} that is assumed to be either exactly low rank, or “near” low-rank, meaning that it can be well-approximated by a matrix with low rank. We consider an M-estimator based on regularization by the trace or nuclear norm over matrices, and analyze its performance under high-dimensional scaling. We provide non-asymptotic bounds on the Frobenius norm error that hold for a general class of noisy observation models, and apply to both exactly low-rank and approximately low-rank matrices. We then illustrate their consequences for a number of specific learning models, including low-rank multivariate or multitask regression, system identification in vector autoregressive processes, and recovery of low-rank matrices from random projections. Simulations show excellent agreement with the high-dimensional scaling of the error predicted by our theory.
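The proximal step behind such nuclear-norm-regularized M-estimators is singular value soft-thresholding, which shrinks each singular value toward zero and drives small ones exactly to zero. A minimal sketch (standard operator, not code from the paper):

```python
import numpy as np

def svt(Y, lam):
    """Singular value thresholding: the prox of lam * (nuclear norm)
    at Y, i.e. soft-threshold the singular values of Y by lam."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

# Thresholding a diagonal matrix shrinks each singular value and
# annihilates those below the threshold, reducing the rank.
X = svt(np.diag([3.0, 1.0]), 2.0)
assert np.allclose(X, np.diag([1.0, 0.0]))
```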
Giannakis, “From sparse signals to sparse residuals for robust sensing
 IEEE Trans. Signal Processing
, 2010
Abstract

Cited by 7 (4 self)
Abstract—One of the key challenges in sensor networks is the extraction of information by fusing data from a multitude of distinct, but possibly unreliable, sensors. Recovering information from the maximum number of dependable sensors while specifying the unreliable ones is critical for robust sensing. This sensing task is formulated here as that of finding the maximum number of feasible subsystems of linear equations, and proved to be NP-hard. Useful links are established with compressive sampling, which aims at recovering vectors that are sparse. In contrast, the signals here are not sparse, but give rise to sparse residuals. Capitalizing on this form of sparsity, four sensing schemes with complementary strengths are developed. The first scheme is a convex relaxation of the original problem expressed as a second-order cone program (SOCP). It is shown that when the involved sensing matrices are Gaussian and the reliable measurements are sufficiently many, the SOCP can recover the optimal solution with overwhelming probability. The second scheme is obtained by replacing the initial objective function with a concave one. The third and fourth schemes are tailored for noisy sensor data. The noisy case is cast as a combinatorial problem that is subsequently surrogated by a (weighted) SOCP. Interestingly, the derived cost functions fall into the framework of robust multivariate linear regression, while an efficient block-coordinate descent algorithm is developed for their minimization. The robust sensing capabilities of all schemes are verified by simulated tests. Index Terms—Compressive sampling, convex relaxation, coordinate descent, multivariate regression, robust methods, sensor networks.
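The connection to robust regression can be seen on a toy problem. The following is not the paper's SOCP formulation; it is a minimal iteratively reweighted least squares sketch of the same sparse-residual idea, fitting a model under an ℓ1 residual loss that ignores one grossly corrupted measurement:

```python
import numpy as np

def l1_regression(A, y, n_iter=50, eps=1e-8):
    """Minimise sum_i |y_i - a_i^T x| (a sparse-residual-friendly loss)
    by iteratively reweighted least squares."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(y - A @ x), eps)  # downweight outliers
        AtW = A.T * w                                  # scale columns by w
        x = np.linalg.solve(AtW @ A, AtW @ y)          # weighted normal eqs
    return x

t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * t
y[-1] = 100.0                       # one unreliable "sensor"
x = l1_regression(t[:, None], y)
assert abs(x[0] - 2.0) < 1e-3       # slope recovered despite the outlier
```

An ordinary least squares fit to the same data would be pulled far off the true slope by the single corrupted entry; the reweighting concentrates the fit on the dependable measurements.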
LOW-RANK MATRIX RECOVERY VIA ITERATIVELY REWEIGHTED LEAST SQUARES MINIMIZATION
Abstract

Cited by 6 (1 self)
Abstract. We present and analyze an efficient implementation of an iteratively reweighted least squares algorithm for recovering a matrix from a small number of linear measurements. The algorithm is designed for the simultaneous promotion of both a minimal nuclear norm and an approximately low-rank solution. Under the assumption that the linear measurements fulfill a suitable generalization of the Null Space Property known in the context of compressed sensing, the algorithm is guaranteed to recover iteratively any matrix with an error of the order of the best rank-k approximation. In certain relevant cases, for instance the matrix completion problem, our version of this algorithm can take advantage of the Woodbury matrix identity, which allows us to expedite the solution of the least squares problems required at each iteration. We present numerical experiments which confirm the robustness of the algorithm for the solution of matrix completion problems, and demonstrate its competitiveness with respect to other techniques proposed recently in the literature. AMS subject classification: 65J22, 65K10, 52A41, 49M30. Key Words: low-rank matrix recovery, iteratively reweighted least squares, matrix completion.
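IRLS algorithms of this kind rest on a smoothed variational form of the nuclear norm: tr((XXᵀ + εI)^{-1/2} XXᵀ) = Σᵢ σᵢ²/√(σᵢ² + ε), which tends to ‖X‖_* as ε → 0 and whose weight matrix drives the reweighting. A quick numerical check of this surrogate (our illustration, not the paper's implementation):

```python
import numpy as np

def smoothed_nuclear_norm(X, eps):
    """tr((X X^T + eps I)^{-1/2} X X^T): a smoothed surrogate that
    tends to the nuclear norm of X as eps -> 0."""
    G = X @ X.T + eps * np.eye(X.shape[0])
    vals, U = np.linalg.eigh(G)
    W = U @ np.diag(vals ** -0.5) @ U.T     # (X X^T + eps I)^{-1/2}
    return np.trace(W @ (X @ X.T))

X = np.diag([3.0, 4.0])                     # singular values 3 and 4
assert abs(smoothed_nuclear_norm(X, 1e-12) - 7.0) < 1e-6
```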
Reweighted ℓ1-Minimization for Sparse Solutions to Underdetermined Linear Systems
, 2011
Abstract

Cited by 2 (0 self)
Abstract. Numerical experiments have indicated that reweighted ℓ1-minimization performs exceptionally well in locating sparse solutions of underdetermined linear systems of equations. Thus it is important to carry out a further investigation of this class of methods. In this paper, we point out that reweighted ℓ1-methods are intrinsically associated with the minimization of so-called merit functions for sparsity, which are essentially concave approximations to the cardinality function. Based on this observation, we further show that a family of reweighted ℓ1-algorithms can be systematically derived from the perspective of concave optimization through the linearization technique. In order to conduct a unified convergence analysis for this family of algorithms, we introduce the concept of the Range Space Property (RSP) of matrices, and prove that if A^T has this property, the reweighted ℓ1-algorithms can find a sparse solution to the underdetermined linear system provided that the merit function for sparsity is properly chosen. In particular, some convergence conditions (based on the RSP) for the Candès-Wakin-Boyd method and the recent ℓp-quasi-norm-based reweighted ℓ1-minimization can be obtained as special cases of the general framework. Key words. Reweighted ℓ1-minimization, sparse solution, underdetermined linear system, concave minimization, merit function for sparsity, compressive sensing.
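The linearization step can be verified numerically. For the merit function F(x) = Σᵢ log(ε + |xᵢ|), concavity of t ↦ log(ε + t) gives the tangent-line bound F(x) ≤ F(xᵏ) + Σᵢ wᵢ(|xᵢ| − |xᵢᵏ|) with wᵢ = 1/(ε + |xᵢᵏ|), which is exactly the weighted ℓ1 objective minimized at each reweighting step (the Candès-Wakin-Boyd weights). A sketch checking this majorization:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.1
F = lambda x: np.sum(np.log(eps + np.abs(x)))   # concave sparsity merit

xk = rng.normal(size=5)                         # current iterate
w = 1.0 / (eps + np.abs(xk))                    # reweighted-l1 weights

# Each reweighted l1 step minimises a majoriser of F: by concavity,
# log(eps + t) <= log(eps + tk) + (t - tk) / (eps + tk) for t, tk >= 0.
for _ in range(1000):
    x = rng.normal(size=5)
    assert F(x) <= F(xk) + w @ (np.abs(x) - np.abs(xk)) + 1e-12
```

Because the weighted ℓ1 problem majorizes F at the current iterate, each step of the reweighted algorithm cannot increase the merit function, which is the backbone of the convergence analysis.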
Strongly convex programming for exact matrix completion and robust principal component analysis
 Inverse Probl. Imaging
Abstract

Cited by 1 (1 self)
The common task in matrix completion (MC) and robust principal component analysis (RPCA) is to recover a low-rank matrix from a given data matrix. These problems have recently gained great attention from various areas in applied sciences, especially after the publication of the pioneering works of Candès et al. One fundamental result in MC and RPCA is that nuclear-norm-based convex optimizations lead to exact low-rank matrix recovery under suitable conditions. In this paper, we extend this result by showing that strongly convex optimizations can guarantee exact low-rank matrix recovery as well. The result in this paper not only provides sufficient conditions under which the strongly convex models lead to exact low-rank matrix recovery, but also guides us on how to choose suitable parameters in practical algorithms.
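For the prototypical strongly convex model, adding a Frobenius term only rescales the familiar singular value thresholding step: argmin_X λ‖X‖_* + (τ/2)‖X‖_F² + (1/2)‖X − Y‖_F² = SVT_{λ/(1+τ)}(Y/(1+τ)). This closed form is a standard fact about nuclear-norm proximal maps, not taken from the paper; a numerical sanity check:

```python
import numpy as np

def svt(Y, t):
    """Soft-threshold the singular values of Y by t."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

lam, tau = 1.0, 0.5
rng = np.random.default_rng(1)
Y = rng.normal(size=(4, 4))

def f(X):  # strongly convex objective (tau > 0)
    return (lam * np.linalg.svd(X, compute_uv=False).sum()
            + 0.5 * tau * (X ** 2).sum() + 0.5 * ((X - Y) ** 2).sum())

# Closed-form minimiser: scaled singular value thresholding.
X_star = svt(Y / (1 + tau), lam / (1 + tau))
for _ in range(200):               # no random perturbation improves f
    assert f(X_star) <= f(X_star + 0.1 * rng.normal(size=(4, 4))) + 1e-9
```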
On a unified view of null-space-type conditions for recoveries associated with general sparsity structures
Abstract
 Add to MetaCart
We discuss a general notion of “sparsity structure” and associated recoveries of a sparse signal from its linear image of reduced dimension, possibly corrupted with noise. Our approach allows for unified treatment of (a) the “usual sparsity” and “usual ℓ1 recovery,” (b) block-sparsity with possibly overlapping blocks and associated block-ℓ1 recovery, and (c) low-rank-oriented recovery by nuclear norm minimization. The proposed recovery routines are natural extensions of the usual ℓ1 minimization used in Compressed Sensing. Specifically, we present null-space-type sufficient conditions for the recovery to be precise on sparse signals in the noiseless case. Then we derive error bounds for imperfect (nearly sparse signal, presence of observation noise, etc.) recovery under these conditions. In all of these cases, we present efficiently verifiable sufficient conditions for the validity of the associated null-space properties.
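For the classical case (a), the null-space condition in question is the familiar Null Space Property: ℓ1 minimization recovers every s-sparse signal exactly if and only if every nonzero h in ker(A) satisfies ‖h_S‖₁ < ‖h_{S^c}‖₁ for all supports S of size s. A brute-force Monte Carlo check of this property on small matrices (our illustration; the paper's efficiently verifiable conditions are more refined):

```python
import numpy as np
from itertools import combinations

def nsp_holds(A, s, n_samples=500, seed=2):
    """Monte-Carlo test of the order-s Null Space Property: sampled
    h in ker(A) should satisfy ||h_S||_1 < ||h||_1 - ||h_S||_1 for
    every support S with |S| = s."""
    rng = np.random.default_rng(seed)
    r = np.linalg.matrix_rank(A)
    N = np.linalg.svd(A)[2][r:].T          # orthonormal basis of ker(A)
    n = A.shape[1]
    for _ in range(n_samples):
        h = N @ rng.normal(size=N.shape[1])
        for S in combinations(range(n), s):
            lhs = np.abs(h[list(S)]).sum()
            if lhs + 1e-9 >= np.abs(h).sum() - lhs:
                return False
    return True

# ker = span{(1,1,1)}: each coordinate is smaller than the sum of the
# other two, so order-1 NSP holds.
assert nsp_holds(np.array([[1.0, -1.0, 0.0], [1.0, 1.0, -2.0]]), 1)
# One measurement of three unknowns cannot give 1-sparse recovery.
assert not nsp_holds(np.array([[1.0, 1.0, 1.0]]), 1)
```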
Acknowledgements
, 2011
Abstract
 Add to MetaCart
When I was a master’s student in Canada, I always wished to closely collaborate with John Doyle and Richard Murray, who were two of the world-famous professors in the area of control. My dream came true when I met John Doyle at the 2006 IEEE Conference on Decision and Control. Indeed, he invited me to visit Caltech in order to discuss my research interests with him and Richard Murray, which led to my admission to the interdisciplinary department of Control & Dynamical Systems. It has been a great honor for me to have John Doyle as my primary PhD advisor and Richard Murray as my PhD co-advisor. I owe my deepest gratitude to John Doyle and Richard Murray for their guidance, support and encouragement. They taught me how to conduct high-impact interdisciplinary research and inspired me to work on a broad range of projects. I was always amazed by their invaluable insights, ideas and suggestions. I am forever indebted to my advisors for shaping my academic life. I also feel fortunate to have had the opportunity to closely collaborate with Steven Low during my PhD studies. I would like to thank him for motivating me to work on the two important areas of energy systems and communication networks. He was a constant source of inspirational ideas and discussions. Half of this dissertation has been developed under the great supervision of Steven Low. I would like to thank my old friend, Aydin Babakhani, and his former advisor, Ali Hajimiri, for introducing me to the interesting world of electrical circuits and antenna devices. Part II of this dissertation is the result of my collaboration with Aydin and Ali.