A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers
The Convex Geometry of Linear Inverse Problems, 2010
Abstract

Cited by 44 (11 self)
In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However, in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered are those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases such as sparse vectors (e.g., signal processing, statistics) and low-rank matrices (e.g., control, statistics), as well as several others including sums of a few permutation matrices (e.g., ranked elections, multi-object tracking), low-rank tensors (e.g., computer vision, neuroscience), orthogonal matrices (e.g., machine learning), and atomic measures (e.g., system identification). The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial …
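For the two most familiar atomic sets, the atomic norm recovers well-known penalties: signed unit basis vectors induce the ℓ1 norm, and unit-norm rank-one matrices induce the nuclear norm. A minimal numpy sketch of the low-rank case (the construction is illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "atoms": rank-one matrices normalized to unit (Frobenius) norm.
u1, v1 = rng.standard_normal(5), rng.standard_normal(4)
u2, v2 = rng.standard_normal(5), rng.standard_normal(4)
atom1 = np.outer(u1, v1) / (np.linalg.norm(u1) * np.linalg.norm(v1))
atom2 = np.outer(u2, v2) / (np.linalg.norm(u2) * np.linalg.norm(v2))

X = 3.0 * atom1 + 2.0 * atom2  # a "simple" model: sum of a few atoms

# The atomic norm induced by rank-one atoms is the nuclear norm
# (sum of singular values), the convex surrogate for rank.
nuclear = np.linalg.svd(X, compute_uv=False).sum()
print(nuclear)  # by the triangle inequality, at most 3 + 2 = 5
```

By convexity, the atomic norm of a weighted sum of atoms never exceeds the sum of the weights, which is the sense in which it measures how "few" atoms a model needs.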
Estimation of (near) low-rank matrices with noise and high-dimensional scaling
Abstract

Cited by 36 (11 self)
We study an instance of high-dimensional statistical inference in which the goal is to use N noisy observations to estimate a matrix Θ* ∈ R^{k×p} that is assumed to be either exactly low-rank, or "near" low-rank, meaning that it can be well approximated by a matrix with low rank. We consider an M-estimator based on regularization by the trace or nuclear norm over matrices, and analyze its performance under high-dimensional scaling. We provide non-asymptotic bounds on the Frobenius norm error that hold for a general class of noisy observation models, and apply to both exactly low-rank and approximately low-rank matrices. We then illustrate their consequences for a number of specific learning models, including low-rank multivariate or multi-task regression, system identification in vector autoregressive processes, and recovery of low-rank matrices from random projections. Simulations show excellent agreement with the high-dimensional scaling of the error predicted by our theory.
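The nuclear-norm M-estimator described above is convenient in practice because the proximal operator of the nuclear norm is singular-value soft-thresholding, so a proximal-gradient loop suffices. A hedged sketch (the function names and the identity observation operator are mine, chosen only for illustration):

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: prox of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def nuclear_prox_gradient(y, X_op, X_adj, shape, lam, step, iters=100):
    """Minimize 0.5*||y - X(Theta)||^2 + lam*||Theta||_* by proximal gradient."""
    Theta = np.zeros(shape)
    for _ in range(iters):
        grad = X_adj(X_op(Theta) - y)          # gradient of the quadratic loss
        Theta = svt(Theta - step * grad, step * lam)
    return Theta

# Toy example: identity observation operator on a rank-1 target.
rng = np.random.default_rng(1)
Theta_star = np.outer(rng.standard_normal(6), rng.standard_normal(5))
Theta_hat = nuclear_prox_gradient(
    Theta_star.ravel(), lambda T: T.ravel(), lambda r: r.reshape(6, 5),
    (6, 5), lam=0.01, step=1.0)
print(np.linalg.norm(Theta_hat - Theta_star))  # small bias from shrinkage
```

With a small regularization weight the estimate matches the target up to the shrinkage of the top singular value, which is the usual bias of nuclear-norm penalties.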
Giannakis, “From sparse signals to sparse residuals for robust sensing,” IEEE Trans. Signal Processing, 2010
Abstract

Cited by 7 (4 self)
Abstract—One of the key challenges in sensor networks is the extraction of information by fusing data from a multitude of distinct, but possibly unreliable, sensors. Recovering information from the maximum number of dependable sensors while specifying the unreliable ones is critical for robust sensing. This sensing task is formulated here as that of finding the maximum number of feasible subsystems of linear equations, and is proved to be NP-hard. Useful links are established with compressive sampling, which aims at recovering vectors that are sparse. In contrast, the signals here are not sparse, but give rise to sparse residuals. Capitalizing on this form of sparsity, four sensing schemes with complementary strengths are developed. The first scheme is a convex relaxation of the original problem expressed as a second-order cone program (SOCP). It is shown that when the involved sensing matrices are Gaussian and the reliable measurements are sufficiently many, the SOCP can recover the optimal solution with overwhelming probability. The second scheme is obtained by replacing the initial objective function with a concave one. The third and fourth schemes are tailored for noisy sensor data. The noisy case is cast as a combinatorial problem that is subsequently surrogated by a (weighted) SOCP. Interestingly, the derived cost functions fall into the framework of robust multivariate linear regression, while an efficient block-coordinate descent algorithm is developed for their minimization. The robust sensing capabilities of all schemes are verified by simulated tests. Index Terms—Compressive sampling, convex relaxation, coordinate descent, multivariate regression, robust methods, sensor networks.
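The sparse-residual idea above connects to robust regression with an ℓ1 penalty on a vector of outliers, where a block-coordinate descent alternates between a sparse residual update and a least-squares fit. A toy numpy sketch of that idea (the scalar-measurement setup and parameter values are my assumptions, not the paper's):

```python
import numpy as np

def soft(z, t):
    """Elementwise soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def robust_regression_bcd(A, y, lam, iters=100):
    """min_{x,o} 0.5*||y - A x - o||^2 + lam*||o||_1 via block-coordinate
    descent; the sparse vector o absorbs residuals of unreliable sensors."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(iters):
        o = soft(y - A @ x, lam)                    # sparse outlier estimate
        x = np.linalg.lstsq(A, y - o, rcond=None)[0]  # refit on cleaned data
    return x, o

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 5))
x_true = rng.standard_normal(5)
y = A @ x_true
y[:3] += 10.0                                        # three unreliable sensors
x_hat, o_hat = robust_regression_bcd(A, y, lam=1.0)
print(np.linalg.norm(x_hat - x_true))
```

Both block updates are exact minimizations of a convex objective, so the alternation converges; the three corrupted measurements show up as the large entries of `o_hat`.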
LOW-RANK MATRIX RECOVERY VIA ITERATIVELY REWEIGHTED LEAST SQUARES MINIMIZATION
Abstract

Cited by 6 (1 self)
Abstract. We present and analyze an efficient implementation of an iteratively reweighted least squares algorithm for recovering a matrix from a small number of linear measurements. The algorithm is designed for the simultaneous promotion of both a minimal nuclear norm and an approximately low-rank solution. Under the assumption that the linear measurements fulfill a suitable generalization of the Null Space Property known in the context of compressed sensing, the algorithm is guaranteed to recover iteratively any matrix with an error on the order of the best rank-k approximation. In certain relevant cases, for instance for the matrix completion problem, our version of this algorithm can take advantage of the Woodbury matrix identity, which allows it to expedite the solution of the least squares problems required at each iteration. We present numerical experiments which confirm the robustness of the algorithm for the solution of matrix completion problems, and demonstrate its competitiveness with respect to other techniques proposed recently in the literature. AMS subject classification: 65J22, 65K10, 52A41, 49M30. Key words: low-rank matrix recovery, iteratively reweighted least squares, matrix completion.
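The IRLS iteration described in the abstract can be imitated in a few lines for matrix completion: reweight by W = (X Xᵀ + ε I)^{-1/2}, then minimize tr(Xᵀ W X) column by column with the observed entries held fixed. A simplified numpy version (the ε schedule and problem sizes are my assumptions; a real implementation would exploit the Woodbury identity as the paper describes):

```python
import numpy as np

def irls_low_rank_completion(M_obs, mask, eps=1.0, iters=60):
    """IRLS sketch: reweight by W = (X X^T + eps I)^{-1/2}, then solve the
    weighted least-squares problem per column with observed entries fixed."""
    m, n = M_obs.shape
    X = np.where(mask, M_obs, 0.0)
    for _ in range(iters):
        U, s, _ = np.linalg.svd(X)          # X X^T = U diag(s^2) U^T
        s2 = np.zeros(m)
        s2[: s.size] = s ** 2
        W = (U * (1.0 / np.sqrt(s2 + eps))) @ U.T
        for j in range(n):
            obs, free = mask[:, j], ~mask[:, j]
            if free.any():
                # minimize x^T W x over the unobserved coordinates of column j
                X[free, j] = -np.linalg.solve(
                    W[np.ix_(free, free)], W[np.ix_(free, obs)] @ M_obs[obs, j]
                )
        eps = max(eps / 2.0, 1e-10)         # gradually sharpen the weights
    return X

rng = np.random.default_rng(3)
M = np.outer(rng.standard_normal(8), rng.standard_normal(8))  # rank-1 target
mask = rng.random((8, 8)) < 0.8
X_hat = irls_low_rank_completion(np.where(mask, M, 0.0), mask)
print(np.linalg.norm(X_hat - M) / np.linalg.norm(M))
```

The per-column solve works because the weighted norm tr(Xᵀ W X) separates over columns when W multiplies from the left; the observed entries act as the hard constraints.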
Reweighted ℓ1-Minimization for Sparse Solutions to Underdetermined Linear Systems, 2011
Abstract

Cited by 2 (0 self)
Abstract. Numerical experiments have indicated that reweighted ℓ1-minimization performs exceptionally well in locating sparse solutions of underdetermined linear systems of equations. Thus it is important to carry out a further investigation of this class of methods. In this paper, we point out that reweighted ℓ1 methods are intrinsically associated with the minimization of so-called merit functions for sparsity, which are essentially concave approximations to the cardinality function. Based on this observation, we further show that a family of reweighted ℓ1 algorithms can be systematically derived, from the perspective of concave optimization, through the linearization technique. In order to conduct a unified convergence analysis for this family of algorithms, we introduce the concept of the Range Space Property (RSP) of matrices, and prove that if A^T has this property, the reweighted ℓ1 algorithms can find a sparse solution to the underdetermined linear system provided that the merit function for sparsity is properly chosen. In particular, convergence conditions (based on the RSP) for the Candès-Wakin-Boyd method and the recent ℓp-quasi-norm-based reweighted ℓ1-minimization can be obtained as special cases of the general framework. Key words. Reweighted ℓ1-minimization, sparse solution, underdetermined linear system, concave minimization, merit function for sparsity, compressive sensing.
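The Candès-Wakin-Boyd scheme referenced above alternates a weighted basis-pursuit solve with the reweighting w_i = 1/(|x_i| + ε). A self-contained sketch, with the inner weighted ℓ1 problem solved by a plain ADMM loop rather than an LP solver (the solver choice and parameters are my assumptions):

```python
import numpy as np

def weighted_basis_pursuit(A, b, w, rho=1.0, iters=2000):
    """min sum_i w_i |x_i|  s.t.  A x = b, via a simple ADMM split."""
    m, n = A.shape
    AAt_inv = np.linalg.inv(A @ A.T)
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        v = z - u
        x = v - A.T @ (AAt_inv @ (A @ v - b))   # project onto {x : A x = b}
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - w / rho, 0.0)
        u += x - z
    return z                                     # the exactly-sparse iterate

def reweighted_l1(A, b, rounds=4, eps=0.1):
    """Candes-Wakin-Boyd reweighting: w_i = 1 / (|x_i| + eps)."""
    w = np.ones(A.shape[1])
    x = None
    for _ in range(rounds):
        x = weighted_basis_pursuit(A, b, w)
        w = 1.0 / (np.abs(x) + eps)
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((12, 20))
x_true = np.zeros(20)
x_true[[3, 11]] = [2.0, -1.5]
x_hat = reweighted_l1(A, A @ x_true)
print(np.round(x_hat, 3))
```

The first round (unit weights) is ordinary basis pursuit; the later rounds down-weight large entries, which is exactly the linearization of a concave log-sum merit function described in the abstract.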
Strongly convex programming for exact matrix completion and robust principal component analysis, Inverse Probl. Imaging
Abstract

Cited by 1 (1 self)
The common task in matrix completion (MC) and robust principal component analysis (RPCA) is to recover a low-rank matrix from a given data matrix. These problems have recently gained great attention from various areas in the applied sciences, especially after the publication of the pioneering works of Candès et al. One fundamental result in MC and RPCA is that nuclear-norm-based convex optimizations lead to exact low-rank matrix recovery under suitable conditions. In this paper, we extend this result by showing that strongly convex optimizations can guarantee exact low-rank matrix recovery as well. The result in this paper not only provides sufficient conditions under which the strongly convex models lead to exact low-rank matrix recovery, but also guides us in choosing suitable parameters in practical algorithms.
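A classical instance of the strongly convex models discussed here is the singular value thresholding (SVT) formulation of Cai, Candès and Shen, which minimizes τ‖X‖* + ½‖X‖²_F subject to agreement on the observed entries, via gradient ascent on the dual. A small numpy sketch (the parameter choices follow common defaults and are my assumptions):

```python
import numpy as np

def svt_complete(M, mask, tau=None, delta=None, iters=400):
    """SVT: dual gradient ascent for the strongly convex completion model
    min tau*||X||_* + 0.5*||X||_F^2  s.t.  X matches M on observed entries."""
    m, n = M.shape
    p = mask.mean()                                  # fraction observed
    tau = tau if tau is not None else 5.0 * np.sqrt(m * n)
    delta = delta if delta is not None else 1.2 / p  # dual step size
    Y = np.zeros((m, n))
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt      # shrink singular values
        Y += delta * np.where(mask, M - X, 0.0)      # ascend on the dual
    return X

rng = np.random.default_rng(5)
M = np.outer(rng.standard_normal(10), rng.standard_normal(10))  # rank-1
mask = rng.random((10, 10)) < 0.7
X_hat = svt_complete(np.where(mask, M, 0.0), mask)
print(np.linalg.norm(X_hat - M) / np.linalg.norm(M))
```

The Frobenius term makes the primal objective strongly convex, which is what gives the dual ascent iteration its simple closed-form steps; a large τ keeps the solution close to the pure nuclear-norm model.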
MATRIX COMPLETION MODELS WITH FIXED BASIS COEFFICIENTS AND RANK-REGULARIZED PROBLEMS WITH HARD CONSTRAINTS, 2013
Submitted to the Annals of Statistics
ESTIMATION OF (NEAR) LOW-RANK MATRICES WITH NOISE AND HIGH-DIMENSIONAL SCALING
Abstract
We study an instance of high-dimensional inference in which the goal is to estimate a matrix Θ* ∈ R^{m1×m2} on the basis of N noisy observations. The unknown matrix Θ* is assumed to be either exactly low-rank, or "near" low-rank, meaning that it can be well approximated by a matrix with low rank. We consider a standard M-estimator based on regularization by the nuclear or trace norm over matrices, and analyze its performance under high-dimensional scaling. We define the notion of restricted strong convexity (RSC) for the loss function, and use it to derive non-asymptotic bounds on the Frobenius norm error that hold for a general class of noisy observation models, and apply to both exactly low-rank and approximately low-rank matrices. We then illustrate consequences of this general theory for a number of specific matrix models, including low-rank multivariate or multi-task regression, system identification in vector autoregressive processes, and recovery of low-rank matrices from random projections.