Results 1–10 of 49
A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers
Stable principal component pursuit
In Proc. of International Symposium on Information Theory, 2010
Abstract

Cited by 94 (3 self)
We consider the problem of recovering a target matrix that is a superposition of low-rank and sparse components, from a small set of linear measurements. This problem arises in compressed sensing of structured high-dimensional signals such as videos and hyperspectral images, as well as in the analysis of transformation-invariant low-rank structure recovery. We analyze the performance of the natural convex heuristic for solving this problem, under the assumption that measurements are chosen uniformly at random. We prove that this heuristic exactly recovers low-rank and sparse terms, provided the number of observations exceeds the number of intrinsic degrees of freedom of the component signals by a polylogarithmic factor. Our analysis introduces several ideas that may be of independent interest for the more general problem of compressed sensing and decomposing superpositions of multiple structured signals.
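The natural convex heuristic referred to in this abstract is principal component pursuit: minimize ||L||_* + λ||S||_1 subject to L + S = M. As a minimal sketch of the fully observed case (an assumed ADMM implementation with standard parameter heuristics, not the authors' measurement-operator setting), it can be written as:

```python
import numpy as np

def shrink(X, tau):
    """Entrywise soft-thresholding: the prox operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular-value thresholding: the prox operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def pcp(M, n_iter=500):
    """Minimize ||L||_* + lam * ||S||_1 subject to L + S = M via ADMM."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))                   # standard weight choice
    mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)    # common step-size heuristic
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                             # scaled dual variable
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)            # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)         # sparse update
        Y = Y + mu * (M - L - S)                     # dual ascent on L + S = M
    return L, S
```

On a small synthetic matrix (random low-rank plus a few large sparse spikes), this sketch recovers both components to within a few percent; the parameter heuristics above are conventional defaults, not values taken from this paper.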
Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions
Annals of Statistics, 40(2):1171
Abstract

Cited by 63 (11 self)
We analyze a class of estimators based on convex relaxation for solving high-dimensional matrix decomposition problems. The observations are noisy realizations of a linear transformation X of the sum of an (approximately) low rank matrix Θ⋆ with a second matrix Γ⋆ endowed with a complementary form of low-dimensional structure; this setup includes many statistical models of interest, including factor analysis, multi-task regression and robust covariance estimation. We derive a general theorem that bounds the Frobenius norm error for an estimate of the pair (Θ⋆, Γ⋆) obtained by solving a convex optimization problem that combines the nuclear norm with a general decomposable regularizer. Our results use a “spikiness” condition that is related to, but milder than, singular vector incoherence. We specialize our general result to two cases that have been studied in past work: low rank plus an entrywise sparse matrix, and low rank plus a columnwise sparse matrix. For both models, our theory yields nonasymptotic Frobenius error bounds for both deterministic and stochastic noise matrices, and applies to matrices Θ⋆ that can be exactly or approximately low rank, and matrices Γ⋆ that can be exactly or approximately sparse. Moreover, for the case of stochastic noise matrices and the identity observation operator, we establish matching lower bounds on the minimax error. The sharpness of our nonasymptotic predictions is confirmed by numerical simulations.
Optimal power flow over tree networks
Proceedings of the Forty-Ninth Annual Allerton Conference
, 2011
Abstract

Cited by 28 (13 self)
The optimal power flow (OPF) problem is critical to power system operation but it is generally nonconvex and therefore hard to solve. Recently, a sufficient condition has been found under which OPF has zero duality gap, which means that its solution can be computed efficiently by solving the convex dual problem. In this paper we simplify this sufficient condition through a reformulation of the problem and prove that the condition is always satisfied for a tree network provided we allow over-satisfaction of load. The proof, cast as a complex semidefinite program, makes use of the fact that if the underlying graph of an n × n Hermitian positive semidefinite matrix is a tree, then the matrix has rank at least n − 1.
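The rank fact quoted above can be sanity-checked numerically on an assumed example (not taken from the paper): the Laplacian of a connected tree is a real symmetric positive semidefinite matrix whose underlying graph is exactly that tree, and its rank is exactly n − 1, so the bound is attained:

```python
import numpy as np

def path_laplacian(n):
    """Laplacian of the path graph 1-2-...-n, a tree on n nodes.
    It is symmetric PSD and its off-diagonal sparsity graph is that tree."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0   # edges of the path
    D = np.diag(A.sum(axis=1))            # degree matrix
    return D - A

n = 8
L = path_laplacian(n)
eigvals = np.linalg.eigvalsh(L)
# All eigenvalues are nonnegative (PSD) and exactly one is zero,
# so rank(L) = n - 1, consistent with the theorem's lower bound.
```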
Quadratically constrained quadratic programs on acyclic graphs with application to power flow
, 2013
Recursive robust PCA or recursive sparse recovery in large but structured noise
In IEEE Intl. Symp. on Information Theory (ISIT), 2013
Abstract

Cited by 22 (17 self)
Robust computation of linear models, or how to find a needle in a haystack
Abstract

Cited by 18 (5 self)
Abstract. Consider a dataset of vector-valued observations that consists of a modest number of noisy inliers, which are explained well by a low-dimensional subspace, along with a large number of outliers, which have no linear structure. This work describes a convex optimization problem, called reaper, that can reliably fit a low-dimensional model to this type of data. The paper provides an efficient algorithm for solving the reaper problem, and it documents numerical experiments which confirm that reaper can dependably find linear structure in synthetic and natural data. In addition, when the inliers are contained in a low-dimensional subspace, there is a rigorous theory that describes when reaper can recover the subspace exactly.
Outlier-Robust PCA: The High-Dimensional Case
Abstract

Cited by 12 (6 self)
Principal Component Analysis plays a central role in statistics, engineering and science. Because of the prevalence of corrupted data in real-world applications, much research has focused on developing robust algorithms. Perhaps surprisingly, these algorithms are unequipped, indeed unable, to deal with outliers in the high-dimensional setting where the number of observations is of the same magnitude as the number of variables of each observation, and the data set contains some (arbitrarily) corrupted observations. We propose a High-dimensional Robust Principal Component Analysis (HR-PCA) algorithm that is as efficient as PCA, robust to contaminated points, and easily kernelizable. In particular, our algorithm achieves maximal robustness: it has a breakdown point of 50% (the best possible) while all existing algorithms have a breakdown point of zero. Moreover, our algorithm recovers the optimal solution exactly in the case where the number of corrupted points grows sublinearly in the dimension.
An online algorithm for separating sparse and low-dimensional signal sequences from their sum
IEEE Trans. Signal Process.
Abstract

Cited by 9 (7 self)
Abstract—This paper designs and extensively evaluates an online algorithm, called practical recursive projected compressive sensing (Prac-ReProCS), for recovering a time sequence of sparse vectors S_t and a time sequence of dense vectors L_t from their sum, M_t := S_t + L_t, when the L_t's lie in a slowly changing low-dimensional subspace of the full space. A key application where this problem occurs is in real-time video layering, where the goal is to separate a video sequence into a slowly changing background sequence and a sparse foreground sequence that consists of one or more moving regions/objects, on-the-fly. Prac-ReProCS is a practical modification of its theoretical counterpart, which was analyzed in our recent work. An extension to the undersampled case is also developed. Extensive experimental comparisons demonstrating the advantage of the approach for both simulated and real videos, over existing batch and recursive methods, are shown. Index Terms—Online robust PCA, recursive sparse recovery, large but structured noise, compressed sensing.
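The core projected-CS idea can be sketched as follows. This is a hedged, simplified single step with a known, fixed subspace basis (an assumption for illustration; the authors' Prac-ReProCS additionally updates the basis recursively as the subspace changes): project the observation onto the orthogonal complement of the current subspace estimate to nullify most of the dense component, detect the sparse support by thresholding, then solve least squares on that support.

```python
import numpy as np

def projected_cs_step(m_t, P, thresh):
    """One simplified projected-CS recovery step (illustrative only).
    m_t    : observation, m_t = l_t + s_t
    P      : orthonormal basis (columns) of the current subspace estimate
    thresh : support-detection threshold on the projected residual
    Returns (s_hat, l_hat)."""
    n = m_t.shape[0]
    Phi = np.eye(n) - P @ P.T      # projector onto the complement of span(P)
    y = Phi @ m_t                  # annihilates l_t when l_t lies in span(P)
    support = np.abs(y) > thresh   # large residual entries flag s_t's support
    s_hat = np.zeros(n)
    # Least squares restricted to the detected support: y = Phi[:, support] @ s
    s_hat[support] = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
    l_hat = m_t - s_hat            # dense component recovered by subtraction
    return s_hat, l_hat
```

When the sparse entries are large relative to the projection leakage, the support is detected exactly and the restricted least-squares system is consistent, so both components are recovered up to numerical precision.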