Results 1–10 of 85
Robust Principal Component Analysis?
, 2009
Abstract

Cited by 160 (7 self)
This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis, since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
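The Principal Component Pursuit program described in this abstract (minimize ‖L‖* + λ‖S‖₁ subject to L + S = M) can be sketched with a standard augmented-Lagrangian (ADMM) iteration. The step sizes, default λ = 1/√max(m, n), and stopping rule below are common choices from the literature, not necessarily the paper's exact algorithm:

```python
import numpy as np

def pcp(M, lam=None, mu=None, n_iter=500, tol=1e-7):
    """Principal Component Pursuit sketch via ADMM:
    min ||L||_* + lam * ||S||_1  subject to  L + S = M."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # standard default weight
    if mu is None:
        mu = m * n / (4.0 * np.abs(M).sum())    # common penalty heuristic
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    norm_M = np.linalg.norm(M)
    for _ in range(n_iter):
        # low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(shrink(sig, 1.0 / mu)) @ Vt
        # sparse update: entrywise soft thresholding
        S = shrink(M - L + Y / mu, lam / mu)
        # dual ascent on the constraint L + S = M
        R = M - L - S
        Y = Y + mu * R
        if np.linalg.norm(R) <= tol * norm_M:
            break
    return L, S
```

On a synthetic superposition of a rank-2 matrix and 5% gross corruptions, this iteration typically separates the two components to high accuracy.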
Restricted strong convexity and (weighted) matrix completion: Optimal bounds with noise
, 2010
Abstract

Cited by 34 (6 self)
We consider the matrix completion problem under a form of row/column weighted entrywise sampling, including the case of uniform entrywise sampling as a special case. We analyze the associated random observation operator, and prove that with high probability, it satisfies a form of restricted strong convexity with respect to the weighted Frobenius norm. Using this property, we obtain as corollaries a number of error bounds on matrix completion in the weighted Frobenius norm under noisy sampling and for both exact and near low-rank matrices. Our results are based on measures of the “spikiness” and “low-rankness” of matrices that are less restrictive than the incoherence conditions imposed in previous work. Our technique involves an M-estimator that includes controls on both the rank and spikiness of the solution, and we establish non-asymptotic error bounds in the weighted Frobenius norm for recovering matrices lying within ℓq-“balls” of bounded spikiness. Using information-theoretic methods, we show that no algorithm can achieve better estimates (up to a logarithmic factor) over these same sets, showing that our conditions on matrices and the associated rates are essentially optimal.
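The estimator analyzed in this paper controls both rank and spikiness; as a simplified stand-in, plain nuclear-norm-penalized completion can be run by iterative singular value thresholding (a SoftImpute-style scheme; the fixed threshold and iteration count below are illustrative, not the authors' M-estimator):

```python
import numpy as np

def soft_impute(M_obs, mask, tau=0.5, n_iter=300):
    """Fill unobserved entries of an (approximately) low-rank matrix:
    alternately impute missing entries from the current estimate and
    soft-threshold the singular values (nuclear-norm proximal step)."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iter):
        # keep observed entries, use the current estimate elsewhere
        filled = np.where(mask, M_obs, X)
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
    return X
```

With uniform entrywise sampling of a non-spiky rank-2 matrix at a 70% observation rate, the missing entries are recovered up to the small bias introduced by the singular value shrinkage.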
One-bit compressed sensing by linear programming. Preprint. Available at http://arxiv.org/abs/1109.4299
Abstract

Cited by 21 (3 self)
We give the first computationally tractable and almost optimal solution to the problem of one-bit compressed sensing, showing how to accurately recover an s-sparse vector x ∈ Rⁿ from the signs of O(s log²(n/s)) random linear measurements of x. The recovery is achieved by a simple linear program. This result extends to approximately sparse vectors x. Our result is universal in the sense that with high probability, one measurement scheme will successfully recover all sparse vectors simultaneously. The argument is based on solving an equivalent geometric problem on random hyperplane tessellations.
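The linear program described here — minimize ‖x‖₁ subject to sign consistency yᵢ⟨aᵢ, x⟩ ≥ 0 and the normalization Σᵢ yᵢ⟨aᵢ, x⟩ = m — can be sketched with scipy by splitting x into positive and negative parts. Dimensions and the seed are chosen only for illustration:

```python
import numpy as np
from scipy.optimize import linprog

def onebit_lp(A, y):
    """Recover the direction of a sparse x from y = sign(Ax):
    min ||x||_1  s.t.  y_i <a_i, x> >= 0,  sum_i y_i <a_i, x> = m.
    Variables are split x = u - v with u, v >= 0 to obtain an LP."""
    m, n = A.shape
    Ya = y[:, None] * A                       # rows y_i * a_i
    c = np.ones(2 * n)                        # ||x||_1 = 1'(u + v)
    A_ub = np.hstack([-Ya, Ya])               # -y_i <a_i, x> <= 0
    b_ub = np.zeros(m)
    A_eq = np.hstack([Ya.sum(0), -Ya.sum(0)])[None, :]
    b_eq = np.array([float(m)])               # normalization constraint
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    x = res.x[:n] - res.x[n:]
    return x / np.linalg.norm(x)              # only the direction is identifiable
```

Since one-bit measurements destroy all magnitude information, recovery is judged by the angle between the estimate and the true unit-norm signal.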
Pseudorandom Functions and Lattices
, 2011
Abstract

Cited by 13 (3 self)
We give direct constructions of pseudorandom function (PRF) families based on conjectured hard lattice problems and learning problems. Our constructions are asymptotically efficient and highly parallelizable in a practical sense, i.e., they can be computed by simple, relatively small low-depth arithmetic or boolean circuits (e.g., in NC¹ or even TC⁰). In addition, they are the first low-depth PRFs that have no known attack by efficient quantum algorithms. Central to our results is a new “derandomization” technique for the learning with errors (LWE) problem which, in effect, generates the error terms deterministically.
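The “derandomization” idea — generating LWE's error terms deterministically — is the learning-with-rounding mechanism: instead of adding random noise to ⟨a, s⟩ mod q, one deterministically rounds it to a smaller modulus p, so the “noise” is the rounding error itself. A toy numpy illustration, with parameters far too small for any security:

```python
import numpy as np

def lwr_sample(a, s, q=2**16, p=2**8):
    """Learning-with-rounding sample: round (p/q) * (<a, s> mod q)
    to the nearest integer mod p. The error is the deterministic
    rounding error, so repeated queries give identical answers."""
    inner = int(np.dot(a, s)) % q
    return int(round(p * inner / q)) % p

rng = np.random.default_rng(0)
n = 64
s = rng.integers(0, 2**16, n)    # secret
a = rng.integers(0, 2**16, n)    # public vector
b1 = lwr_sample(a, s)
b2 = lwr_sample(a, s)            # same input -> same output, by design
```

Determinism is exactly what a PRF needs: unlike a fresh random LWE error, the rounding error is reproducible on every evaluation.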
Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach. Preprint. Available at http://arxiv.org/abs/1202.1212
Abstract

Cited by 11 (2 self)
This paper develops theoretical results regarding noisy 1-bit compressed sensing and sparse binomial regression. We demonstrate that a single convex program gives an accurate estimate of the signal, or coefficient vector, for both of these models. We show that an s-sparse signal in Rⁿ can be accurately estimated from m = O(s log(n/s)) single-bit measurements using a simple convex program. This remains true even if each measurement bit is flipped with probability nearly 1/2. Worst-case (adversarial) noise can also be accounted for, and uniform results that hold for all sparse inputs are derived as well. In the terminology of sparse logistic regression, we show that O(s log(2n/s)) Bernoulli trials are sufficient to estimate a coefficient vector in Rⁿ which is approximately s-sparse. Moreover, the same convex program works for virtually all generalized linear models, in which the link function may be unknown. To our knowledge, these are the first results that tie the theory of sparse logistic regression to 1-bit compressed sensing. Our results apply to general signal structures aside from sparsity; one only needs to know the size of the set K where signals reside. The size is given by the mean width of K, a computable quantity whose square serves as a robust extension of the dimension.
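The convex program in this line of work maximizes the one-bit correlation Σᵢ yᵢ⟨aᵢ, x⟩ over a convex set K. The sketch below is a deliberate simplification, not the paper's ℓ1-constrained program: it uses the fact that for Gaussian aᵢ, E[yᵢaᵢ] is proportional to x/‖x‖, so back-projecting and keeping the s largest entries already recovers the sparse direction when m is large:

```python
import numpy as np

def onebit_backprojection(A, y, s):
    """Estimate an s-sparse direction from one-bit data y = sign(Ax):
    back-project A^T y / m, keep the s largest-magnitude entries,
    and renormalize. A simplified proxy for the convex estimator."""
    g = A.T @ y / len(y)
    xhat = np.zeros_like(g)
    top = np.argsort(np.abs(g))[-s:]          # indices of the s largest entries
    xhat[top] = g[top]
    return xhat / np.linalg.norm(xhat)
```

With enough measurements, the support of the signal is identified exactly and the estimated direction aligns closely with the truth.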
Suprema of chaos processes and the restricted isometry property
 Comm. Pure Appl. Math
Abstract

Cited by 7 (2 self)
We present a new bound for suprema of a special type of chaos processes indexed by a set of matrices, which is based on a chaining method. As applications we show significantly improved estimates for the restricted isometry constants of partial random circulant matrices and time-frequency structured random matrices. In both cases the required condition on the number m of rows in terms of the sparsity s and the vector length n is m ≳ s log²s · log²n. Key words: compressive sensing, restricted isometry property, structured random matrices, chaos processes, γ₂-functionals, generic chaining, partial random circulant matrices, random Gabor synthesis matrices.
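A partial random circulant matrix — the object whose restricted isometry constants are bounded here — can be applied with FFTs rather than formed explicitly. The check below verifies approximate norm preservation for a single sparse vector, which is far weaker than the RIP but shows the construction; all sizes are illustrative:

```python
import numpy as np

def partial_circulant(eps, rows):
    """Operator x -> (rows of the circulant generated by eps) x / sqrt(m).
    Circulant-times-vector is circular convolution, done via the FFT."""
    f_eps = np.fft.fft(eps)
    m = len(rows)
    def apply(x):
        full = np.fft.ifft(f_eps * np.fft.fft(x)).real   # full circulant product
        return full[rows] / np.sqrt(m)                   # subsample and rescale
    return apply

rng = np.random.default_rng(3)
n, m, s = 256, 100, 5
eps = rng.choice([-1.0, 1.0], n)             # Rademacher generator
rows = rng.choice(n, m, replace=False)       # random row subset
A = partial_circulant(eps, rows)

x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
ratio = np.linalg.norm(A(x)) / np.linalg.norm(x)
```

Since each row of the circulant has the same Rademacher entries shifted, E‖Ax‖² = ‖x‖², and for a fixed sparse x the ratio concentrates near 1.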
Concentration of Measure for Block Diagonal Matrices with Applications to Compressive Sensing
, 2010
Abstract

Cited by 7 (4 self)
Theoretical analysis of randomized, compressive operators often depends on a concentration of measure inequality for the operator in question. Typically, such inequalities quantify the likelihood that a random matrix will preserve the norm of a signal after multiplication. When this likelihood is very high for any signal, the random matrices have a variety of known uses in dimensionality reduction and Compressive Sensing. Concentration of measure results are well-established for unstructured compressive matrices, populated with independent and identically distributed (i.i.d.) random entries. Many real-world acquisition systems, however, are subject to architectural constraints that make such matrices impractical. In this paper we derive concentration of measure bounds for two types of block diagonal compressive matrices: one in which the blocks along the main diagonal are random and independent, and one in which the blocks are random but equal. For both types of matrices, we show that the likelihood of norm preservation depends on certain properties of the signal being measured, but that for the best-case signals, both types of block diagonal matrices can offer concentration performance on par with their unstructured, i.i.d. counterparts. We support our theoretical results with illustrative simulations as well as (analytical and empirical) investigations of several signal classes that are highly amenable to measurement using block diagonal matrices. Finally, we discuss applications of these results in establishing performance guarantees for solving signal processing tasks in the compressed domain (e.g., signal detection), and in establishing the Restricted Isometry Property for the Toeplitz matrices that arise in compressive channel sensing.
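A small simulation, with illustrative parameters, shows the norm-preservation behavior of the independent-blocks ensemble for a signal whose energy is spread evenly across blocks — the kind of "best case" signal the abstract refers to:

```python
import numpy as np

rng = np.random.default_rng(4)
J, mb, nb = 16, 8, 32                        # 16 independent blocks of size 8 x 32
# Entries N(0, 1/mb) so that E ||A_j x_j||^2 = ||x_j||^2 for each block.
blocks = [rng.standard_normal((mb, nb)) / np.sqrt(mb) for _ in range(J)]

def apply_blockdiag(blocks, x):
    """Apply diag(A_1, ..., A_J) to x without forming the full matrix."""
    nb = blocks[0].shape[1]
    parts = [B @ x[j * nb:(j + 1) * nb] for j, B in enumerate(blocks)]
    return np.concatenate(parts)

# Signal with (on average) equal energy in every block: the favorable case.
x = rng.standard_normal(J * nb)
x /= np.linalg.norm(x)
ratio = np.linalg.norm(apply_blockdiag(blocks, x))
```

With the energy spread over all J·mb = 128 output coordinates, the squared norm behaves like an average of many independent terms and the ratio concentrates near 1; a signal confined to a single block would see much weaker concentration, which is the structural effect the paper quantifies.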
Trapdoors for lattices: Simpler, tighter, faster, smaller
 In EUROCRYPT
, 2012
Abstract

Cited by 5 (3 self)
We give new methods for generating and using “strong trapdoors” in cryptographic lattices, which are simultaneously simple, efficient, easy to implement (even in parallel), and asymptotically optimal with very small hidden constants. Our methods involve a new kind of trapdoor, and include specialized algorithms for inverting LWE, randomly sampling SIS preimages, and securely delegating trapdoors. These tasks were previously the main bottleneck for a wide range of cryptographic schemes, and our techniques substantially improve upon the prior ones, both in terms of practical performance and quality of the produced outputs. Moreover, the simple structure of the new trapdoor and associated algorithms can be exposed in applications, leading to further simplifications and efficiency improvements. We exemplify the applicability of our methods with new digital signature schemes and CCA-secure encryption schemes, which have better efficiency and security than the previously known lattice-based constructions.
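Central to this style of trapdoor is the gadget vector g = (1, 2, 4, …, 2^(k−1)), whose preimages can be computed by plain bit decomposition; that easy invertibility is what makes inverting LWE and sampling preimages tractable relative to the gadget. A toy illustration of the ⟨g, g⁻¹(u)⟩ = u identity, with parameters far too small for real use:

```python
import numpy as np

q = 2**8                                     # toy modulus
k = 8
g = 2 ** np.arange(k)                        # gadget vector (1, 2, 4, ..., 2^(k-1))

def g_inverse(u):
    """Bit-decompose u in [0, q): a short 0/1 preimage with <g, bits> = u."""
    return np.array([(u >> i) & 1 for i in range(k)])

u = 173
bits = g_inverse(u)                          # short preimage of u under g
recovered = int(g @ bits) % q                # <g, g^{-1}(u)> = u mod q
```

The point is that preimages under g are both easy to compute and short (entries in {0, 1}), which is exactly the property a strong trapdoor must transfer to the hard public matrix.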
The computational magic of the ventral stream: sketch of a theory (and why some deep architectures work).
, 2012
Robust Subspace Clustering
, 2013
Abstract

Cited by 5 (0 self)
Subspace clustering refers to the task of finding a multi-subspace representation that best fits a collection of points taken from a high-dimensional space. This paper introduces an algorithm inspired by sparse subspace clustering (SSC) [17] to cluster noisy data, and develops some novel theory demonstrating its correctness. In particular, the theory uses ideas from geometric functional analysis to show that the algorithm can accurately recover the underlying subspaces under minimal requirements on their orientation and on the number of samples per subspace. Synthetic as well as real data experiments complement our theoretical study, illustrating our approach and demonstrating its effectiveness.
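The self-expressiveness step at the heart of SSC writes each data point as a sparse combination of the other points; points then connect mostly to others on the same subspace. The sketch below uses a plain ISTA lasso solver and builds only the affinity matrix (the full pipeline, and this paper's noisy variant, add spectral clustering and data-driven regularization; the λ and iteration counts here are illustrative):

```python
import numpy as np

def ista_lasso(A, b, lam, n_iter=500):
    """Minimize 0.5 * ||A c - b||^2 + lam * ||c||_1 by iterative
    soft thresholding (a basic proximal gradient method)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ c - b)
        c = c - step * grad
        c = np.sign(c) * np.maximum(np.abs(c) - lam * step, 0.0)
    return c

def ssc_affinity(X, lam=0.05):
    """Self-expressive coefficients: represent each column of X using
    the remaining columns, then symmetrize into an affinity matrix."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        C[others, i] = ista_lasso(X[:, others], X[:, i], lam)
    return np.abs(C) + np.abs(C).T
```

On points drawn from two random 2-dimensional subspaces of R¹⁰, the affinity mass concentrates inside the two same-subspace blocks, which is what makes the subsequent spectral clustering succeed.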