Results 1–10 of 16
Exact Matrix Completion via Convex Optimization
, 2008
Abstract

Cited by 320 (19 self)
We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m ≥ C n^1.2 r log n for some positive numerical constant C, then with very high probability, most n × n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the exponent 1.2 with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.
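The recovery described in this abstract can be seen numerically on a toy instance. The sketch below, which assumes NumPy, recovers a small low-rank matrix from roughly half its entries; for simplicity it uses a hard-impute heuristic (iteratively refilling the missing entries with a truncated SVD) rather than the paper's convex nuclear-norm program, and the sizes, sampling rate, and iteration count are illustrative assumptions, not the paper's constants.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 20, 1                       # small n x n matrix of rank r
M = rng.uniform(0.5, 1.5, (n, r)) @ rng.uniform(0.5, 1.5, (r, n))

mask = rng.random((n, n)) < 0.5    # observe about half the entries uniformly at random

X = np.where(mask, M, 0.0)         # unobserved positions start at zero
for _ in range(300):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    low_rank = (U[:, :r] * s[:r]) @ Vt[:r]    # best rank-r approximation of X
    X = np.where(mask, M, low_rank)           # keep observed entries, refill the rest

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(rel_err)                     # small relative error: the missing entries are recovered
```

The heuristic assumes the rank r is known, which the convex program does not; it is used here only because it fits in a few lines of linear algebra.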
Matrix Completion with Noise
Abstract

Cited by 74 (4 self)
On the heels of compressed sensing, a remarkable new field has very recently emerged. This field addresses a broad range of problems of significant practical interest, namely, the recovery of a data matrix from what appears to be incomplete, and perhaps even corrupted, information. In its simplest form, the problem is to recover a matrix from a small sample of its entries, and comes up in many areas of science and engineering including collaborative filtering, machine learning, control, remote sensing, and computer vision to name a few. This paper surveys the novel literature on matrix completion, which shows that under some suitable conditions, one can recover an unknown low-rank matrix from a nearly minimal set of entries by solving a simple convex optimization problem, namely, nuclear-norm minimization subject to data constraints. Further, this paper introduces novel results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise. A typical result is that one can recover an unknown n × n matrix of low rank r from just about nr log² n noisy samples with an error which is proportional to the noise level. We present numerical results which complement our quantitative analysis and show that, in practice, nuclear norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples. Some analogies between matrix completion and compressed sensing are discussed throughout.
A simpler approach to matrix completion
 the Journal of Machine Learning Research
Abstract

Cited by 58 (3 self)
This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low-rank matrix. These results improve on prior work by Candès and Recht [4], Candès and Tao [7], and Keshavan, Montanari, and Oh [18]. The reconstruction is accomplished by minimizing the nuclear norm, or sum of the singular values, of the hidden matrix subject to agreement with the provided entries. If the underlying matrix satisfies a certain incoherence condition, then the number of entries required is equal to a quadratic logarithmic factor times the number of parameters in the singular value decomposition. The proof of this assertion is short, self-contained, and uses very elementary analysis. The novel techniques herein are based on recent work in quantum information theory.
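The bound stated in this abstract admits a quick back-of-the-envelope computation. A rank-r SVD of an n × n matrix has r(2n − r) free parameters, so the required sample count scales like that parameter count times a quadratic logarithmic factor; the constant is omitted below, and the particular n and r are illustrative choices, not values from the paper.

```python
from math import ceil, log

def svd_parameters(n: int, r: int) -> int:
    # degrees of freedom in a rank-r SVD of an n x n matrix
    return r * (2 * n - r)

def sample_estimate(n: int, r: int) -> int:
    # parameters times a quadratic logarithmic factor (constant omitted)
    return ceil(svd_parameters(n, r) * log(n) ** 2)

n, r = 10_000, 10
print(svd_parameters(n, r))    # 199900 parameters in the SVD
print(sample_estimate(n, r))   # a small fraction of the n*n = 100 million entries
```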
Solving Degenerate Sparse Polynomial Systems Faster
 Journal of Symbolic Computation
, 1999
Abstract

Cited by 23 (3 self)
This paper is dedicated to my son, Victor Lorenzo.
A Zero-Test and an Interpolation Algorithm for the Shifted Sparse Polynomials
Abstract

Cited by 14 (4 self)
Recall that a polynomial f ∈ F[X1, ..., Xn] is t-sparse if f = Σ_I α_I X^I contains at most t terms. In [BT 88], [GKS 90] (see also [GK 87] and [Ka 89]) the problem of interpolating a t-sparse polynomial given by a black-box for its evaluation has been solved. In this paper we shall assume that F is a field of characteristic zero. One can consider a t-sparse polynomial as a polynomial represented by a straight-line program or an arithmetic circuit of depth 2 where on the first level there are multiplications with unbounded fan-in and on the second level there is an addition with fan-in t. In the present paper we consider a generalization of the notion of sparsity, namely we say that a polynomial g(X1, ..., Xn) ∈ F[X1, ..., Xn] is shifted t-sparse if for a suitable nonsingular n × n matrix A and a vector B the polynomial g(A(X1, ..., Xn)^T + B) is t-sparse. One could consider g as being represented by a straight-line program of depth 3 w...
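The notion of shifted sparsity can be seen in a one-variable toy case (a hypothetical example with n = 1 and the matrix A taken to be the identity, so the shift is just X ↦ X + B): g(X) = (X + 1)^5 has six terms, yet g(X − 1) = X^5 has only one, so g is shifted 1-sparse.

```python
from math import comb

def shift(coeffs, c):
    """Coefficients of p(X + c), given the coefficients of p (constant term first)."""
    d = len(coeffs) - 1
    return [sum(coeffs[k] * comb(k, j) * c ** (k - j) for k in range(j, d + 1))
            for j in range(d + 1)]

def terms(coeffs):
    # number of nonzero terms, i.e. the sparsity of the polynomial
    return sum(1 for a in coeffs if a != 0)

g = [comb(5, k) for k in range(6)]   # (X + 1)^5 = 1 + 5X + 10X^2 + ... + X^5
print(terms(g))                      # 6 terms: g itself is only 6-sparse
print(terms(shift(g, -1)))           # 1 term: g(X - 1) = X^5, so g is shifted 1-sparse
```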
Extremal real algebraic geometry and A-discriminants
 Moscow Mathematical Journal
, 2007
Abstract

Cited by 11 (5 self)
We present a new, far simpler family of counterexamples to Kushnirenko’s Conjecture. Along the way, we illustrate a computer-assisted approach to finding sparse polynomial systems with maximally many real roots, thus shedding light on the nature of optimal upper bounds in real fewnomial theory. We use a powerful recent formula for the A-discriminant, and give new bounds on the topology of certain A-discriminant varieties. A consequence of the latter result is a new upper bound on the number of topological types of certain real algebraic sets defined by sparse polynomial equations.
Concentration-based guarantees for low-rank matrix reconstruction
 24th Annual Conference on Learning Theory (COLT)
, 2011
Abstract

Cited by 6 (2 self)
We consider the problem of approximately reconstructing a partially-observed, approximately low-rank matrix. This problem has received much attention lately, mostly using the trace-norm as a surrogate to the rank. Here we study low-rank matrix reconstruction using both the trace-norm, as well as the less-studied max-norm, and present reconstruction guarantees based on existing analysis on the Rademacher complexity of the unit balls of these norms. We show how these are superior in several ways to recently published guarantees based on specialized analysis.
Bounds on Numbers of Vectors of Multiplicities for Polynomials which are Easy to Compute
 ISSAC
, 2000
Abstract

Cited by 5 (1 self)
Let F be an algebraically closed field of characteristic zero, let a polynomial φ ∈ F[X1, ..., Xn] have multiplicative complexity r, and let f1, ..., fk ∈ F[X1, ..., Xn] be polynomials of degrees not exceeding d, such that φ = f1 = ··· = fk = 0 has a finite number of roots. We show that the number of possible distinct vectors of multiplicities of these roots is small when r, d and k are small. As technical tools we design algorithms which produce Gröbner bases and vectors of multiplicities of the roots for a parametric zero-dimensional system. The complexities of these algorithms are singly exponential. We also describe an algorithm for parametric absolute factorization of multivariate polynomials. This algorithm has subexponential complexity in the case of a small (relative to the number of variables) degree of the polynomials.
Toric Generalized Characteristic Polynomials
, 1997
Abstract

Cited by 1 (0 self)
We illustrate an efficient new method for handling polynomial systems with degenerate solution sets. In particular, a corollary of our techniques is a new algorithm to find an isolated point in every excess component of the zero set (over an algebraically closed field) of any n by n system of polynomial equations. Since we use the sparse resultant, we thus obtain complexity bounds (for converting any input polynomial system into a multilinear factorization problem) which are close to cubic in the degree of the underlying variety, significantly better than previous bounds which were pseudo-polynomial in the classical Bézout bound. By carefully taking into account the underlying toric geometry, we are also able to improve the reliability of certain sparse-resultant-based algorithms for polynomial system solving. 1. Introduction The rebirth of resultants, especially through the toric resultant [GKZ94], has begun to provide a much needed alternative to Gröbner basis methods for ...