Results 1-10 of 20
The Power of Convex Relaxation: Near-Optimal Matrix Completion
, 2009
Abstract

Cited by 131 (5 self)
This paper is concerned with the problem of recovering an unknown matrix from a small fraction of its entries. This is known as the matrix completion problem, and comes up in a great number of applications, including the famous Netflix Prize and other similar questions in collaborative filtering. In general, accurate recovery of a matrix from a small number of entries is impossible; but the knowledge that the unknown matrix has low rank radically changes this premise, making the search for solutions meaningful. This paper presents optimality results quantifying the minimum number of entries needed to recover a matrix of rank r exactly by any method whatsoever (the information theoretic limit). More importantly, the paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors). This convex program simply finds, among all matrices consistent with the observed entries, that with minimum nuclear norm. As an example, we show that on the order of nr log(n) samples are needed to recover a random n × n matrix of rank r by any method, and to be sure, nuclear norm minimization succeeds as soon as the number of entries is of the form nr polylog(n).
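The nuclear-norm heuristic summarized in this abstract can be sketched with a simple "impute and shrink" iteration (in the style of Soft-Impute): alternately fill in the observed entries and soft-threshold the singular values. This is a stand-in for the exact convex program, not the paper's algorithm; the data, the threshold `tau`, and all names below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 30, 2
# Random rank-r ground-truth matrix
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
# Observe a random half of the entries
mask = rng.random((n, n)) < 0.5

def complete(M, mask, tau=5.0, iters=300):
    """Iterative singular-value soft-thresholding: a simple proxy
    for nuclear-norm minimization subject to the observed entries."""
    X = np.zeros_like(M)
    for _ in range(iters):
        X[mask] = M[mask]                       # enforce data consistency
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # shrink toward low rank
    X[mask] = M[mask]
    return X

X = complete(M, mask)
err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

With roughly half the entries observed and rank 2, the relative error `err` should be small, illustrating the abstract's claim that low rank makes recovery from a fraction of entries feasible.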
Matrix Completion with Noise
Abstract

Cited by 74 (4 self)
On the heels of compressed sensing, a remarkable new field has very recently emerged. This field addresses a broad range of problems of significant practical interest, namely, the recovery of a data matrix from what appears to be incomplete, and perhaps even corrupted, information. In its simplest form, the problem is to recover a matrix from a small sample of its entries, and comes up in many areas of science and engineering including collaborative filtering, machine learning, control, remote sensing, and computer vision to name a few. This paper surveys the novel literature on matrix completion, which shows that under some suitable conditions, one can recover an unknown low-rank matrix from a nearly minimal set of entries by solving a simple convex optimization problem, namely, nuclear-norm minimization subject to data constraints. Further, this paper introduces novel results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise. A typical result is that one can recover an unknown n × n matrix of low rank r from just about nr log^2(n) noisy samples with an error which is proportional to the noise level. We present numerical results which complement our quantitative analysis and show that, in practice, nuclear-norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples. Some analogies between matrix completion and compressed sensing are discussed throughout.
ANGULAR SYNCHRONIZATION BY EIGENVECTORS AND SEMIDEFINITE PROGRAMMING: ANALYSIS AND APPLICATION TO CLASS AVERAGING IN CRYO-ELECTRON MICROSCOPY
Abstract

Cited by 20 (14 self)
Abstract. The angular synchronization problem is to obtain an accurate estimation (up to a constant additive phase) for a set of unknown angles θ1,..., θn from m noisy measurements of their offsets θi − θj mod 2π. Of particular interest is angle recovery in the presence of many outlier measurements that are uniformly distributed in [0, 2π) and carry no information on the true offsets. We introduce an efficient recovery algorithm for the unknown angles from the top eigenvector of a specially designed Hermitian matrix. The eigenvector method is extremely stable and succeeds even when the number of outliers is exceedingly large. For example, we successfully estimate n = 400 angles from a full set of m = (400 choose 2) offset measurements, of which 90% are outliers, in less than a second on a commercial laptop. We use random matrix theory to prove that the eigenvector method gives ...
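The eigenvector method described in this abstract admits a compact numpy sketch: build the Hermitian matrix H with H_ij = exp(i(θi − θj)) on good measurements and random phases on outliers, then read angle estimates off the top eigenvector. The problem size, outlier rate, and alignment step below are illustrative choices, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
theta = rng.uniform(0, 2 * np.pi, n)          # ground-truth angles

# Hermitian measurement matrix: H_ij = exp(i (theta_i - theta_j))
H = np.exp(1j * (theta[:, None] - theta[None, :]))

# Replace a fraction of the off-diagonal offsets with uniform outliers
p_out = 0.3
out = np.triu(rng.random((n, n)) < p_out, 1)
H[out] = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n))[out])
H = np.triu(H, 1)
H = H + H.conj().T + np.eye(n)                # re-symmetrize (Hermitian)

# Angle estimates come from the phases of the top eigenvector,
# defined only up to a global rotation
w, V = np.linalg.eigh(H)
est = np.angle(V[:, -1])

# Remove the global phase before measuring the error
diff = np.exp(1j * (est - theta))
global_phase = np.mean(diff) / abs(np.mean(diff))
err = np.mean(np.abs(np.angle(diff / global_phase)))
```

Even with 30% of the offsets replaced by pure noise, the mean angular error stays small, which is the stability phenomenon the abstract claims.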
Sensor network localization by eigenvector synchronization over the Euclidean group
 In press
Abstract

Cited by 12 (7 self)
We present a new approach to localization of sensors from noisy measurements of a subset of their Euclidean distances. Our algorithm starts by finding, embedding and aligning uniquely realizable subsets of neighboring sensors called patches. In the noise-free case, each patch agrees with its global positioning up to an unknown rigid motion of translation, rotation and possibly reflection. The reflections and rotations are estimated using the recently developed eigenvector synchronization algorithm, while the translations are estimated by solving an overdetermined linear system. The algorithm is scalable as the number of nodes increases, and can be implemented in a distributed fashion. Extensive numerical experiments show that it compares favorably to other existing algorithms in terms of robustness to noise, sparse connectivity and running time. While our approach is applicable to higher dimensions, in the current paper we focus on the two-dimensional case.
Distributed sensor network localization from local connectivity: performance analysis for the Hop-Terrain algorithm
 in SIGMETRICS’10: Proceedings of the 2010 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems
, 2010
Abstract

Cited by 11 (6 self)
Sensor localization from only connectivity information is a highly challenging problem. To this end, our result for the first time establishes an analytic bound on the performance of the popular MDS-MAP algorithm based on multidimensional scaling. For a network consisting of n sensors positioned randomly on a unit square and a given radio range r = o(1), we show that the resulting error is bounded, decreasing at a rate that is inversely proportional to r, when only connectivity information is given. The same bound holds for the range-based model, when we have approximate measurements of the distances, and the same algorithm can be applied without any modification.
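The core step of MDS-MAP is classical multidimensional scaling: double-center the squared-distance matrix and read coordinates off its top eigenvectors. The sketch below uses exact pairwise distances rather than the hop-count proxies the abstract analyzes; the point set and sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 2
X = rng.random((n, d))                                   # sensors in the unit square
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)      # squared pairwise distances

# Classical MDS: double-center D2, then embed with the top-d eigenvectors
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J                                    # Gram matrix of centered points
w, V = np.linalg.eigh(B)                                 # ascending eigenvalues
Y = V[:, -d:] * np.sqrt(np.maximum(w[-d:], 0.0))         # recovered coordinates

# Y matches X only up to a rigid transform, so compare pairwise distances
D2_hat = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
err = np.abs(D2_hat - D2).max()
```

With exact distances the reconstruction is essentially perfect; MDS-MAP's error, bounded in the paper, comes from substituting shortest-path hop distances for the entries of `D2`.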
An As-Rigid-As-Possible Approach to Sensor Network Localization
Abstract

Cited by 10 (3 self)
We present a novel approach to localization of sensors in a network given a subset of noisy inter-sensor distances. The algorithm is based on “stitching” together local structures by solving an optimization problem requiring the structures to fit together in an “As-Rigid-As-Possible” manner, hence the name ARAP. The local structures consist of reference “patches” and reference triangles, both obtained from inter-sensor distances. We elaborate on the relationship between the ARAP algorithm and other state-of-the-art algorithms, and provide experimental results demonstrating that ARAP is significantly less sensitive to sparse connectivity and measurement noise. We also show how ARAP may be distributed.
Sensor Map Discovery for Developing Robots
Abstract

Cited by 8 (1 self)
Modern mobile robots navigate uncertain environments using complex compositions of camera, laser, and sonar sensor data. Manual calibration of these sensors is a tedious process that involves determining sensor behavior, geometry and location through model specification and system identification. Instead, we seek to automate the construction of sensor model geometry by mining uninterpreted sensor streams for regularities. Manifold learning methods are powerful techniques for deriving sensor structure from streams of sensor data. In recent years, the proliferation of manifold learning algorithms has led to a variety of choices for autonomously generating models of sensor geometry. We present a series of comparisons between different manifold learning methods for discovering sensor geometry for the specific case of a mobile robot with a variety of sensors. We also explore the effect of control laws and sensor boundary size on the efficacy of manifold learning approaches. We find that “motor babbling” control laws generate better geometric sensor maps than midline or wall following control laws and identify a novel method for distinguishing boundary sensor elements. We also present a new learning method, sensorimotor embedding, that takes advantage of the controllable nature of robots to build sensor maps.
Uniqueness of low-rank matrix completion by rigidity theory
, 2009
Abstract

Cited by 6 (1 self)
Abstract. The problem of completing a low-rank matrix from a subset of its entries is often encountered in the analysis of incomplete data sets exhibiting an underlying factor model with applications in collaborative filtering, computer vision and control. Most recent work has focused on constructing efficient algorithms for exact or approximate recovery of the missing matrix entries and proving lower bounds for the number of known entries that guarantee a successful recovery with high probability. A related problem from both the mathematical and algorithmic point of view is the distance geometry problem of realizing points in a Euclidean space from a given subset of their pairwise distances. Rigidity theory answers basic questions regarding the uniqueness of the realization satisfying a given partial set of distances. We observe that basic ideas and tools of rigidity theory can be adapted to determine uniqueness of low-rank matrix completion, where inner products play the role that distances play in rigidity theory. This observation leads to an efficient randomized algorithm for testing both local and global unique completion. Crucial to our analysis is a new matrix, which we call the completion matrix, that serves as the analogue of the rigidity matrix. Key words. Low-rank matrices, missing values, rigidity theory, rigid graphs, iterative methods.
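The completion-matrix idea can be illustrated with a small randomized rank test. Writing the matrix as U Vᵀ, each observed entry imposes the constraint ⟨u_i, v_j⟩ = M_ij; the Jacobian of these constraints at a random factorization is the analogue of the rigidity matrix, and local uniqueness corresponds to its rank reaching (n + m)r − r², the total degrees of freedom minus the r²-dimensional GL(r) gauge. The sizes and construction below are a hypothetical sketch of this test, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def completion_matrix_rank(mask, r, rng):
    """Rank of the Jacobian of the constraints <u_i, v_j> = M_ij
    at a random point (the 'completion matrix' analogue)."""
    n, m = mask.shape
    U = rng.standard_normal((n, r))
    V = rng.standard_normal((m, r))
    rows = []
    for i, j in zip(*np.nonzero(mask)):
        row = np.zeros((n + m) * r)
        row[i * r:(i + 1) * r] = V[j]              # derivative w.r.t. u_i
        row[(n + j) * r:(n + j + 1) * r] = U[i]    # derivative w.r.t. v_j
        rows.append(row)
    return np.linalg.matrix_rank(np.array(rows))

n, m, r = 8, 8, 2
target = (n + m) * r - r * r        # dof minus the GL(r) gauge freedom

# Fully observed: rank reaches the target, so completion is locally unique
full_rank = completion_matrix_rank(np.ones((n, m), dtype=bool), r, rng)
```

With every entry observed the rank generically equals the target (28 here); a mask with fewer than 28 observed entries can never reach it, so such a pattern cannot determine the completion even locally.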
Localization from Incomplete Noisy Distance Measurements
Abstract

Cited by 5 (0 self)
Abstract—We consider the problem of positioning a cloud of points in the Euclidean space R^d, from noisy measurements of a subset of pairwise distances. This task has applications in various areas, such as sensor network localization, NMR spectroscopy of proteins, and molecular conformation. Also, it is closely related to dimensionality reduction problems and manifold learning, where the goal is to learn the underlying global geometry of a data set using measured local (or partial) metric information. Here we propose a reconstruction algorithm based on a semidefinite programming approach. For a random geometric graph model and uniformly bounded noise, we provide a precise characterization of the algorithm’s performance: In the noiseless case, we find a radius r0 beyond which the algorithm reconstructs the exact positions (up to rigid transformations). In the presence of noise, we obtain upper and lower bounds on the reconstruction error that match up to a factor that depends only on the dimension d, and the average degree of the nodes in the graph.
Fast Graph Laplacian Regularized Kernel Learning via Semidefinite–Quadratic–Linear Programming
Abstract

Cited by 4 (0 self)
Kernel learning is a powerful framework for nonlinear data modeling. Using the kernel trick, a number of problems have been formulated as semidefinite programs (SDPs). These include Maximum Variance Unfolding (MVU) (Weinberger et al., 2004) in nonlinear dimensionality reduction, and Pairwise Constraint Propagation (PCP) (Li et al., 2008) in constrained clustering. Although in theory SDPs can be efficiently solved, the high computational complexity incurred in numerically processing the huge linear matrix inequality constraints has rendered the SDP approach unscalable. In this paper, we show that a large class of kernel learning problems can be reformulated as semidefinite-quadratic-linear programs (SQLPs), which only contain a simple positive semidefinite constraint, a second-order cone constraint and a number of linear constraints. These constraints are much easier to process numerically, and the gain in speedup over previous approaches is at least of the order m^2.5, where m is the matrix dimension. Experimental results are also presented to show the superb computational efficiency of our approach.