Results 1–10 of 59
The Power of Convex Relaxation: Near-Optimal Matrix Completion
, 2009
Abstract

Cited by 131 (5 self)
This paper is concerned with the problem of recovering an unknown matrix from a small fraction of its entries. This is known as the matrix completion problem, and comes up in a great number of applications, including the famous Netflix Prize and other similar questions in collaborative filtering. In general, accurate recovery of a matrix from a small number of entries is impossible; but the knowledge that the unknown matrix has low rank radically changes this premise, making the search for solutions meaningful. This paper presents optimality results quantifying the minimum number of entries needed to recover a matrix of rank r exactly by any method whatsoever (the information-theoretic limit). More importantly, the paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information-theoretic limit (up to logarithmic factors). This convex program simply finds, among all matrices consistent with the observed entries, the one with minimum nuclear norm. As an example, we show that on the order of nr log(n) samples are needed to recover a random n × n matrix of rank r by any method, and to be sure, nuclear norm minimization succeeds as soon as the number of entries is of the form nr polylog(n).
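The nuclear-norm program described above can be approximated numerically with the soft-impute heuristic (iterative singular-value soft-thresholding with the observed entries re-imposed). The sketch below is illustrative, not the paper's algorithm; the rank-1 test matrix, 80% sampling rate, and threshold value are invented for demonstration.

```python
import numpy as np

def soft_impute(M_obs, mask, tau=0.1, iters=500):
    """Approximate minimum nuclear-norm completion by iterative
    singular-value soft-thresholding (soft-impute heuristic)."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        X[mask] = M_obs[mask]                      # enforce observed entries
    return X

# Hypothetical example: recover a random rank-1 10x10 matrix
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(10), rng.standard_normal(10))
mask = rng.random((10, 10)) < 0.8                  # observe ~80% of entries
X = soft_impute(np.where(mask, M, 0.0), mask)
rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

With most entries observed and exact rank 1, the missing entries are filled in to small relative error, consistent with the exact-recovery regime the paper characterizes.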
Matrix Completion with Noise
Abstract

Cited by 74 (4 self)
On the heels of compressed sensing, a remarkable new field has very recently emerged. This field addresses a broad range of problems of significant practical interest, namely, the recovery of a data matrix from what appears to be incomplete, and perhaps even corrupted, information. In its simplest form, the problem is to recover a matrix from a small sample of its entries, and comes up in many areas of science and engineering including collaborative filtering, machine learning, control, remote sensing, and computer vision to name a few. This paper surveys the novel literature on matrix completion, which shows that under some suitable conditions, one can recover an unknown low-rank matrix from a nearly minimal set of entries by solving a simple convex optimization problem, namely, nuclear-norm minimization subject to data constraints. Further, this paper introduces novel results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise. A typical result is that one can recover an unknown n × n matrix of low rank r from just about nr log^2(n) noisy samples with an error which is proportional to the noise level. We present numerical results which complement our quantitative analysis and show that, in practice, nuclear norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples. Some analogies between matrix completion and compressed sensing are discussed throughout.
Exploiting sparsity in SDP relaxation for sensor network localization
 SIAM J. Optim
, 2009
Abstract

Cited by 24 (6 self)
Abstract. A sensor network localization problem can be formulated as a quadratic optimization problem (QOP). For quadratic optimization problems, semidefinite programming (SDP) relaxation by Lasserre with relaxation order 1 for general polynomial optimization problems (POPs) is known to be equivalent to the sparse SDP relaxation by Waki et al. with relaxation order 1, except for the size and sparsity of the resulting SDP relaxation problems. We show that the sparse SDP relaxation applied to the QOP is at least as strong as the Biswas-Ye SDP relaxation for the sensor network localization problem. A sparse variant of the Biswas-Ye SDP relaxation, which is equivalent to the original Biswas-Ye SDP relaxation, is also derived. Numerical results are compared with the Biswas-Ye SDP relaxation and the edge-based SDP relaxation by Wang et al. We show that the proposed sparse SDP relaxation is faster than the Biswas-Ye SDP relaxation. In fact, the computational efficiency in solving the resulting SDP problems increases as the number of anchors and/or the radio range grows. The proposed sparse SDP relaxation also provides more accurate solutions than the edge-based SDP relaxation when exact distances are given between sensors and anchors and there are only a small number of anchors. Key words. Sensor network localization problem, polynomial optimization problem, semidefinite relaxation, sparsity
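For reference, the QOP underlying these relaxations minimizes the misfit between given and realized squared distances. The sketch below solves a tiny instance by plain gradient descent, which is a local method, not the SDP relaxation itself; the three anchor positions and the sensor location are made up for illustration.

```python
import numpy as np

# Made-up 2-D instance: three anchors, one sensor, exact distances.
anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
x_true = np.array([0.3, 0.4])
d2 = ((x_true - anchors) ** 2).sum(axis=1)      # squared distances (the data)

# QOP objective: sum_k (||x - a_k||^2 - d_k^2)^2, minimized here by
# gradient descent from the anchor centroid (a local method only).
x = anchors.mean(axis=0)
for _ in range(5000):
    r = ((x - anchors) ** 2).sum(axis=1) - d2   # per-anchor residuals
    grad = 4.0 * (r[:, None] * (x - anchors)).sum(axis=0)
    x -= 0.05 * grad
err = np.linalg.norm(x - x_true)
```

With three non-collinear anchors and exact distances the minimizer is unique, so the descent recovers the true position; the SDP relaxations above exist precisely to avoid the local minima this approach can hit on larger, noisier instances.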
Further relaxations of the SDP approach to sensor network localization
, 2006
Abstract

Cited by 23 (0 self)
Recently, a semidefinite programming (SDP) relaxation approach has been proposed to solve the sensor network localization problem. Although it achieves high accuracy in estimating sensors' locations, the speed of the SDP approach is not satisfactory for practical applications. In this paper we propose methods to further relax the SDP relaxation; more precisely, to relax the single semidefinite matrix cone into a set of small-size semidefinite matrix cones, which we call the smaller SDP (SSDP) approach. We present two such relaxations; although weaker than the original SDP relaxation, they retain the key theoretical property and prove to be both efficient and accurate in computation. The SSDP is even faster than other, further-weakened approaches. The SSDP approach may also pave the way to efficiently solving general SDP relaxations without sacrificing their solution quality.
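The SSDP idea trades the single large PSD constraint for PSD constraints on small principal submatrices. The toy NumPy check below (the 3x3 matrix is invented) illustrates why the resulting cone is larger, i.e. why the relaxation is weaker: every 2x2 principal submatrix can be PSD while the full matrix is not.

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])

# Every 2x2 principal submatrix of A is PSD ...
for i in range(3):
    for j in range(i + 1, 3):
        sub = A[np.ix_([i, j], [i, j])]
        assert np.linalg.eigvalsh(sub).min() >= -1e-12

# ... yet the full matrix is not PSD: its smallest eigenvalue
# is 1 - sqrt(2) ≈ -0.414.
min_eig = np.linalg.eigvalsh(A).min()
```

So a point satisfying only the small-cone constraints need not satisfy the full SDP constraint, which is exactly the relaxation gap the paper shows is acceptable in practice.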
Sum of squares methods for sensor network localization
, 2006
Abstract

Cited by 21 (2 self)
We formulate the sensor network localization problem as finding the global minimizer of a quartic polynomial. Then sum of squares (SOS) relaxations can be applied to solve it. However, the general SOS relaxations are too expensive to implement for large problems. Exploiting the special features of this polynomial, we propose a new structured SOS relaxation, and discuss its various properties. When distances are given exactly, this SOS relaxation often returns the true sensor locations. At each step of interior point methods solving this SOS relaxation, the complexity is O(n^3), where n is the number of sensors. When the distances have small perturbations, we show that the sensor locations given by this SOS relaxation are accurate within a constant factor of the perturbation error under some technical assumptions. The performance of this SOS relaxation is tested on some randomly generated problems.
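The quartic polynomial in question is f(X) = Σ (‖x_i − x_j‖² − d_ij²)², summed over the measured pairs. A small NumPy evaluation (positions and edges invented for illustration) confirms that the true configuration is a global minimizer with value zero, while any distance-changing perturbation is strictly positive:

```python
import numpy as np

# Invented instance: true sensor positions and the measured edges.
X_true = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])
edges = [(0, 1), (0, 2), (1, 2)]
d2 = {(i, j): ((X_true[i] - X_true[j]) ** 2).sum() for i, j in edges}

def quartic(X):
    """f(X) = sum over measured pairs of (||x_i - x_j||^2 - d_ij^2)^2."""
    return sum((((X[i] - X[j]) ** 2).sum() - d2[(i, j)]) ** 2
               for i, j in edges)

f_true = quartic(X_true)        # zero at the true configuration
X_pert = X_true.copy()
X_pert[0] += [0.1, -0.05]       # move one sensor; distances change
f_pert = quartic(X_pert)        # strictly positive
```

Since f is a sum of squares of the residuals, f ≥ 0 everywhere and f = 0 exactly when all measured distances are matched, which is what makes the SOS machinery applicable.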
A Distributed SDP Approach for Large-Scale Noisy Anchor-Free Graph Realization with Applications to Molecular Conformation
, 2007
Abstract

Cited by 16 (0 self)
We propose a distributed algorithm for solving Euclidean metric realization problems arising from large 3D graphs, using only noisy distance information, and without any prior knowledge of the positions of any of the vertices. In our distributed algorithm, the graph is first subdivided into smaller subgraphs using intelligent clustering methods. Then a semidefinite programming relaxation followed by a gradient search is used to localize each subgraph. Finally, a stitching algorithm is used to find affine maps between adjacent clusters, and the positions of all points in a global coordinate system are then derived. In particular, we apply our method to the problem of finding the 3D molecular configurations of proteins based on a limited number of given pairwise distances between atoms. The protein molecules, all with known molecular configurations, are taken from the Protein Data Bank. Our algorithm is able to reconstruct reliably and efficiently the configurations of large protein molecules from a limited number of pairwise distances corrupted by noise, without incorporating domain knowledge such as the minimum separation distance constraints derived from van der Waals interactions.
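The stitching step must recover the map aligning the overlapping points of adjacent clusters. A standard way to do this for rigid maps is orthogonal Procrustes on centered coordinates, sketched below with NumPy; the overlap points, rotation, and translation are invented for illustration.

```python
import numpy as np

def align(P, Q):
    """Find rotation R and translation t with Q ≈ P @ R.T + t
    (orthogonal Procrustes on centered coordinates)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # exclude an unwanted reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cQ - cP @ R.T
    return R, t

rng = np.random.default_rng(1)
P = rng.standard_normal((6, 3))        # overlap points, cluster A frame
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])   # same points, cluster B frame
R, t = align(P, Q)
err = np.abs(P @ R.T + t - Q).max()
```

Solving one such small alignment per pair of adjacent clusters is what lets the method assemble a single global coordinate system from independently localized subgraphs.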
Approximation Accuracy, Gradient Methods, and Error Bound for Structured Convex Optimization
, 2009
Abstract

Cited by 13 (1 self)
Convex optimization problems arising in applications, possibly as approximations of intractable problems, are often structured and large scale. When the data are noisy, it is of interest to bound the solution error relative to the (unknown) solution of the original noiseless problem. Related to this is an error bound for the linear convergence analysis of first-order gradient methods for solving these problems. Example applications include compressed sensing, variable selection in regression, TV-regularized image denoising, and sensor network localization.
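For the compressed-sensing and variable-selection applications mentioned, the first-order method analyzed is typically proximal gradient (ISTA) on the ℓ1-regularized least-squares problem. A minimal NumPy sketch follows; the problem sizes, random data, and regularization weight are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ista(A, b, lam, step, iters=2000):
    """Proximal gradient for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)                 # gradient of the smooth part
        z = x - step * g
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox of lam*||.||_1
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [3.0, -2.0]                  # sparse ground truth
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L for the smooth part
x = ista(A, b, lam=0.1, step=step)
```

The linear (geometric) convergence of this iteration on such structured problems is exactly what the error-bound analysis in the paper is used to establish.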
Sensor network localization by eigenvector synchronization over the Euclidean group
 In press
Abstract

Cited by 12 (7 self)
We present a new approach to localization of sensors from noisy measurements of a subset of their Euclidean distances. Our algorithm starts by finding, embedding and aligning uniquely realizable subsets of neighboring sensors called patches. In the noise-free case, each patch agrees with its global positioning up to an unknown rigid motion of translation, rotation and possibly reflection. The reflections and rotations are estimated using the recently developed eigenvector synchronization algorithm, while the translations are estimated by solving an overdetermined linear system. The algorithm is scalable as the number of nodes increases, and can be implemented in a distributed fashion. Extensive numerical experiments show that it compares favorably to other existing algorithms in terms of robustness to noise, sparse connectivity and running time. While our approach is applicable to higher dimensions, in the current paper we focus on the two-dimensional case.
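The reflection stage is an instance of synchronization over Z₂: each patch i carries an unknown sign z_i ∈ {±1}, the data are noisy relative signs z_i·z_j on edges, and the leading eigenvector of the measurement matrix recovers z up to a global flip. A NumPy sketch with invented signs and a made-up corruption rate:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
z = rng.choice([-1.0, 1.0], size=n)          # unknown patch reflections

# Noisy pairwise measurements H_ij ≈ z_i * z_j (flip ~10% of edges,
# keeping H symmetric).
H = np.outer(z, z)
noise = rng.random((n, n)) < 0.05
flip = np.where(noise | noise.T, -1.0, 1.0)
np.fill_diagonal(flip, 1.0)
H = H * flip

# The leading eigenvector of H estimates z up to a global sign.
w, V = np.linalg.eigh(H)
z_hat = np.sign(V[:, -1])
agree = max(np.mean(z_hat == z), np.mean(z_hat == -z))
```

Because the clean matrix is the rank-1 outer product z·zᵀ, a moderate fraction of flipped edges perturbs its top eigenvector only slightly, which is why the spectral estimate stays accurate under noise.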
(Robust) Edge-Based Semidefinite Programming Relaxation of Sensor Network Localization
 MATH PROGRAM
Abstract

Cited by 11 (3 self)
Recently Wang, Zheng, Boyd, and Ye (SIAM J Optim 19:655–673, 2008) proposed a further relaxation of the semidefinite programming (SDP) relaxation of the sensor network localization problem, named edge-based SDP (ESDP). In simulation, the ESDP is solved much faster by interior-point methods than the SDP relaxation, and the solutions found are comparable or better in approximation accuracy. We study some key properties of the ESDP relaxation, showing that, when distances are exact, zero individual trace is not only sufficient, but also necessary for a sensor to be correctly positioned by an interior solution. We also show via an example that, when distances are inexact, zero individual trace is insufficient for a sensor to be accurately positioned by an interior solution. We then propose a noise-aware robust version of the ESDP relaxation for which small individual trace is necessary and sufficient for a sensor to be accurately positioned by a certain analytic center solution, assuming the noise level is sufficiently small. For this analytic center solution, the position error for each sensor is shown to be on the order of the square root of its trace. Lastly, we propose a log-barrier penalty coordinate gradient descent method to find such an analytic center solution. In simulation, this method is much faster than interior-point methods for solving the ESDP, and the solutions found are comparable in approximation accuracy. Moreover, the method can distribute its computation over the sensors via local communication, making it practical for positioning and tracking in real time.
An As-Rigid-As-Possible Approach to Sensor Network Localization
Abstract

Cited by 10 (3 self)
We present a novel approach to localization of sensors in a network given a subset of noisy inter-sensor distances. The algorithm is based on “stitching” together local structures by solving an optimization problem requiring the structures to fit together in an “As-Rigid-As-Possible” manner, hence the name ARAP. The local structures consist of reference “patches” and reference triangles, both obtained from inter-sensor distances. We elaborate on the relationship between the ARAP algorithm and other state-of-the-art algorithms, and provide experimental results demonstrating that ARAP is significantly less sensitive to sparse connectivity and measurement noise. We also show how ARAP may be distributed.