Exact Matrix Completion via Convex Optimization
, 2008
Abstract

Cited by 873 (26 self)
We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m ≥ C n^1.2 r log n for some positive numerical constant C, then with very high probability, most n × n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.
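The convex program described in this abstract (minimize the nuclear norm subject to agreement with the observed entries) can be illustrated numerically. The sketch below uses an iterative hard-thresholding heuristic (alternate between enforcing the observed entries and projecting onto low-rank matrices via a truncated SVD) as a stand-in for the exact convex program; the matrix size, rank, and 60% sampling rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

def complete_matrix(M_obs, mask, rank, iters=500):
    """Fill in unobserved entries of a low-rank matrix.

    Heuristic sketch: alternately enforce the observed entries and
    project onto rank-`rank` matrices with a truncated SVD.  This is
    a stand-in for the nuclear-norm program in the abstract, not the
    authors' algorithm.
    """
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        # Enforce agreement with the observed entries.
        X = np.where(mask, M_obs, X)
        # Project onto the set of matrices of rank at most `rank`.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return X

rng = np.random.default_rng(0)
n, r = 20, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 ground truth
mask = rng.random((n, n)) < 0.6            # observe ~60% of entries uniformly
X_hat = complete_matrix(np.where(mask, M, 0.0), mask, rank=r)
rel_err = np.linalg.norm(X_hat - M) / np.linalg.norm(M)
```

With enough uniformly sampled entries the unobserved entries are recovered to high accuracy, which is the qualitative behavior the theorem above quantifies.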
A simpler approach to matrix completion
 the Journal of Machine Learning Research
Abstract

Cited by 158 (6 self)
This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low-rank matrix. These results improve on prior work by Candès and Recht [4], Candès and Tao [7], and Keshavan, Montanari, and Oh [18]. The reconstruction is accomplished by minimizing the nuclear norm, or sum of the singular values, of the hidden matrix subject to agreement with the provided entries. If the underlying matrix satisfies a certain incoherence condition, then the number of entries required is equal to a quadratic logarithmic factor times the number of parameters in the singular value decomposition. The proof of this assertion is short, self-contained, and uses very elementary analysis. The novel techniques herein are based on recent work in quantum information theory.
Further relaxation of the semidefinite programming approach to sensor network localization
 SIAM Journal on Optimization
, 2008
Abstract

Cited by 41 (3 self)
Abstract. Recently, a semidefinite programming (SDP) relaxation approach has been proposed to solve the sensor network localization problem. Although it achieves high accuracy in estimating the sensor locations, the speed of the SDP approach is not satisfactory for practical applications. In this paper we propose methods to further relax the SDP relaxation, more precisely, to relax the single semidefinite matrix cone into a set of small-size semidefinite submatrix cones, which we call a sub-SDP (SSDP) approach. We present two such relaxations. Although they are weaker than the original SDP relaxation, they retain the key theoretical property, and numerical experiments show that they are both efficient and accurate. The speed of the SSDP is even faster than that of other approaches based on weaker relaxations. The SSDP approach may also pave the way to efficiently solving general SDP problems without sacrificing the solution quality.
Approximation Accuracy, Gradient Methods, and Error Bound for Structured Convex Optimization
, 2009
Abstract

Cited by 38 (1 self)
Convex optimization problems arising in applications, possibly as approximations of intractable problems, are often structured and large scale. When the data are noisy, it is of interest to bound the solution error relative to the (unknown) solution of the original noiseless problem. Related to this is an error bound for the linear convergence analysis of first-order gradient methods for solving these problems. Example applications include compressed sensing, variable selection in regression, TV-regularized image denoising, and sensor network localization.
Resilient localization for sensor networks in outdoor environments
 In International Conference on Distributed Computing Systems. IEEE Computer Society
, 2005
Abstract

Cited by 38 (1 self)
The process of determining the physical locations of nodes in a wireless sensor network is known as localization. Self-localization is critical for large-scale sensor networks, because manual or assisted localization is often impractical due to time requirements, economic constraints, or inherent limitations of the deployment scenarios. We propose scalable solutions for reliably localizing wireless sensor networks in environments conducive to several types of ranging errors. We follow a hybrid hardware-software approach for acoustic ranging or radio interferometry to acquire inter-node distance measurements, and a resilient self-localization algorithm to compute the node location estimates. The acoustic ranging method improves on previous work, extending the practical measurement range up to 35 m in grassy outdoor environments, achieving a distance-invariant median measurement error of about 1% (33 cm). The localization algorithm is based on Least Squares Scaling with soft constraints. Empirical evaluation using ranging results obtained from sensor network field experiments and simulations confirms that our approach is more resilient than multidimensional scaling (MDS) algorithms against large-magnitude ranging errors and sparse range measurements: conditions that are common in large-scale outdoor sensor networks.
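The core of a Least Squares Scaling formulation like the one this abstract mentions can be sketched briefly: minimize the "stress" (the squared mismatch between inter-node distances implied by the estimated positions and the measured ranges) by gradient descent. This is a minimal illustration only; the paper's algorithm additionally uses soft constraints and outlier resilience, and the network size, geometry, and noise level below are made-up assumptions.

```python
import numpy as np

def lss_localize(D, X0, steps=2000, lr=0.01):
    """Least-Squares-Scaling style localization (illustrative sketch).

    Minimizes the stress  sum_{i<j} (||x_i - x_j|| - D[i, j])**2
    by plain gradient descent from an initial position guess X0.
    """
    X = X0.copy()
    n = len(X)
    for _ in range(steps):
        grad = np.zeros_like(X)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                diff = X[i] - X[j]
                dist = np.linalg.norm(diff)
                # Gradient of (dist - D[i, j])**2 with respect to x_i.
                grad[i] += 2.0 * (dist - D[i, j]) * diff / dist
        X -= lr * grad
    return X

rng = np.random.default_rng(1)
P = rng.random((8, 2)) * 10.0                         # true node positions
D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)  # full pairwise ranges
X = lss_localize(D, P + rng.normal(scale=0.5, size=P.shape))
D_hat = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
stress = np.sum((D_hat - D) ** 2)
```

Only inter-node distances are recovered; without anchors the solution is determined up to rotation, reflection, and translation, which is why the test checks distances rather than positions.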
Second-order cone programming relaxation of sensor network localization
 SIAM J. Optimization
, 2007
Abstract

Cited by 36 (2 self)
Abstract. The sensor network localization problem has been much studied. Recently Biswas and Ye proposed a semidefinite programming (SDP) relaxation of this problem which has various nice properties and for which a number of solution methods have been proposed. Here, we study a second-order cone programming (SOCP) relaxation of this problem, motivated by its simpler structure and its potential to be solved faster than SDP. We show that the SOCP relaxation, though weaker than the SDP relaxation, has nice properties that make it useful as a problem preprocessor. In particular, sensors that are uniquely positioned among interior solutions of the SOCP relaxation are accurate up to the square root of the distance error. Thus, these sensors, which are easily identified, are accurately positioned. In our numerical simulation, the interior solution found can accurately position up to 80–90% of the sensors. We also propose a smoothing coordinate gradient descent method for finding an interior solution that is faster than an interior-point method. Key words. sensor network localization, semidefinite program, second-order cone program, approximation algorithm, error bound
Exploiting sparsity in SDP relaxation for sensor network localization
 SIAM J. Optim
, 2009
Abstract

Cited by 34 (9 self)
Abstract. A sensor network localization problem can be formulated as a quadratic optimization problem (QOP). For quadratic optimization problems, the semidefinite programming (SDP) relaxation by Lasserre with relaxation order 1 for general polynomial optimization problems (POPs) is known to be equivalent to the sparse SDP relaxation by Waki et al. with relaxation order 1, except for the size and sparsity of the resulting SDP relaxation problems. We show that the sparse SDP relaxation applied to the QOP is at least as strong as the Biswas-Ye SDP relaxation for the sensor network localization problem. A sparse variant of the Biswas-Ye SDP relaxation, which is equivalent to the original Biswas-Ye SDP relaxation, is also derived. Numerical results are compared with the Biswas-Ye SDP relaxation and the edge-based SDP relaxation by Wang et al. We show that the proposed sparse SDP relaxation is faster than the Biswas-Ye SDP relaxation. In fact, the computational efficiency in solving the resulting SDP problems increases as the number of anchors and/or the radio range grows. The proposed sparse SDP relaxation also provides more accurate solutions than the edge-based SDP relaxation when exact distances are given between sensors and anchors and there are only a small number of anchors. Key words. Sensor network localization problem, polynomial optimization problem, semidefinite relaxation, sparsity
Large margin hidden Markov models for speech recognition
, 2005
Abstract

Cited by 33 (4 self)
In this work, motivated by large margin classifiers in machine learning, we propose a novel method to estimate continuous density hidden Markov models (CDHMMs) for speech recognition according to the principle of maximizing the minimum multiclass separation margin. The approach is named the large margin HMM. Firstly, we show that this type of large margin HMM estimation problem can be formulated as a constrained minimax optimization problem. Secondly, by imposing different constraints on the minimax problem, we propose three solutions to the large margin HMM estimation problem, namely the iterative localized optimization method, the constrained joint optimization method, and the semidefinite programming (SDP) method. These new training methods are evaluated on the isolated E-set recognition task using the ISOLET database and the TIDIGITS connected digit string recognition task. Experimental results clearly show that the large margin HMMs consistently outperform the conventional HMM training methods. It has been consistently observed that the large margin training method yields significant recognition error rate reduction even on top of some popular discriminative training methods.
SpaseLoc: An adaptive subproblem algorithm for scalable wireless sensor network localization
 SIAM J. on Optimization, submitted
, 2004
Abstract

Cited by 32 (4 self)
Abstract. An adaptive rule-based algorithm, SpaseLoc, is described to solve localization problems for ad hoc wireless sensor networks. A large problem is solved as a sequence of very small subproblems, each of which is solved by semidefinite programming relaxation of a geometric optimization model. The subproblems are generated according to a set of sensor/anchor selection rules. Computational results compared with existing approaches show that the SpaseLoc algorithm scales well and provides excellent localization accuracy.