
Noisy matrix completion under sparse factor models. arXiv preprint arXiv:1411.0282v1 (2014)

by A Soni, S Jain, J Haupt, S Gonella

Results 1 - 2 of 2

Probabilistic Low-Rank Matrix Completion from Quantized Measurements

by Sonia A Bhaskar, 2016
"... Abstract We consider the recovery of a low rank real-valued matrix M given a subset of noisy discrete (or quantized) measurements. Such problems arise in several applications such as collaborative filtering, learning and content analytics, and sensor network localization. We consider constrained ma ..."
Abstract - Add to MetaCart
Abstract We consider the recovery of a low-rank real-valued matrix M given a subset of noisy discrete (or quantized) measurements. Such problems arise in several applications such as collaborative filtering, learning and content analytics, and sensor network localization. We consider constrained maximum likelihood estimation of M, under a constraint on the entrywise infinity-norm of M and an exact rank constraint. We provide upper bounds on the Frobenius norm of the matrix estimation error under this model. Previous theoretical investigations have focused on binary (1-bit) quantizers and have been based on convex relaxation of the rank. Compared to the existing binary results, our performance upper bound has a faster convergence rate with matrix dimensions when the fraction of revealed observations is fixed. We also propose a globally convergent optimization algorithm based on a low-rank factorization of M and validate the method on synthetic and real data, with improved performance over previous methods.
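The abstract describes maximum likelihood estimation over an explicit low-rank factorization with quantized observations. As a rough illustration only, the sketch below recovers a low-rank matrix from 1-bit (sign) observations under a logistic link via joint gradient descent on the two factors. The logistic link, function names, and parameter values are assumptions made here for concreteness, not Bhaskar's exact algorithm; in particular the paper's entrywise infinity-norm constraint is omitted.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def one_bit_complete(Y, mask, rank, steps=500, lr=0.5, seed=0):
    """Sketch: ML estimate of a rank-`rank` matrix from signs Y in {-1,+1},
    observed where mask == 1, under an assumed logistic link."""
    rng = np.random.default_rng(seed)
    n1, n2 = Y.shape
    U = 0.1 * rng.standard_normal((n1, rank))
    V = 0.1 * rng.standard_normal((n2, rank))
    for _ in range(steps):
        M = U @ V.T
        # Gradient (w.r.t. M) of the negative log-likelihood
        # -sum_{observed} log sigmoid(Y_ij * M_ij), averaged over observations.
        G = -(mask * Y * sigmoid(-Y * M)) / mask.sum()
        U, V = U - lr * (G @ V), V - lr * (G.T @ U)
    return U @ V.T

# Synthetic check: rank-2 ground truth, half the entries observed through the link.
rng = np.random.default_rng(1)
M_true = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 30))
mask = (rng.random(M_true.shape) < 0.5).astype(float)
Y = np.where(rng.random(M_true.shape) < sigmoid(M_true), 1.0, -1.0)
M_hat = one_bit_complete(Y, mask, rank=2)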

Citation Context

...osed a convex program using maximum likelihood estimation and a nuclear (or trace) norm to promote a low-rank solution. Both works present theoretical recovery guarantees for the estimate, with the latter improving the convergence rate of the upper bound on the error. In Cai and Zhou (2013), a constrained maximum likelihood estimator was also considered, but with the max-norm in place of the nuclear norm. Upper and lower bounds on the error norm of the solution to the resulting convex program were also given, of the same order as those in Davenport et al. (2014). The binary model is also investigated in Soni et al. (2014) for sparse factor models using maximum likelihood estimation with an exact low-rank constraint; their results also apply to non-sparse models. 2 Probabilistic Low-Rank Matrix Completion The theoretical recovery guarantee for the estimate given in Soni et al. (2014) is in the form of an upper bound on the expectation of the error norm, in contrast to Davenport et al. (2014), Lafond et al. (2014), and Cai and Zhou (2013), where (high-probability) upper bounds on the error norm itself are given. The bounds presented in this paper are also on the error norm, not on its expectation. The extensi...

Error Bounds for Maximum Likelihood Matrix Completion Under Sparse Factor Models

by Akshay Soni, Swayambhoo Jain, Jarvis Haupt, Stefano Gonella
"... Abstract—This paper examines a general class of matrix completion tasks where entry wise observations of the matrix are subject to random noise or corruption. Our particular focus here is on settings where the matrix to be estimated follows a sparse factor model, in the sense that it may be expresse ..."
Abstract - Add to MetaCart
Abstract—This paper examines a general class of matrix completion tasks where entrywise observations of the matrix are subject to random noise or corruption. Our particular focus here is on settings where the matrix to be estimated follows a sparse factor model, in the sense that it may be expressed as the product of two matrices, one of which is sparse. We analyze the performance of a sparsity-penalized maximum likelihood approach to such problems to provide a general-purpose estimation result applicable to any of a number of noise/corruption models, and describe its implications in two stylized scenarios – one characterized by additive Gaussian noise, and the other by highly-quantized one-bit observations. We also provide some supporting empirical evidence to validate our theoretical claims in the Gaussian setting. Index Terms—Complexity regularization, matrix completion, maximum likelihood, sparse estimation
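As one concrete reading of the sparse factor model above, the sketch below estimates X = D A from noisy, partially observed entries in the Gaussian setting, where penalized maximum likelihood reduces (up to constants) to penalized least squares. An l1 penalty with alternating gradient/proximal-gradient updates is used here as a tractable stand-in for the paper's complexity (l0-type) penalty; all names and parameter values are illustrative assumptions, not the authors' implementation.

import numpy as np

def soft_threshold(Z, t):
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def sparse_factor_complete(Y, mask, r, lam=0.1, steps=300, lr=0.01, seed=0):
    """Sketch: alternating minimization of
    0.5 * ||mask * (D @ A - Y)||_F^2 + lam * ||A||_1
    with a gradient step on the dense factor D and an ISTA step on sparse A."""
    rng = np.random.default_rng(seed)
    n1, n2 = Y.shape
    D = rng.standard_normal((n1, r))
    A = np.zeros((r, n2))
    for _ in range(steps):
        R = mask * (D @ A - Y)                  # residual on observed entries
        D -= lr * (R @ A.T)                     # gradient step on D
        R = mask * (D @ A - Y)                  # refresh residual after D update
        A = soft_threshold(A - lr * (D.T @ R), lr * lam)  # proximal step on A
    return D @ A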

Citation Context

...idates our theoretical guarantees for the Gaussian noise setting. We provide a few brief conclusions in Section IV. We state our results here without proof; the full details will be made available in [34]. D. Some Information Theoretic Preliminaries To set the stage for the statement of our main result, we remind the reader of a few key concepts. First, when p(Y ) and q(Y ) denote the pdf (or pmf) of ...
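The snippet breaks off while introducing a quantity defined from the two distributions p(Y) and q(Y). Given the "Information Theoretic Preliminaries" heading, the concept being introduced is presumably the standard Kullback–Leibler divergence, reproduced here in LaTeX for completeness (this is the textbook definition, not a quote from the paper):

D(p \,\|\, q) \;=\; \int p(y) \,\log \frac{p(y)}{q(y)} \, dy
\qquad \text{(with the integral replaced by a sum over } y \text{ for pmfs).}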
