Results 1-10 of 58
A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers
Estimation of (near) low-rank matrices with noise and high-dimensional scaling
Abstract

Cited by 36 (11 self)
We study an instance of high-dimensional statistical inference in which the goal is to use N noisy observations to estimate a matrix Θ* ∈ R^{k×p} that is assumed to be either exactly low rank, or “near” low-rank, meaning that it can be well-approximated by a matrix with low rank. We consider an M-estimator based on regularization by the trace or nuclear norm over matrices, and analyze its performance under high-dimensional scaling. We provide non-asymptotic bounds on the Frobenius norm error that hold for a general class of noisy observation models, and apply to both exactly low-rank and approximately low-rank matrices. We then illustrate their consequences for a number of specific learning models, including low-rank multivariate or multi-task regression, system identification in vector autoregressive processes, and recovery of low-rank matrices from random projections. Simulations show excellent agreement with the high-dimensional scaling of the error predicted by our theory.
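The nuclear-norm-regularized M-estimator described in this abstract is typically computed by proximal methods, whose key step is singular-value soft-thresholding. A minimal sketch (the simple denoising loss, step size, and regularization level below are illustrative assumptions, not the paper's general observation model):

```python
import numpy as np

def nuclear_prox(Theta, lam):
    """Proximal operator of lam * ||.||_* : soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(Theta, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

def nuclear_norm_estimate(Y, lam, steps=200, lr=0.5):
    """Minimize 0.5*||Theta - Y||_F^2 + lam*||Theta||_* by proximal gradient.
    For this denoising loss the minimizer is a single soft-thresholded SVD of Y;
    the loop illustrates the generic proximal-gradient template."""
    Theta = np.zeros_like(Y)
    for _ in range(steps):
        grad = Theta - Y                       # gradient of the quadratic loss
        Theta = nuclear_prox(Theta - lr * grad, lr * lam)
    return Theta
```

On a noisy low-rank input the estimate recovers the low rank because singular values below the threshold are zeroed out.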
Restricted strong convexity and (weighted) matrix completion: Optimal bounds with noise
, 2010
Abstract

Cited by 34 (6 self)
We consider the matrix completion problem under a form of row/column weighted entrywise sampling, including the case of uniform entrywise sampling as a special case. We analyze the associated random observation operator, and prove that with high probability, it satisfies a form of restricted strong convexity with respect to the weighted Frobenius norm. Using this property, we obtain as corollaries a number of error bounds on matrix completion in the weighted Frobenius norm under noisy sampling, for both exact and near low-rank matrices. Our results are based on measures of the “spikiness” and “low-rankness” of matrices that are less restrictive than the incoherence conditions imposed in previous work. Our technique involves an M-estimator that includes controls on both the rank and spikiness of the solution, and we establish non-asymptotic error bounds in the weighted Frobenius norm for recovering matrices lying within ℓq-“balls” of bounded spikiness. Using information-theoretic methods, we show that no algorithm can achieve better estimates (up to a logarithmic factor) over these same sets, showing that our conditions on matrices and the associated rates are essentially optimal.
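The “spikiness” measure this abstract refers to is, up to normalization, the ratio of a matrix's largest entry to its average entry size. A small helper (the normalization below is one common convention; check it against the paper's definition):

```python
import numpy as np

def spikiness(Theta):
    """alpha_sp(Theta) = sqrt(n1*n2) * ||Theta||_inf / ||Theta||_F.
    Equals 1 for a perfectly flat matrix and sqrt(n1*n2) for a single spike,
    so small values mean the mass is spread out rather than concentrated."""
    n1, n2 = Theta.shape
    return np.sqrt(n1 * n2) * np.max(np.abs(Theta)) / np.linalg.norm(Theta)
```

Bounded spikiness rules out matrices whose mass hides in a few entries, which is exactly what makes entrywise sampling informative.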
SpaRCS: Recovering low-rank and sparse matrices from compressive measurements
, 2011
Abstract

Cited by 13 (1 self)
We consider the problem of recovering a matrix M that is the sum of a low-rank matrix L and a sparse matrix S from a small set of linear measurements of the form y = A(M) = A(L + S). This model subsumes three important classes of signal recovery problems: compressive sensing, affine rank minimization, and robust principal component analysis. We propose a natural optimization problem for signal recovery under this model and develop a new greedy algorithm called SpaRCS to solve it. Empirically, SpaRCS inherits a number of desirable properties from the state-of-the-art CoSaMP and ADMiRA algorithms, including exponential convergence and efficient implementation. Simulation results with video compressive sensing, hyperspectral imaging, and robust matrix completion data sets demonstrate both the accuracy and efficacy of the algorithm.
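The L + S model itself can be illustrated with a toy alternating-projection loop: alternately re-fit the low-rank part by truncated SVD and the sparse part by entrywise hard thresholding. This is only a sketch of the decomposition, not SpaRCS itself, which is a greedy algorithm operating on compressive measurements y = A(L + S) rather than on a fully observed M:

```python
import numpy as np

def split_lowrank_sparse(M, rank, card, iters=50):
    """Toy alternating projections for M = L + S:
    L <- best rank-`rank` approximation of M - S (truncated SVD),
    S <- the `card` largest-magnitude entries of M - L."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
        resid = (M - L).ravel()
        keep = np.argsort(np.abs(resid))[-card:]   # largest residual entries
        S = np.zeros_like(resid)
        S[keep] = resid[keep]
        S = S.reshape(M.shape)
    return L, S
```

When the sparse corruptions are large relative to the low-rank entries, the loop separates the two components cleanly.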
Learning with the Weighted Trace-norm under Arbitrary Sampling Distributions
Abstract

Cited by 9 (4 self)
We provide rigorous guarantees on learning with the weighted trace-norm under arbitrary sampling distributions. We show that the standard weighted trace-norm might fail when the sampling distribution is not a product distribution (i.e. when row and column indices are not selected independently), present a corrected variant for which we establish strong learning guarantees, and demonstrate that it works better in practice. We provide guarantees when weighting by either the true or empirical sampling distribution, and suggest that even if the true distribution is known (or is uniform), weighting by the empirical distribution may be beneficial.
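For reference, the weighting enters through a rescaled nuclear norm. A sketch of the weighted trace-norm as commonly defined, with p and q the row- and column-marginal sampling probabilities (the corrected variant the paper proposes differs in how the weights are formed, not in this basic shape):

```python
import numpy as np

def weighted_trace_norm(X, p, q):
    """||diag(p)^(1/2) @ X @ diag(q)^(1/2)||_* for row/column marginals p, q.
    Under uniform marginals this reduces to the trace-norm scaled by
    1/sqrt(n*m)."""
    W = np.sqrt(p)[:, None] * X * np.sqrt(q)[None, :]
    return np.linalg.svd(W, compute_uv=False).sum()
```

Rows and columns sampled more often are weighted up, so frequently observed parts of the matrix dominate the complexity measure.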
Concentration-based guarantees for low-rank matrix reconstruction
In 24th Annual Conference on Learning Theory (COLT)
, 2011
Abstract

Cited by 7 (2 self)
We consider the problem of approximately reconstructing a partially-observed, approximately low-rank matrix. This problem has received much attention lately, mostly using the trace-norm as a surrogate for the rank. Here we study low-rank matrix reconstruction using both the trace-norm and the less-studied max-norm, and present reconstruction guarantees based on existing analysis of the Rademacher complexity of the unit balls of these norms. We show how these are superior in several ways to recently published guarantees based on specialized analysis.
Low-rank matrix completion by Riemannian optimization
ANCHP-MATHICSE, Mathematics Section, École Polytechnique Fédérale de Lausanne
Abstract

Cited by 7 (0 self)
The matrix completion problem consists of finding or approximating a low-rank matrix based on a few samples of this matrix. We propose a novel algorithm for matrix completion that minimizes the least-squares distance on the sampling set over the Riemannian manifold of fixed-rank matrices. The algorithm is an adaptation of classical nonlinear conjugate gradients, developed within the framework of retraction-based optimization on manifolds. We describe all the objects from differential geometry necessary to perform optimization over this low-rank matrix manifold, seen as a submanifold embedded in the space of matrices. In particular, we describe how metric projection can be used as a retraction and how vector transport lets us obtain the conjugate search directions. Additionally, we derive second-order models that can be used in Newton’s method, based on approximating the exponential map on this manifold to second order. Finally, we prove convergence of a regularized version of our algorithm under the assumption that the restricted isometry property holds for incoherent matrices throughout the iterations. The numerical experiments indicate that our approach scales very well for large-scale problems and compares favorably with the state-of-the-art, while outperforming most existing solvers.
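The metric-projection retraction mentioned in this abstract (truncated SVD back onto the fixed-rank manifold) is easy to illustrate with a plain projected-gradient loop, which is a much simpler relative of the paper's Riemannian conjugate-gradient method, not the method itself:

```python
import numpy as np

def complete_fixed_rank(M, mask, rank, iters=500, lr=1.0):
    """Gradient step on the sampled squared error, then retraction onto the
    fixed-rank manifold by truncated SVD (metric projection). A plain
    projected-gradient sketch of rank-constrained matrix completion."""
    X = np.zeros_like(M)
    for _ in range(iters):
        G = mask * (X - M)                     # gradient on observed entries
        Y = X - lr * G
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]   # retraction
    return X
```

With `lr=1` each step simply overwrites the observed entries with their sampled values and re-projects, a classical hard-imputation scheme.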
LOW-RANK MATRIX RECOVERY VIA ITERATIVELY REWEIGHTED LEAST SQUARES MINIMIZATION
Abstract

Cited by 6 (1 self)
We present and analyze an efficient implementation of an iteratively reweighted least squares algorithm for recovering a matrix from a small number of linear measurements. The algorithm is designed for the simultaneous promotion of both a minimal nuclear norm and an approximately low-rank solution. Under the assumption that the linear measurements fulfill a suitable generalization of the Null Space Property known in the context of compressed sensing, the algorithm is guaranteed to recover iteratively any matrix with an error of the order of the best rank-k approximation. In certain relevant cases, for instance for the matrix completion problem, our version of this algorithm can take advantage of the Woodbury matrix identity, which allows us to expedite the solution of the least squares problems required at each iteration. We present numerical experiments which confirm the robustness of the algorithm for the solution of matrix completion problems, and demonstrate its competitiveness with respect to other techniques proposed recently in the literature. AMS subject classification: 65J22, 65K10, 52A41, 49M30. Key words: low-rank matrix recovery, iteratively reweighted least squares, matrix completion.
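The reweighting idea can be sketched for the matrix completion case: each iteration forms a weight matrix from the current iterate and re-solves the unobserved entries of every column by a small weighted least-squares system. This is a compact sketch under a simple hand-picked epsilon schedule; the paper's version uses a more careful epsilon update and the Woodbury identity to speed up these solves:

```python
import numpy as np

def irls_matrix_completion(M, mask, iters=30, eps0=1.0):
    """IRLS sketch for nuclear-norm-flavoured completion: set
    W = (X X^T + eps*I)^(-1/2), then for each column minimize x^T W x
    with the observed entries held fixed (a linear solve per column)."""
    n, m = M.shape
    X = mask * M
    eps = eps0
    for _ in range(iters):
        w, Q = np.linalg.eigh(X @ X.T + eps * np.eye(n))
        W = (Q * (w ** -0.5)) @ Q.T            # (X X^T + eps I)^(-1/2)
        for j in range(m):
            obs = mask[:, j].astype(bool)
            free = ~obs
            if free.any() and obs.any():
                # stationarity of x^T W x over the free block:
                # W_ff x_f + W_fo x_o = 0
                X[free, j] = np.linalg.solve(
                    W[np.ix_(free, free)],
                    -W[np.ix_(free, obs)] @ M[obs, j])
            X[obs, j] = M[obs, j]
        eps = max(eps / 10, 1e-12)
    return X
```

Directions with little energy in the current iterate get heavy weight, so the solves push the unobserved entries toward the dominant low-rank subspace.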
Transfer Learning to Predict Missing Ratings via Heterogeneous User Feedbacks
Abstract

Cited by 3 (3 self)
Data sparsity due to missing ratings is a major challenge for collaborative filtering (CF) techniques in recommender systems. This is especially true for CF domains where the ratings are expressed numerically. We observe that, while we may lack the information in numerical ratings, we may have more data in the form of binary ratings. This is especially true when users can easily express themselves with their likes and dislikes for certain items. In this paper, we explore how to use the binary preference data expressed in the form of like/dislike to help reduce the impact of data sparsity of more expressive numerical ratings. We do this by transferring the rating knowledge from some auxiliary data source in binary form (that is, likes or dislikes) to a target numerical rating matrix. Our solution is to model both numerical ratings and like/dislike in a principled way, using a novel framework of Transfer by Collective Factorization (TCF). In particular, we construct the shared latent space collectively and learn the data-dependent effect separately. A major advantage of the TCF approach over previous collective matrix factorization (or bi-factorization) methods is that we are able to capture the data-dependent effect when sharing the data-independent knowledge, so as to increase the overall quality of knowledge transfer. Experimental results demonstrate the effectiveness of TCF at various sparsity levels as compared to several state-of-the-art methods.
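The underlying idea of sharing latent structure across feedback types can be illustrated with a plain collective factorization: a shared user factor U with per-matrix item factors, trained by gradient descent on both matrices jointly. This is only a sketch of factor sharing; TCF itself additionally separates a data-dependent core matrix per feedback type, which this toy version omits:

```python
import numpy as np

def collective_mf(R, mask_R, B, mask_B, k=2, iters=4000, lr=0.01, seed=0):
    """Jointly factor a numerical-rating matrix R ~ U @ VR.T and an
    auxiliary-feedback matrix B ~ U @ VB.T with a shared user factor U.
    Returns the completed prediction for R."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((R.shape[0], k))
    VR = 0.1 * rng.standard_normal((R.shape[1], k))
    VB = 0.1 * rng.standard_normal((B.shape[1], k))
    for _ in range(iters):
        ER = mask_R * (U @ VR.T - R)       # residual on observed ratings
        EB = mask_B * (U @ VB.T - B)       # residual on auxiliary feedback
        gU, gVR, gVB = ER @ VR + EB @ VB, ER.T @ U, EB.T @ U
        U -= lr * gU
        VR -= lr * gVR
        VB -= lr * gVB
    return U @ VR.T
```

Because the densely observed auxiliary matrix pins down the shared user factor, the sparse rating matrix can be completed from far fewer observed entries.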
Linear regression under fixed-rank constraints: a Riemannian approach
In 28th International Conference on Machine Learning (ICML)
, 2011
Abstract

Cited by 3 (0 self)
In this paper, we tackle the problem of learning a linear regression model whose parameter is a fixed-rank matrix. We study the Riemannian manifold geometry of the set of fixed-rank matrices and develop efficient line-search algorithms. The proposed algorithms have many applications, scale to high-dimensional problems, enjoy local convergence properties, and confer a geometric basis to recent contributions on learning fixed-rank matrices. Numerical experiments on benchmarks suggest that the proposed algorithms compete with the state-of-the-art, and that manifold optimization offers a versatile framework for the design of rank-constrained machine learning algorithms.
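The rank-constrained regression problem itself can be sketched with factored Euclidean gradient descent on W = U V^T, a simpler baseline than the paper's Riemannian line-search methods (the step size and iteration count below are illustrative assumptions):

```python
import numpy as np

def fixed_rank_regression(X, Y, r, iters=5000, lr=0.005, seed=0):
    """Fit Y ~ X @ (U @ V.T) with rank(U V^T) <= r by gradient descent on
    the factors. A factored, Euclidean sketch; the paper instead optimizes
    over the Riemannian manifold of fixed-rank matrices."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((X.shape[1], r))
    V = 0.1 * rng.standard_normal((Y.shape[1], r))
    for _ in range(iters):
        E = X @ U @ V.T - Y                # residual
        gU = X.T @ E @ V
        gV = E.T @ X @ U
        U -= lr * gU
        V -= lr * gV
    return U @ V.T
```

The factorization enforces the rank constraint by construction, at the cost of a non-convex objective; the small random initialization breaks the symmetry of the saddle at zero.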