Results 1–10 of 12,262
Low-Rank Matrix Approximation with Stability
"... Abstract: Low-rank matrix approximation has been widely adopted in machine learning applications with sparse data, such as recommender systems. However, the sparsity of the data, incomplete and noisy, introduces challenges to algorithm stability: small changes in the training data may significan ..."
Local Low-Rank Matrix Approximation
"... Matrix approximation is a common tool in recommendation systems, text mining, and computer vision. A prevalent assumption in constructing matrix approximations is that the partially observed matrix is of low rank. We propose a new matrix approximation model where we assume instead that the matrix is ..."
Cited by 5 (1 self)
Convergence of Gradient Descent for Low-Rank Matrix Approximation
"... Abstract: This paper provides a proof of global convergence of gradient search for low-rank matrix approximation. Such approximations have recently been of interest for large-scale problems, as well as for dictionary learning for sparse signal representations and matrix completion. The proof is base ..."
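The kind of gradient search this entry studies can be illustrated on the factored objective ‖A − UVᵀ‖²_F. The sketch below is a generic illustration, not the paper's algorithm; the step size, iteration count, and random initialization are arbitrary choices made here for the example.

```python
import numpy as np

def gd_low_rank(A, k, lr=0.01, steps=2000, seed=0):
    """Gradient descent on the factorization A ~ U @ V.T,
    minimizing the squared Frobenius error ||A - U V^T||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    U = 0.1 * rng.standard_normal((m, k))
    V = 0.1 * rng.standard_normal((n, k))
    for _ in range(steps):
        R = U @ V.T - A            # residual
        gU, gV = R @ V, R.T @ U    # gradients (constant factor 2 folded into lr)
        U -= lr * gU
        V -= lr * gV
    return U, V

# Usage: recover an exactly rank-2 matrix to small relative error.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
U, V = gd_low_rank(A, k=2)
err = np.linalg.norm(A - U @ V.T) / np.linalg.norm(A)
```

Both factors are updated from the same residual, so this is plain (not alternating) gradient descent on the joint objective.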
Adaptive Sampling and Fast Low-Rank Matrix Approximation
Electronic Colloquium on Computational Complexity, Report No. 42, 2006
"... We prove that any real matrix A contains a subset of at most 4k/ε + 2k log(k+1) rows whose span “contains” a matrix of rank at most k with error only (1 + ε) times the error of the best rank-k approximation of A. This leads to an algorithm to find such an approximation with complexity essentially O(Mk/ε), where M is the number of nonzero entries of A. The algorithm maintains sparsity, and in the streaming model, it can be implemented using only 2(k + 1)(log(k + 1) + 1) passes over the input matrix. Previous algorithms for low-rank approximation use only one or two passes but obtain an additive ..."
Cited by 57 (3 self)
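Adaptive row sampling of the flavor this entry describes can be sketched as follows: each row is drawn with probability proportional to its squared residual norm against the span of the rows already chosen. This is a generic illustration of the idea, not the paper's exact procedure or its bounds; the sample size and test matrix are arbitrary.

```python
import numpy as np

def adaptive_row_sample(A, t, seed=0):
    """Draw t row indices, each with probability proportional to the squared
    norm of the row's residual against the span of rows picked so far."""
    rng = np.random.default_rng(seed)
    R = np.array(A, dtype=float)       # residual copy of A
    chosen = []
    for _ in range(t):
        p = (R * R).sum(axis=1)
        p = p / p.sum()
        i = rng.choice(len(p), p=p)
        chosen.append(i)
        v = R[i] / np.linalg.norm(R[i])
        R = R - np.outer(R @ v, v)     # project the picked direction out of all rows
    return chosen

# Usage: noisy rank-4 matrix; project onto the span of 6 sampled rows.
rng = np.random.default_rng(2)
A = rng.standard_normal((100, 4)) @ rng.standard_normal((4, 30))
A += 0.01 * rng.standard_normal(A.shape)
rows = adaptive_row_sample(A, t=6)
Q, _ = np.linalg.qr(A[rows].T)         # orthonormal basis of the sampled row span
err = np.linalg.norm(A - (A @ Q) @ Q.T) / np.linalg.norm(A)
```

Because already-covered rows have (near-)zero residual, the sampling automatically concentrates on directions the current row subset misses.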
A Schur Method for Low-Rank Matrix Approximation
, 1996
"... This paper describes a much simpler generalized Schur-type algorithm to compute similar low-rank approximants. For a given matrix H which has d singular values larger than e, we find all rank-d approximants H such that ..."
Cited by 23 (10 self)
Relative Errors for Deterministic Low-Rank Matrix Approximations
In SODA, 2014
"... Abstract: We consider processing an n × d matrix A in a stream with row-wise updates according to a recent algorithm called Frequent Directions (Liberty, KDD 2013). This algorithm maintains an ℓ × d matrix Q deterministically, processing each row in O(d²) time; the processing time can be decreased t ..."
Cited by 14 (2 self)
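The shrink-and-insert idea behind Frequent Directions can be sketched as follows. This is a simplified buffered variant for illustration (it shrinks by the smallest squared singular value, and only when the sketch fills), not necessarily the exact variant the paper analyzes.

```python
import numpy as np

def frequent_directions(A, ell):
    """Stream the rows of A through an ell x d sketch Q: when Q fills up,
    shrink its squared singular values by the smallest one, freeing a row."""
    n, d = A.shape
    Q = np.zeros((ell, d))
    for row in A:
        zero = np.where(~Q.any(axis=1))[0]   # indices of empty sketch rows
        if len(zero) == 0:
            U, s, Vt = np.linalg.svd(Q, full_matrices=False)
            s2 = np.maximum(s * s - s[-1] ** 2, 0.0)
            Q = np.sqrt(s2)[:, None] * Vt    # last sketch row becomes zero
            zero = [ell - 1]
        Q[zero[0]] = row
    return Q

# Usage: the sketch's Gram matrix tracks A's Gram matrix.
rng = np.random.default_rng(3)
A = rng.standard_normal((500, 20))
Q = frequent_directions(A, ell=10)
gap = np.linalg.norm(A.T @ A - Q.T @ Q, 2)
bound = 2 * np.linalg.norm(A) ** 2 / 10    # 2 ||A||_F^2 / ell
```

The sketch never stores more than ℓ rows, which is what makes the algorithm deterministic and streaming-friendly.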
Fast and Memory Optimal Low-Rank Matrix Approximation
"... Abstract: In this paper, we revisit the problem of constructing a near-optimal rank-k approximation of a matrix M ∈ [0, 1]^{m×n} under the streaming data model where the columns of M are revealed sequentially. We present SLA (Streaming Low-rank Approximation), an algorithm that is asymptotically acc ..."
Fast Computation of Low Rank Matrix Approximations
, 2001
"... In many practical applications, given an m × n matrix A it is of interest to find an approximation to A that has low rank. We introduce a technique that exploits spectral structure in A to accelerate Orthogonal Iteration and Lanczos Iteration, the two most common methods for computing such approximat ..."
Cited by 165 (5 self)
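Plain (unaccelerated) Orthogonal Iteration, one of the two baseline methods this abstract names, can be sketched as below; the acceleration technique the paper introduces is not reproduced here, and the iteration count is an arbitrary choice.

```python
import numpy as np

def orthogonal_iteration(A, k, iters=100, seed=0):
    """Plain orthogonal (subspace) iteration on A^T A: repeatedly multiply
    an n x k orthonormal block and re-orthonormalize with QR."""
    rng = np.random.default_rng(seed)
    Q = np.linalg.qr(rng.standard_normal((A.shape[1], k)))[0]
    for _ in range(iters):
        Q = np.linalg.qr(A.T @ (A @ Q))[0]
    return Q

# Usage: the projection error approaches the optimal rank-k error.
rng = np.random.default_rng(4)
A = rng.standard_normal((60, 40))
k = 5
Q = orthogonal_iteration(A, k)
err = np.linalg.norm(A - (A @ Q) @ Q.T)
s = np.linalg.svd(A, compute_uv=False)
opt = np.sqrt((s[k:] ** 2).sum())          # optimal rank-k Frobenius error
```

Convergence is governed by the ratio of the (k+1)-th to the k-th singular value, which is exactly the regime where acceleration techniques like the one in this paper pay off.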
A Scalable Approach to Column-Based Low-Rank Matrix Approximation
"... In this paper, we address the column-based low-rank matrix approximation problem using a novel parallel approach. Our approach is based on the divide-and-combine idea. We first perform column selection on submatrices of an original data matrix in parallel, and then combine the selected columns into t ..."
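The divide-and-combine idea can be sketched as follows, with the column blocks processed sequentially here for clarity rather than in parallel. Selecting columns by Euclidean norm is a stand-in assumption for this sketch; the paper's actual column-selection rule may differ.

```python
import numpy as np

def top_c_by_norm(A, cols, c):
    """Keep the c columns (from the given index list) with largest norm."""
    norms = np.linalg.norm(A[:, cols], axis=0)
    keep = np.argsort(norms)[::-1][:c]
    return [cols[i] for i in keep]

def divide_and_combine(A, c, blocks=4):
    """Select c columns within each block of columns, then select c overall
    from the combined per-block winners."""
    picked = []
    for chunk in np.array_split(np.arange(A.shape[1]), blocks):
        picked += top_c_by_norm(A, list(chunk), c)
    return top_c_by_norm(A, picked, c)

# Usage: a rank-3 matrix is captured by the span of the selected columns.
rng = np.random.default_rng(5)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
cols = divide_and_combine(A, c=6)
Q, _ = np.linalg.qr(A[:, cols])
err = np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A)
```

Each block's selection touches only its own submatrix, which is what makes the first stage embarrassingly parallel.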
Low rank matrix approximation in linear time
, 2006
"... Given a matrix M with n rows and d columns, and fixed k and ε, we present an algorithm that in linear time (i.e., O(N)) computes a rank-k matrix B with approximation error ‖M − B‖²_F ≤ (1 + ε)µ_opt(M, k), where N = nd is the input size, and µ_opt(M, k) is the minimum error of a rank-k approximation ..."
Cited by 24 (0 self)
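The benchmark µ_opt(M, k) in the bound above is, by the Eckart–Young theorem, achieved by truncating the SVD to the top k singular values, so its square equals the sum of the discarded squared singular values. A minimal check of that identity:

```python
import numpy as np

def best_rank_k(M, k):
    """Eckart-Young optimum: truncate the SVD to the top k singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k]

rng = np.random.default_rng(6)
M = rng.standard_normal((30, 20))
k = 4
B = best_rank_k(M, k)
s = np.linalg.svd(M, compute_uv=False)
mu_opt_sq = (s[k:] ** 2).sum()             # mu_opt(M, k)^2
err_sq = np.linalg.norm(M - B) ** 2
```

This SVD baseline costs far more than the O(N) the entry claims for its algorithm; it is the quality reference, not a competitor on running time.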