Results 1 - 10 of 604
On Approximate Reasoning Capabilities of Low-Rank Vector Spaces
"... In relational databases, relations between objects, represented by binary matrices or tensors, may be arbitrarily complex. In practice however, there are recurring relational patterns such as transitive, permutation, and sequential relationships, that have a regular structure which is not captured b ..."
Cited by 3 (2 self)
the field of information complexity called sign rank. Sign rank is a more appropriate complexity measure as it is low for transitive, permutation, or sequential relationships, while being suitably large, with a high probability, for uniformly sampled binary matrices/tensors.
Efficient SVM training using low-rank kernel representations
 Journal of Machine Learning Research
, 2001
"... SVM training is a convex optimization problem which scales with the training set size rather than the feature space dimension. While this is usually considered to be a desired quality, in large scale problems it may cause training to be impractical. The common techniques to handle this difficulty ba ..."
Cited by 240 (3 self)
method (IPM) in terms of storage requirements as well as computational complexity. We then suggest an efficient use of a known factorization technique to approximate a given kernel matrix by a low rank matrix, which in turn will be used to feed the optimizer. Finally, we derive an upper bound
Robust Recovery of Subspace Structures by Low-Rank Representation
"... In this work we address the subspace recovery problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to segment the samples into their respective subspaces and correct the possible errors as well. To this end, we propose a novel method ter ..."
Cited by 128 (24 self)
termed Low-Rank Representation (LRR), which seeks the lowest-rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that LRR well solves the subspace recovery problem: when the data is clean, we prove
Adaptive Duplicate Detection Using Learnable String Similarity Measures
 In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2003)
, 2003
"... The problem of identifying approximately duplicate records in databases is an essential step for data cleaning and data integration processes. Most existing approaches have relied on generic or manually tuned distance metrics for estimating the similarity of potential duplicates. In this paper, we p ..."
Cited by 344 (14 self)
's domain. We present two learnable text similarity measures suitable for this task: an extended variant of learnable string edit distance, and a novel vector-space based measure that employs a Support Vector Machine (SVM) for training. Experimental results on a range of datasets show that our framework can
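As a point of reference for the learnable measures this entry describes, the classic fixed (non-learnable) string edit distance that it extends can be computed by dynamic programming. A minimal sketch (function name is illustrative):

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum number of single-character
    insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))        # distances from a[:0] to b[:j]
    for i, ca in enumerate(a, 1):
        cur = [i]                         # distance from a[:i] to b[:0]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete ca
                           cur[j - 1] + 1,             # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]
```

The learnable variant in the paper replaces the fixed unit costs above with weights estimated from labeled duplicate pairs.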
Relative Errors for Deterministic Low-Rank Matrix Approximations
 In SODA
, 2014
"... We consider processing an n × d matrix A in a stream with row-wise updates according to a recent algorithm called Frequent Directions (Liberty, KDD 2013). This algorithm maintains an ℓ × d matrix Q deterministically, processing each row in O(dℓ²) time; the processing time can be decreased t ..."
Cited by 14 (2 self)
to O(dℓ) with a slight modification in the algorithm and a constant increase in space. Then for any unit vector x, the matrix Q satisfies 0 ≤ ‖Ax‖² − ‖Qx‖² ≤ ‖A‖²_F/ℓ. We show that if one sets ℓ = k + k/ε and returns Q_k, a k × d matrix that is simply the top k rows of Q, then we achieve the following properties:
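The sketch this entry analyzes can be illustrated with a minimal NumPy implementation of the underlying Frequent Directions algorithm. This follows Liberty's original KDD 2013 variant (with its weaker 2‖A‖²_F/ℓ guarantee), not necessarily the tightened analysis of this paper; variable names are illustrative:

```python
import numpy as np

def frequent_directions(A, ell):
    """Maintain an ell x d sketch Q of the rows of A so that, for any
    unit vector x, 0 <= ||Ax||^2 - ||Qx||^2 <= 2 * ||A||_F^2 / ell."""
    _, d = A.shape
    assert ell <= d, "the sketch is only useful when ell <= d"
    Q = np.zeros((ell, d))
    nxt = 0                              # first empty row of the sketch
    for row in A:
        if nxt == ell:                   # sketch full: rotate and shrink
            _, s, Vt = np.linalg.svd(Q, full_matrices=False)
            delta = s[ell // 2] ** 2     # squared median singular value
            s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
            Q = s[:, None] * Vt          # rows ell//2 onward are now zero
            nxt = ell // 2
        Q[nxt] = row
        nxt += 1
    return Q
```

Each shrink step zeroes out the bottom half of the sketch rows, so a full SVD is only needed once every ℓ/2 insertions.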
Sharp analysis of low-rank kernel matrix approximations
 JMLR: Workshop and Conference Proceedings, Vol. 30 (2013), 1–25
, 2013
"... We consider supervised learning problems within the positive-definite kernel framework, such as kernel ridge regression, kernel logistic regression or the support vector machine. With kernels leading to infinite-dimensional feature spaces, a common practical limiting difficulty is the necessity of c ..."
Cited by 13 (1 self)
of computing the kernel matrix, which most frequently leads to algorithms with running time at least quadratic in the number of observations n, i.e., O(n²). Low-rank approximations of the kernel matrix are often considered as they allow the reduction of running time complexities to O(p²n), where p
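The O(p²n) reduction described in this entry is commonly obtained with a Nyström-type low-rank factorization of the kernel matrix. A minimal sketch, assuming a Gaussian (RBF) kernel and uniformly sampled landmark columns (the paper itself analyzes such approximations rather than prescribing this exact code):

```python
import numpy as np

def nystrom_factor(X, p, gamma=1.0, seed=0):
    """Nystrom factorization: returns L (n x p) with K ~= L @ L.T,
    without ever forming the full n x n kernel matrix K."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=p, replace=False)   # landmark points

    def rbf(A, B):                               # Gaussian (RBF) kernel
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    C = rbf(X, X[idx])                           # n x p cross-kernel
    W = C[idx]                                   # p x p landmark kernel
    # symmetric inverse square root of W via eigendecomposition
    w, V = np.linalg.eigh(W)
    w = np.maximum(w, 1e-12)                     # clip for stability
    return C @ (V / np.sqrt(w)) @ V.T            # L = C W^{-1/2}
```

Downstream learners then work with the n × p factor L in place of the full kernel matrix, which is where the O(p²n) training cost comes from.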
Lecture 5: Randomized methods for low-rank approximation
, 2014
"... Goal: Given an m × n matrix A, we seek to compute a rank-k approximation, with k ≪ n, A ≈ U D V∗ = ∑_{j=1}^{k} σ_j u_j v_j∗, where U is m × k, D is k × k, and V∗ is k × n, σ1 ≥ σ2 ≥ ··· ≥ σk ≥ 0 are the (approximate) singular values of A, u1, u2, ..., uk are orthonormal, the (approximate) left singu ..."
singular vectors of A, and v1, v2, ..., vk are orthonormal, the (approximate) right singular vectors of A. The methods presented are capable of solving other closely related problems: • Interpolative decompositions, A ≈ U A(row), where A(row) consists of k rows of A. • Partial LU-factorization, A ≈ L(k) U
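The basic randomized scheme behind lectures like this one (a Gaussian range finder followed by a small exact SVD, in the style of Halko, Martinsson, and Tropp) can be sketched in a few lines; parameter names are illustrative, and no power iterations are included:

```python
import numpy as np

def randomized_lowrank(A, k, oversample=10, seed=0):
    """Randomized rank-k approximation A ~= U @ diag(s) @ Vt."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))  # test matrix
    Y = A @ Omega                        # sample the range of A
    Q, _ = np.linalg.qr(Y)               # orthonormal basis, m x (k+p)
    B = Q.T @ A                          # project into the small space
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub                           # lift back to m dimensions
    return U[:, :k], s[:k], Vt[:k]
```

The expensive step is the two passes over A (one multiply, one projection); everything else works on (k + p)-sized objects.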
Low-rank matrix completion by Riemannian optimization
 ANCHP-MATHICSE, Mathematics Section, École Polytechnique Fédérale de
"... The matrix completion problem consists of finding or approximating a low-rank matrix based on a few samples of this matrix. We propose a novel algorithm for matrix completion that minimizes the least square distance on the sampling set over the Riemannian manifold of fixed-rank matrices. The algorit ..."
Cited by 40 (4 self)
The matrix completion problem consists of finding or approximating a low-rank matrix based on a few samples of this matrix. We propose a novel algorithm for matrix completion that minimizes the least square distance on the sampling set over the Riemannian manifold of fixed-rank matrices
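A simple non-Riemannian baseline for the same problem is singular value projection: alternate between fitting the observed entries and projecting back to rank k. This is an illustrative sketch of the problem setup only, not the manifold algorithm proposed in the paper:

```python
import numpy as np

def svp_complete(M_obs, mask, k, n_iter=300):
    """Iterative hard thresholding for matrix completion:
    min ||mask * (X - M)||_F^2  subject to  rank(X) <= k,
    where mask is 1 on observed entries and 0 elsewhere."""
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        X = X + mask * (M_obs - X)         # gradient step: fit samples
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :k] * s[:k]) @ Vt[:k]    # project back to rank k
    return X
```

The Riemannian approach of the paper avoids the full SVD per iteration by moving along the fixed-rank manifold directly.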
Large-scale convex minimization with a low-rank constraint
 In Proceedings of the 28th International Conference on Machine Learning
, 2011
"... We address the problem of minimizing a convex function over the space of large matrices with low rank. While this optimization problem is hard in general, we propose an efficient greedy algorithm and derive its formal approximation guarantees. Each iteration of the algorithm involves (approximately) ..."
Cited by 40 (1 self)
We address the problem of minimizing a convex function over the space of large matrices with low rank. While this optimization problem is hard in general, we propose an efficient greedy algorithm and derive its formal approximation guarantees. Each iteration of the algorithm involves (approximately
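For intuition about the greedy scheme this entry describes, here is a tiny sketch for the special case f(X) = ½‖X − M‖²_F: each iteration adds a rank-one term along the top singular pair of the negative gradient. For this particular f the greedy iterates coincide with the truncated SVD; the paper's algorithm handles general convex objectives:

```python
import numpy as np

def greedy_rank_steps(M, k):
    """Greedy rank-one descent for min 0.5 * ||X - M||_F^2
    over matrices X of rank at most k."""
    X = np.zeros_like(M)
    for _ in range(k):
        G = M - X                       # negative gradient at X
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        X = X + s[0] * np.outer(U[:, 0], Vt[0])  # best rank-one step
    return X
```

The appeal of this family of methods is that each iteration only needs a top singular pair, not a full decomposition.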
Authors' response to the referee's report on "Software for weighted structured low-rank approximation"
, 2013
"... We thank the referee for their relevant and useful comments. In this document, we quote in bold face comments/questions from the report. Our replies follow in ordinary print. In blue, we quote passages from the revised manuscript. • "since the authors introduce a new interface to the statistica ..."
is present and the asymptotic analysis is applicable. As illustrated by the simulation results in the revised Section 5.1, the solution of the structured low-rank approximation problem yields a consistent estimator in the errors-in-variables setup. (This is an empirical confirmation for the theoretical