Results 1-10 of 370
SmallWorld Grid, 2005
"... As the Grid matures the problem of resource discovery across communities, where resources now include computational services, is becoming more critical. The number of resources available on a worldwide grid is set to grow exponentially in much the same way as the number of static web pages on the ..."
on the WWW. We observe that the worldwide resource discovery problem can be modelled as a slowly evolving, very large sparse matrix where individual matrix elements represent nodes' knowledge of one another. Blocks in the matrix arise where nodes offer more than one service. Blocking effects also arise
A Singular Value Thresholding Algorithm for Matrix Completion, 2008
"... This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and arises in many important applications as in the task of reco ..."
Cited by 555 (22 self)
remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates {X_k} is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal
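The soft-thresholding step the snippet describes can be sketched in a few lines of NumPy. This is a hedged illustration of singular-value shrinkage only, not the paper's full SVT iteration; the function name and the threshold value are mine:

```python
import numpy as np

def svt_shrink(X, tau):
    # Soft-threshold the singular values of X: shrink each one
    # toward zero by tau and drop any that fall below tau.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# A rank-1 example: shrinkage keeps the result low-rank.
M = np.outer([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0])
M_shrunk = svt_shrink(M, 0.5)
```

Because shrinkage can only reduce singular values, the output's rank never exceeds the input's, which is one reason the iterates stay cheap to store.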
The Dantzig selector: statistical estimation when p is much larger than n, 2005
"... In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Ax + z, where x ∈ R p is a parameter vector of interest, A is a data matrix with possibly far fewer rows than columns, n ≪ ..."
Cited by 879 (14 self)
, where r is the residual vector y − Ax̃ and t is a positive scalar. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector x is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability
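The constraint the snippet alludes to, that every column of A should be nearly uncorrelated with the residual, is easy to check directly. A minimal sketch (function name and the example values are illustrative):

```python
import numpy as np

def dantzig_feasible(A, y, x, t):
    # The Dantzig selector constrains the correlation between each
    # column of A and the residual r = y - A x:  ||A^T r||_inf <= t.
    r = y - A @ x
    return float(np.max(np.abs(A.T @ r))) <= t

A = np.eye(3)
y = np.array([1.0, 2.0, 3.0])
ok = dantzig_feasible(A, y, y.copy(), 0.01)   # exact fit: feasible
```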
Sequential minimal optimization: A fast algorithm for training support vector machines
Advances in Kernel Methods: Support Vector Learning, 1999
"... This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possi ..."
Cited by 461 (3 self)
possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because matrix computation
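The analytic two-variable solve the snippet describes has a closed form (Platt's update). A sketch under the usual notation, where E1 and E2 are prediction errors on the two chosen points, eta = K11 + K22 - 2*K12 > 0, and C is the box constraint; all names here are my shorthand, not the paper's code:

```python
def smo_pair_update(a1, a2, y1, y2, E1, E2, eta, C):
    # Solve the two-multiplier QP subproblem analytically:
    # move a2 along the linear constraint, clip it to its box [L, H],
    # then recover a1 so that y1*a1 + y2*a2 stays constant.
    if y1 != y2:
        L, H = max(0.0, a2 - a1), min(C, C + a2 - a1)
    else:
        L, H = max(0.0, a1 + a2 - C), min(C, a1 + a2)
    a2_new = min(max(a2 + y2 * (E1 - E2) / eta, L), H)
    a1_new = a1 + y1 * y2 * (a2 - a2_new)
    return a1_new, a2_new
```

The clipping is what makes the step exact rather than approximate: within the box, the unconstrained optimum along the constraint line is available in closed form.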
Hierarchical Probing for Estimating the Trace of the Matrix Inverse on Toroidal Lattices
"... Abstract. The standard approach for computing the trace of the inverse of a very large, sparse matrix A is to view the trace as the mean value of matrix quadratures, and use the Monte Carlo algorithm to estimate it. This approach is heavily used in our motivating application of Lattice QCD. Often, t ..."
Cited by 1 (0 self)
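The Monte Carlo approach the abstract refers to is typically the Hutchinson estimator, which needs only linear solves with A rather than an explicit inverse. A small dense sketch; the matrix and the sample count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[3.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 3.0]])   # toy SPD matrix; tr(A^-1) = 25/21

# Hutchinson estimator: for Rademacher vectors z (entries +-1),
# E[z^T A^-1 z] = tr(A^-1); each sample costs one linear solve.
samples = []
for _ in range(20000):
    z = rng.choice([-1.0, 1.0], size=3)
    samples.append(z @ np.linalg.solve(A, z))
trace_estimate = float(np.mean(samples))
```

Probing methods like the paper's reduce the variance of exactly this estimator by choosing structured vectors z instead of purely random ones.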
A Stabilized Sparse-Matrix U-D Square-Root Implementation of a Large-State Extended Kalman Filter
"... 1. introduction and motivation ThC fu]l nonlinear Kalman filter (KI;) sequential algorithm is, ill theory, wellsuited to the fourdimensional data assimilation problem in largescale atmospheric and oceanic problems (C]hil eZ al. 1981, Ghil and MalanotteI<i z?.oli 199 1). Soon after Kalman’s (1 ..."
Probabilistic Matrix Factorization
"... Many existing approaches to collaborative filtering can neither handle very large datasets nor easily deal with users who have very few ratings. In this paper we present the Probabilistic Matrix Factorization (PMF) model which scales linearly with the number of observations and, more importantly, pe ..."
Cited by 287 (5 self)
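The linear-in-observations scaling the snippet claims comes from iterating only over observed ratings, never the full user-item matrix. A minimal MAP-by-SGD sketch of a PMF-style factorization; the dimensions, learning rate, and regularization strength are illustrative, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, k = 5, 4, 2
U = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
V = 0.1 * rng.standard_normal((n_items, k))   # item latent factors
ratings = [(0, 1, 4.0), (2, 3, 1.0), (4, 0, 5.0)]  # (user, item, value)
lam, lr = 0.05, 0.05   # Gaussian-prior strength, step size

# Each epoch touches only the observed ratings, so the cost per
# epoch is linear in the number of observations.
for _ in range(500):
    for i, j, r in ratings:
        ui = U[i].copy()
        e = r - ui @ V[j]          # prediction error on this rating
        U[i] += lr * (e * V[j] - lam * ui)
        V[j] += lr * (e * ui - lam * V[j])
```

The lam terms implement the Gaussian priors on the factors; without them the model can fit users with very few ratings arbitrarily badly.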
Sparse Matrix Methods in Optimization, 1984
"... Optimization algorithms typically require the solution of many systems of linear equations Bkyk b,. When large numbers of variables or constraints are present, these linear systems could account for much of the total computation time. Both direct and iterative equation solvers are needed in practi ..."
Cited by 19 (4 self)
sequence of modified systems (e.g., the product-form update; use of the Schur complement). The speed of factorizing a matrix then becomes relatively less important than the efficiency of subsequent solves with very many right-hand sides. At the same time, we hope that future improvements to linear
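The point about many right-hand sides can be illustrated even without a sparse factorization package: passing all right-hand sides at once lets one factorization be reused, so each additional solve is cheap. Matrix sizes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 200, 50
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned
B = rng.standard_normal((n, m))                   # m right-hand sides

# One call factors A once (O(n^3)) and back-substitutes for all
# m columns of B (O(n^2) per column), instead of refactoring m times.
X = np.linalg.solve(A, B)
```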
Multidimensional Sparse Matrix Storage
"... Large sparse matrices play important role in many modern information retrieval methods. These methods, such as clustering, latent semantic indexing, performs huge number of computations with such matrices, thus their implementation should be very carefully designed. In this paper we discuss three im ..."
Cited by 1 (0 self)
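For context, one widely used storage scheme is compressed sparse row (CSR), which keeps only the nonzeros plus two index arrays. This sketch is a generic illustration and not necessarily one of the three implementations the paper compares:

```python
import numpy as np

def to_csr(dense):
    # CSR keeps three arrays: the nonzero values (data), their column
    # indices, and offsets (indptr) marking where each row starts.
    data, indices, indptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                data.append(v)
                indices.append(j)
        indptr.append(len(data))
    return np.array(data), np.array(indices), np.array(indptr)

M = [[1, 0, 2],
     [0, 0, 3],
     [4, 5, 0]]
data, indices, indptr = to_csr(M)
```

Row i of the matrix is then recoverable from `data[indptr[i]:indptr[i+1]]` and the matching slice of `indices`.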
All-frequency shadows using non-linear wavelet lighting approximation
ACM Transactions on Graphics, 2003
"... We present a method, based on precomputed light transport, for realtime rendering of objects under allfrequency, timevarying illumination represented as a highresolution environment map. Current techniques are limited to small area lights, with sharp shadows, or large lowfrequency lights, with ..."
Cited by 188 (25 self)
, with very soft shadows. Our main contribution is to approximate the environment map in a wavelet basis, keeping only the largest terms (this is known as a nonlinear approximation). We obtain further compression by encoding the light transport matrix sparsely but accurately in the same basis. Rendering
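The "keeping only the largest terms" step the snippet names is simple to state in code. A generic sketch of non-linear (best k-term) approximation over any coefficient vector; the wavelet transform itself is omitted:

```python
import numpy as np

def best_k_term(coeffs, k):
    # Non-linear approximation: zero every coefficient except the k
    # of largest magnitude. Which k survive depends on the input,
    # which is what makes the approximation non-linear.
    out = np.zeros_like(coeffs)
    keep = np.argsort(np.abs(coeffs))[-k:]
    out[keep] = coeffs[keep]
    return out
```

For signals whose coefficients decay quickly in the chosen basis, a small k captures most of the energy, which is what makes the environment-map compression effective.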