Results 1-10 of 1,103,376
The Cache Performance and Optimizations of Blocked Algorithms
 In Proceedings of the Fourth International Conference on Architectural Support for Programming Languages and Operating Systems
, 1991
"... Blocking is a well-known optimization technique for improving the effectiveness of memory hierarchies. Instead of operating on entire rows or columns of an array, blocked algorithms operate on submatrices or blocks, so that data loaded into the faster levels of the memory hierarchy are reused. This ..."
Cited by 568 (5 self)
"... is highly sensitive to the stride of data accesses and the size of the blocks, and can cause wide variations in machine performance for different matrix sizes. The conventional wisdom of trying to use the entire cache, or even a fixed fraction of the cache, is incorrect. If a fixed block size is used for a ..."
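The blocked (tiled) loop structure this abstract describes can be sketched in plain Python; the block size `B` and the function name are illustrative choices, not values from the paper:

```python
def blocked_matmul(A, X, B=2):
    """Multiply square matrices A and X using loop blocking (tiling).

    Instead of streaming entire rows or columns, the outer loops walk
    B-by-B submatrices, so each block stays resident in the faster
    levels of the memory hierarchy while it is reused.  The best B is
    machine-dependent (the paper's point); B=2 is only for illustration.
    """
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, B):              # block row of C
        for kk in range(0, n, B):          # block of the shared dimension
            for jj in range(0, n, B):      # block column of C
                for i in range(ii, min(ii + B, n)):
                    for k in range(kk, min(kk + B, n)):
                        a = A[i][k]        # reused across the whole j loop
                        for j in range(jj, min(jj + B, n)):
                            C[i][j] += a * X[k][j]
    return C
```

With `B` tuned to the cache, each tile of `X` is reused many times before eviction, which is the reuse the abstract refers to; the excerpt's point is that the best `B` varies with matrix size and cache geometry rather than being a fixed fraction of the cache.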
Shape and motion from image streams under orthography: a factorization method
 INTERNATIONAL JOURNAL OF COMPUTER VISION
, 1992
"... Inferring scene geometry and camera motion from a stream of images is possible in principle, but is an ill-conditioned problem when the objects are distant with respect to their size. We have developed a factorization method that can overcome this difficulty by recovering shape and motion under orth ..."
Cited by 1080 (38 self)
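The first stage of the factorization idea, splitting a measurement matrix into rank-3 motion and shape factors, can be sketched with NumPy; this omits the paper's metric-upgrade step, and all names here are illustrative:

```python
import numpy as np

def rank3_factor(W):
    """Factor a registered 2F x P measurement matrix W into motion
    (2F x 3) and shape (3 x P) estimates via a rank-3 truncated SVD,
    the first stage of the factorization method.  The metric
    constraints that resolve the remaining linear ambiguity are
    omitted from this sketch.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    root = np.sqrt(s[:3])
    M = U[:, :3] * root          # motion estimate (scales the 3 columns)
    S = root[:, None] * Vt[:3]   # shape estimate (scales the 3 rows)
    return M, S
```

For a noise-free orthographic image stream the measurement matrix is exactly rank 3, so `M @ S` reproduces `W`; with noise, the truncation is the best rank-3 fit in the least-squares sense.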
Multivariable Feedback Control: Analysis and Design
, 2005
"... multi-input, multi-output feedback control design for linear systems using the paradigms, theory, and tools of robust control that have arisen during the past two decades. The book is aimed at graduate students and practicing engineers who have a basic knowledge of classical control design and st ..."
Cited by 528 (24 self)
"... and state-space control theory for linear systems. A basic knowledge of matrix theory and linear algebra is required to appreciate and digest the material offered. This edition is a revised and expanded version of the first edition, which was published in 1996. The size of the ..."
How much should we trust differencesindifferences estimates?
, 2003
"... Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on femal ..."
Cited by 777 (1 self)
"... into account the autocorrelation of the data) works well when the number of states is large enough. Two corrections based on asymptotic approximation of the variance-covariance matrix work well for moderate numbers of states and one correction that collapses the time series information into a “pre” and “post ..."
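The last correction mentioned, collapsing each state's time series into a single "pre" and "post" observation, can be sketched as follows (the data layout and names are illustrative, not the authors' code):

```python
def collapse_pre_post(series, law_year):
    """Collapse one state's yearly outcomes into a pre-law mean and a
    post-law mean, discarding the within-period time series whose
    serial correlation biases the naive DD standard errors.

    `series` maps year -> outcome; the layout is illustrative only.
    """
    pre = [v for year, v in series.items() if year < law_year]
    post = [v for year, v in series.items() if year >= law_year]
    return sum(pre) / len(pre), sum(post) / len(post)
```

The DD estimate is then computed on the collapsed two-period panel, so each state contributes one pre and one post observation instead of a serially correlated run of years.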
Stable Signal Recovery from Incomplete and Inaccurate Measurements
, 2006
"... Suppose we wish to recover a vector x0 ∈ Rm (e.g., a digital signal or image) from incomplete and contaminated observations y = Ax0 + e; A is an n × m matrix with far fewer rows than columns (n ≪ m) and e is an error term. Is it possible to recover x0 accurately based on the data y? To recover x0, we ..."
Cited by 1363 (38 self)
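A common way to attempt such a recovery is ℓ1-regularized least squares; below is a minimal iterative soft-thresholding (ISTA) sketch of that idea, assuming NumPy. It is a generic ℓ1-recovery scheme, not the specific convex program analyzed in the paper:

```python
import numpy as np

def ista(A, y, lam=0.05, steps=500):
    """Approximately recover a sparse x from y = A @ x0 + e by
    iterative soft-thresholding applied to

        min_x  0.5 * ||A x - y||_2^2 + lam * ||x||_1 .

    A generic l1-recovery sketch: gradient step on the quadratic term,
    then the soft-threshold (shrinkage) operator for the l1 term.
    """
    t = 1.0 / np.linalg.norm(A, 2) ** 2            # step size 1/L (spectral norm)
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - t * A.T @ (A @ x - y)              # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # shrink
    return x
```

With the step size set to the reciprocal of the Lipschitz constant of the gradient, each iteration is guaranteed not to increase the objective, so the residual shrinks even when exact recovery is out of reach.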
Sequential minimal optimization: A fast algorithm for training support vector machines
 Advances in Kernel Methods - Support Vector Learning
, 1999
"... This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possi ..."
Cited by 451 (3 self)
"... possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because matrix computation ..."
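The "smallest possible" QP subproblem in SMO involves just two Lagrange multipliers, and its analytic solution can be sketched as follows. This is the textbook two-variable update only; Platt's multiplier-selection heuristics, threshold update, and the eta ≤ 0 corner case are omitted, and the parameter names are illustrative:

```python
def smo_pair_update(a1, a2, y1, y2, E1, E2, K11, K12, K22, C):
    """Analytically optimize two Lagrange multipliers (a1, a2) jointly
    while all others stay fixed, which is SMO's smallest possible QP.

    y1, y2 are labels (+1/-1), E1, E2 the current prediction errors,
    Kij kernel entries, and C the box constraint.  Assumes eta > 0,
    the usual case; the full algorithm handles eta <= 0 separately.
    """
    # Feasible segment [L, H] that keeps y1*a1 + y2*a2 constant
    # while both multipliers stay inside the box [0, C].
    if y1 == y2:
        L, H = max(0.0, a1 + a2 - C), min(C, a1 + a2)
    else:
        L, H = max(0.0, a2 - a1), min(C, C + a2 - a1)
    eta = K11 + K22 - 2.0 * K12            # curvature along the segment
    a2_new = a2 + y2 * (E1 - E2) / eta     # unconstrained optimum
    a2_new = min(H, max(L, a2_new))        # clip to the feasible segment
    a1_new = a1 + y1 * y2 * (a2 - a2_new)  # restore the linear constraint
    return a1_new, a2_new
```

Because each such subproblem is solved in closed form, the inner loop needs no numerical QP solver, which is the point the excerpt makes.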
Using the Nyström Method to Speed Up Kernel Machines
 Advances in Neural Information Processing Systems 13
, 2001
"... A major problem for kernel-based predictors (such as Support Vector Machines and Gaussian processes) is that the amount of computation required to find the solution scales as O(n³), where n is the number of training examples. We show that an approximation to the eigendecomposition of the Gram matrix ..."
Cited by 414 (6 self)
"... matrix can be computed by the Nyström method (which is used for the numerical solution of eigenproblems). This is achieved by carrying out an eigendecomposition on a smaller system of size m < n, and then expanding the results back up to n dimensions. The computational complexity of a predictor using ..."
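The approximation described here can be sketched with NumPy: an n × m slice of the Gram matrix plus its m × m landmark block reconstruct the full matrix, exactly so when the m landmarks span the data. This is a generic Nyström sketch, not the cited paper's code:

```python
import numpy as np

def nystrom(K_nm, K_mm):
    """Nystrom approximation of an n x n Gram matrix K from an n x m
    slice K_nm and the m x m landmark block K_mm:

        K  ~=  K_nm @ pinv(K_mm) @ K_nm.T

    so only an m x m eigenproblem (inside the pseudoinverse) is
    solved instead of the full n x n one.
    """
    return K_nm @ np.linalg.pinv(K_mm) @ K_nm.T
```

When the Gram matrix has rank at most m and the landmarks span that subspace, the reconstruction is exact, which makes the low-rank case a convenient sanity check.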
Spectral clustering for a large data set by reducing the similarity matrix size
 In Proc. of the 6th Int. Conf. on Language Resources and Evaluation (LREC)
, 2008
"... Spectral clustering is a powerful clustering method for document data sets. However, spectral clustering needs to solve an eigenvalue problem of the matrix converted from the similarity matrix corresponding to the data set. Therefore, it is not practical to use spectral clustering for a large data se ..."
Cited by 3 (0 self)
"... set. To overcome this problem, we propose a method to reduce the similarity matrix size. First, using k-means, we obtain a clustering result for the given data set. From each cluster, we pick up some data, which are near to the center of the cluster. We take these data as one data. We call ..."
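The reduction step described here can be sketched as follows, assuming a k-means pass has already produced labels and centroids. One-dimensional points are used for brevity, and the names are illustrative:

```python
def pick_representatives(points, labels, centroids, per_cluster=1):
    """After a coarse k-means pass, keep only the points nearest each
    centroid and treat them as stand-ins for their whole cluster.
    Spectral clustering then eigendecomposes a similarity matrix over
    the representatives only, which is far smaller than n x n.

    `points` are 1-D values, `labels[i]` is the cluster of points[i],
    and `centroids[c]` is the k-means center of cluster c.
    """
    reps = []
    for c, centroid in enumerate(centroids):
        members = [p for p, l in zip(points, labels) if l == c]
        members.sort(key=lambda p: abs(p - centroid))  # nearest first
        reps.extend(members[:per_cluster])
    return reps
```

The similarity matrix built over these representatives has one row per kept point rather than per original point, which is the size reduction the abstract proposes.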
The benefits of coding over routing in a randomized setting
 In Proceedings of 2003 IEEE International Symposium on Information Theory
, 2003
"... Abstract — We present a novel randomized coding approach for robust, distributed transmission and compression of information in networks. We give a lower bound on the success probability of a random network code, based on the form of transfer matrix determinant polynomials, that is tighter than the ..."
Cited by 349 (42 self)
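The quantity being bounded, the probability that a random transfer matrix is invertible so that receivers can decode, can be estimated by Monte Carlo simulation over GF(2). This is an illustrative sketch of the success event, not the paper's analytical construction:

```python
import random

def gf2_rank(rows):
    """Rank over GF(2) of a matrix whose rows are int bitmasks."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot:
            rank += 1
            lsb = pivot & -pivot                      # lowest set bit
            rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

def decodable_fraction(n, trials, seed=0):
    """Monte Carlo estimate of the probability that a random n x n
    coefficient matrix over GF(2) is invertible, i.e. that a receiver
    of a random linear network code can decode.  The cited paper
    bounds this success probability analytically; here we just sample.
    """
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        rows = [rng.getrandbits(n) for _ in range(n)]
        ok += gf2_rank(rows) == n
    return ok / trials
```

For growing n the estimate approaches the classical limit ∏ᵢ(1 − 2⁻ⁱ) ≈ 0.289; coding over a larger field than GF(2) pushes the success probability toward 1, which is why practical random network codes use larger fields.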
A review of algebraic multigrid
, 2001
"... Since the early 1990s, there has been a strongly increasing demand for more efficient methods to solve large sparse, unstructured linear systems of equations. For practically relevant problem sizes, classical one-level methods had already reached their limits and new hierarchical algorithms had to b ..."
Cited by 344 (11 self)