Results 1–10 of 760
QR Factorization of Tall and Skinny Matrices in a Grid Computing Environment
Cited by 20 (7 self)
"... Previous studies have reported that common dense linear algebra operations do not achieve speedup by using multiple geographical sites of a computational grid. Because such operations are the building blocks of most scientific applications, conventional supercomputers are still strongly predominant ... that trade flops for communication. In this paper, we present a new approach for computing a QR factorization – one of the main dense linear algebra kernels – of tall and skinny matrices in a grid computing environment that overcomes these two bottlenecks. Our contribution is to articulate a recently ..."
Direct QR Factorizations for Tall-and-skinny Matrices in MapReduce Architectures
in arXiv:1301.1071 [cs.DC], 2013
Cited by 5 (1 self)
"... The QR factorization and the SVD are two fundamental matrix decompositions with applications throughout scientific computing and data analysis. For matrices with many more rows than columns, so-called "tall-and-skinny matrices," there is a numerically stable, efficient, communication-avoiding ..."
Tall and Skinny QR Factorizations in MapReduce Architectures
in Proceedings of the Second International Workshop on MapReduce and its Applications, 2011
Cited by 8 (3 self)
"... The QR factorization is one of the most important and useful matrix factorizations in scientific computing. A recent communication-avoiding version of the QR factorization trades flops for messages and is ideal for MapReduce, where computationally intensive processes operate locally on subsets of the data. We present an implementation of the tall and skinny QR (TSQR) factorization in the MapReduce framework, and we provide computational results for nearly terabyte-sized datasets. These tasks run in just a few minutes under a variety of parameter choices. ..."
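Several of these results revolve around the same TSQR idea: factor row blocks of a tall-and-skinny matrix independently, then combine the small R factors with a second QR. A minimal one-level NumPy sketch of that scheme (function and parameter names are illustrative, not taken from any of the papers listed):

```python
import numpy as np

def tsqr(A, block_rows=4):
    """One-level TSQR sketch: factor row blocks independently,
    then combine their R factors with a second QR.

    A is m-by-n with m >> n. Each block A_i = Q_i R_i is computed
    locally; stacking the R_i and factoring the stack yields R of A.
    """
    m, n = A.shape
    blocks = np.array_split(A, max(m // block_rows, 1), axis=0)
    qs, rs = zip(*(np.linalg.qr(b) for b in blocks))
    R_stack = np.vstack(rs)            # (p*n)-by-n stack of local R factors
    Q2, R = np.linalg.qr(R_stack)      # combine step: one small QR
    # Recover Q of A by applying the combine-step Q back to each block.
    offsets = np.cumsum([0] + [r.shape[0] for r in rs])
    Q = np.vstack([q @ Q2[offsets[i]:offsets[i + 1], :]
                   for i, q in enumerate(qs)])
    return Q, R
```

In a MapReduce or grid setting the per-block factorizations run in the map phase and only the small n-by-n R factors travel over the network, which is the flops-for-communication trade these abstracts describe.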
Tall and Skinny QR Matrix Factorization Using Tile Algorithms on Multicore Architectures
Cited by 1 (1 self)
"... To exploit the potential of multicore architectures, recent dense linear algebra libraries have used tile algorithms, which consist of scheduling a Directed Acyclic Graph (DAG) of tasks of fine granularity, where nodes represent tasks (either panel factorization or update of a block-column) and edges represent dependencies among them. Although past approaches already achieve high performance on moderate and large square matrices, their way of processing a panel in sequence leads to limited performance when factorizing tall and skinny matrices or small square matrices. We present a fully ..."
ENHANCING PERFORMANCE OF TALL-SKINNY QR FACTORIZATION USING FPGAS
"... Communication-avoiding linear algebra algorithms with low communication latency and high memory bandwidth requirements, like Tall-Skinny QR factorization (TSQR), are highly appropriate for acceleration using FPGAs. TSQR parallelizes QR factorization of tall-skinny matrices in a divide-and-conquer fashion ..."
Communication-avoiding QR Decomposition for GPU
in GPU Technology Conference, Research Poster A01, 2010
Cited by 19 (2 self)
"... We describe an implementation of the Communication-Avoiding QR (CAQR) factorization that runs entirely on a single graphics processor (GPU). We show that the reduction in memory traffic provided by CAQR allows us to outperform existing parallel GPU implementations of QR for a large class of tall-skinny matrices. Other GPU implementations of QR handle panel factorizations by either sending the work to a general-purpose processor or using entirely bandwidth-bound operations, incurring data transfer overheads. In contrast, our QR is done entirely on the GPU using compute-bound kernels ..."
Scalable Methods for Nonnegative Matrix Factorizations of Near-separable Tall-and-skinny Matrices
Cited by 1 (0 self)
"... Numerous algorithms are used for nonnegative matrix factorization under the assumption that the matrix is nearly separable. In this paper, we show how to make these algorithms scalable for data matrices that have many more rows than columns, so-called "tall-and-skinny matrices." One key component ..."
Model Reduction with MapReduce-enabled Tall and Skinny Singular Value Decomposition
in SIAM Journal on Scientific Computing
Cited by 2 (1 self)
"... We present a method for computing reduced-order models of parameterized partial differential equation solutions. The key analytical tool is the singular value expansion of the parameterized solution, which we approximate with a singular value decomposition of a parameter snapshot matrix. ..."
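The snapshot-SVD step this abstract describes (the SVD itself would be computed with the tall-and-skinny MapReduce algorithm at scale) reduces to truncating the singular value decomposition of a matrix whose columns are solutions at sampled parameters. A small NumPy sketch of that truncation, with an illustrative energy criterion that is an assumption here, not a detail from the paper:

```python
import numpy as np

def reduced_basis(snapshots, energy=0.99):
    """Return an orthonormal POD basis capturing the requested
    fraction of squared singular-value energy of the snapshot matrix."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1   # smallest rank meeting the target
    return U[:, :r], s[:r]
```

Projecting the governing equations onto this basis yields the reduced-order model; the basis size r is typically far smaller than the number of snapshots when the solution varies smoothly with the parameter.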
Scalable Methods for Nonnegative Matrix Factorizations of Near-separable Tall-and-skinny Matrices
"... • NMF Problem: X ∈ R^{m×n}_+ is a matrix with nonnegative entries, and we want to compute a nonnegative matrix factorization (NMF) X = WH, where W ∈ R^{m×r}_+ and H ∈ R^{r×n}_+. When r < m, this problem is NP-hard. • A separable matrix is one that admits a nonnegative factorization where ..."
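For a separable matrix, W can be taken to be a subset of the columns of X itself, so the hard part of the factorization reduces to identifying those columns. One standard greedy method for this (the successive projection algorithm, shown here as an illustrative NumPy sketch; the paper above may use a different selection scheme) picks the largest-norm column and projects it out, r times:

```python
import numpy as np

def spa(X, r):
    """Successive projection sketch: greedily select r columns of X
    that, for a separable matrix X = W [I, H'], recover the columns of W."""
    R = X.astype(float).copy()
    cols = []
    for _ in range(r):
        j = int(np.argmax(np.sum(R**2, axis=0)))   # largest residual column
        cols.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)                    # project out that direction
    return cols
```

Since each pass touches the data only through column norms and one rank-one update, this style of selection maps naturally onto the tall-and-skinny, row-distributed setting the abstracts describe.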