Results 1–10 of 13
A Parallelizable Eigensolver for Real Diagonalizable Matrices with Real Eigenvalues
, 1991
"... . In this paper, preliminary research results on a new algorithm for finding all the eigenvalues and eigenvectors of a real diagonalizable matrix with real eigenvalues are presented. The basic mathematical theory behind this approach is reviewed and is followed by a discussion of the numerical consi ..."
Abstract

Cited by 30 (6 self)
. In this paper, preliminary research results on a new algorithm for finding all the eigenvalues and eigenvectors of a real diagonalizable matrix with real eigenvalues are presented. The basic mathematical theory behind this approach is reviewed and is followed by a discussion of the numerical considerations of the actual implementation. The numerical algorithm has been tested on thousands of matrices on both a Cray-2 and an IBM RS/6000 Model 580 workstation. The results of these tests are presented. Finally, issues concerning the parallel implementation of the algorithm are discussed. The algorithm's heavy reliance on matrix-matrix multiplication, coupled with the divide-and-conquer nature of this algorithm, should yield a highly parallelizable algorithm. 1. Introduction. Computation of all the eigenvalues and eigenvectors of a dense matrix is essential for solving problems in many fields. The ever-increasing computational power available from modern supercomputers offers the potenti...
Large Scale Sparse Singular Value Computations
 International Journal of Supercomputer Applications
, 1992
"... . In this paper, we present four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture. We particularly emphasize Lanczos and subspace iterationbased methods for determining several of the largest singular triplets (singular ..."
Abstract

Cited by 21 (0 self)
. In this paper, we present four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture. We particularly emphasize Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left- and right-singular vectors) for sparse matrices arising from two practical applications: information retrieval and seismic reflection tomography. The target architectures for our implementations of such methods are the Cray-2S/4-128 and Alliant FX/80. The sparse SVD problem is well motivated by recent information-retrieval techniques in which dominant singular values and their corresponding singular vectors of large sparse term-document matrices are desired, and by nonlinear inverse problems from seismic tomography applications in which approximate pseudoinverses of large sparse Jacobian matrices are needed. It is hoped that this research will advance the dev...
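The Lanczos-based approach emphasized in this abstract can be sketched as follows: a minimal Golub–Kahan bidiagonalization in Python that recovers the largest singular values from a small bidiagonal factor. The dense-matrix setting, full reorthogonalization, and all names below are our illustration, not the paper's code.

```python
import numpy as np

def lanczos_bidiag(A, k, seed=0):
    """k steps of Golub-Kahan bidiagonalization with full
    reorthogonalization; returns approximate singular values of A
    (largest first) computed from the small bidiagonal factor."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    U, V = [], [v]
    alphas, betas = [], []
    u_prev, beta = np.zeros(m), 0.0
    for _ in range(k):
        u = A @ v - beta * u_prev
        for ui in U:                       # reorthogonalize left vectors
            u -= (ui @ u) * ui
        alpha = np.linalg.norm(u)
        u /= alpha
        U.append(u)
        w = A.T @ u - alpha * v
        for vi in V:                       # reorthogonalize right vectors
            w -= (vi @ w) * vi
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:                   # invariant subspace reached
            break
        v = w / beta
        V.append(v)
        u_prev = u
    kk = len(alphas)
    B = np.diag(alphas)                    # upper-bidiagonal factor
    for i in range(kk - 1):
        B[i, i + 1] = betas[i]
    return np.linalg.svd(B, compute_uv=False)
```

In practice only a few of the largest triplets are wanted, so k is kept much smaller than min(m, n) and the matrix-vector products `A @ v` and `A.T @ u` are the only places the sparse matrix is touched.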
Parallel performance of a symmetric eigensolver based on the invariant subspace decomposition approach
 In Scalable High Performance Computing Conference
, 1994
"... ..."
The PRISM Project: Infrastructure and Algorithms for Parallel Eigensolvers
, 1994
"... The goal of the PRISM project is the development of infrastructure and algorithms for the parallel solution of eigenvalue problems. We are currently investigating a complete eigensolver based on the Invariant Subspace Decomposition Algorithm for dense symmetric matrices (SYISDA). After briefly revie ..."
Abstract

Cited by 14 (6 self)
The goal of the PRISM project is the development of infrastructure and algorithms for the parallel solution of eigenvalue problems. We are currently investigating a complete eigensolver based on the Invariant Subspace Decomposition Algorithm for dense symmetric matrices (SYISDA). After briefly reviewing SYISDA, we discuss the algorithmic highlights of a distributed-memory implementation of this approach. These include a fast matrix-matrix multiplication algorithm, a new approach to parallel band reduction and tridiagonalization, and a harness for coordinating the divide-and-conquer parallelism in the problem. We also present performance results of these kernels as well as the overall SYISDA implementation on the Intel Touchstone Delta prototype. 1. Introduction Computation of eigenvalues and eigenvectors is an essential kernel in many applications, and several promising parallel algorithms have been investigated [29, 24, 3, 27, 21]. The work presented in this paper is part of the PRI...
A Parallel Implementation of the Invariant Subspace Decomposition Algorithm for Dense Symmetric Matrices
, 1993
"... . We give an overview of the Invariant Subspace Decomposition Algorithm for dense symmetric matrices (SYISDA) by first describing the algorithm, followed by a discussion of a parallel implementation of SYISDA on the Intel Delta. Our implementation utilizes an optimized parallel matrix multiplication ..."
Abstract

Cited by 12 (2 self)
. We give an overview of the Invariant Subspace Decomposition Algorithm for dense symmetric matrices (SYISDA) by first describing the algorithm, followed by a discussion of a parallel implementation of SYISDA on the Intel Delta. Our implementation utilizes an optimized parallel matrix multiplication implementation we have developed. Load balancing in the costly early stages of the algorithm is accomplished without redistribution of data between stages through the use of the block scattered decomposition. Computation of the invariant subspaces at each stage is done using a new tridiagonalization scheme due to Bischof and Sun. 1. Introduction Computation of all the eigenvalues and eigenvectors of a dense symmetric matrix is an essential kernel in many applications. The ever-increasing computational power available from parallel computers offers the potential for solving much larger problems than could have been contemplated previously. Hardware scalability of parallel machines is freque...
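As a hedged sketch of the spectral-splitting step at the core of an invariant subspace decomposition: scale a symmetric matrix so its spectrum lies in [0, 1], drive the eigenvalues toward {0, 1} with a cubic map, and split the space using the resulting projector. The Gershgorin scaling, the map p(x) = 3x² − 2x³, and all names below are our reconstruction for illustration, not the papers' implementation.

```python
import numpy as np

def split_invariant_subspaces(A, tol=1e-12, max_iter=100):
    """Split R^n into two A-invariant subspaces for symmetric A."""
    n = A.shape[0]
    # Gershgorin-style bounds enclosing the spectrum
    r = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    lo = np.min(np.diag(A) - r)
    hi = np.max(np.diag(A) + r)
    C = (A - lo * np.eye(n)) / (hi - lo)       # spectrum now in [0, 1]
    for _ in range(max_iter):                  # matrix-multiply rich loop
        C_next = 3 * C @ C - 2 * C @ C @ C     # p(x) = 3x^2 - 2x^3
        done = np.linalg.norm(C_next - C) < tol
        C = C_next
        if done:
            break
    # C is now (approximately) an orthogonal projector; its eigenvectors
    # give orthonormal bases for its range and null space, both invariant
    # under A because C is a polynomial in A.
    w, Q = np.linalg.eigh(C)
    k = int(np.sum(w > 0.5))                   # dimension of range(C)
    Q2, Q1 = Q[:, : n - k], Q[:, n - k :]
    return Q1, Q2
```

With Q = [Q1 Q2], the transformed matrix Qᵀ A Q is block diagonal, and the eigensolver recurses independently on the two (smaller) diagonal blocks, which is the divide-and-conquer structure the abstracts describe.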
Matrix Visualization in the Design of Numerical Algorithms
 ORSA Journal on Computing
, 1990
"... At the heart of much scientific computing are the algorithmic kernels often found in numerical software libraries. Numerical analysts and algorithm designers can be aided by various software tools in the design of their algorithms. We present a tool for matrix visualization and its application in th ..."
Abstract

Cited by 10 (3 self)
At the heart of much scientific computing are the algorithmic kernels often found in numerical software libraries. Numerical analysts and algorithm designers can be aided by various software tools in the design of their algorithms. We present a tool for matrix visualization and its application in the design and development of numerical algorithms for supercomputers. We discuss the development of the tool as an object-oriented distributed system and show examples of its use, including applications in linear algebra and performance monitoring. By using color computer graphics, one can gain insights into algorithm behavior, which can then be used to design more efficient numerical algorithms. Specific use in the development of hybrid parallel algorithms for the singular value decomposition is highlighted. Subject Categories/Phrases: Computer Science/interactive computer graphics for algorithm design, Mathematics/use of visualization in numerical analysis applications, Analysis of Algorith...
Multiprocessor Sparse Svd Algorithms And Applications
, 1991
"... this memory is statically allocated, whereas on the Alliant FX/80 it is dynamically allocated as needed. On the Cray2S/4128, the vector z would be both retrieved from and written to core memory. However, on the Alliant FX/80, z may be fetched and held in the 512 kilobyte cache. Since memory accesse ..."
Abstract

Cited by 9 (3 self)
this memory is statically allocated, whereas on the Alliant FX/80 it is dynamically allocated as needed. On the Cray-2S/4-128, the vector z would be both retrieved from and written to core memory. However, on the Alliant FX/80, z may be fetched and held in the 512-kilobyte cache. Since memory accesses from the cache (fast local memory) can be almost twice as fast as those from the larger globally shared memory, we achieve an overall higher computational rate for multiplication by A
Parallel Rare Term Vector Replacement: Fast and Effective Dimensionality Reduction for Text (Abridged)
"... Dimensionality reduction is an established area in text mining and information retrieval. These methods convert the highly sparse corpus matrices into dense matrix format while preserving or improving the classification accuracy or retrieval performance. In this paper, we describe a novel approach t ..."
Abstract

Cited by 1 (1 self)
Dimensionality reduction is an established area in text mining and information retrieval. These methods convert the highly sparse corpus matrices into dense matrix format while preserving or improving the classification accuracy or retrieval performance. In this paper, we describe a novel approach to dimensionality reduction for text, along with a parallel algorithm suitable for private-memory parallel computer systems. According to Zipf’s law, the majority of indexing terms occurs only in a small number of documents. Our algorithm replaces rare terms by computing a vector which expresses their semantics in terms of common terms. This process produces a projection matrix, which can be applied to a corpus matrix and individual document and query vectors. We give an accurate mathematical and algorithmic description of our algorithms and present an experimental evaluation on two benchmark corpora. These experiments indicate that our algorithm can deliver a substantial reduction in the number of features, from 47,236 to 392 features on the Reuters corpus with a clear improvement in the retrieval performance. We have evaluated our parallel implementation using the message passing interface with up to 32 processes on a Nehalem Xeon cluster, computing the projection matrix for the dimensionality reduction for over 800,000 documents in just under 100 seconds. This is a strongly abridged version of the article “Tobias Berka and Marian Vajteršič: Parallel Rare Term Vector Replacement: Fast and Effective Dimensionality Reduction for Text. Journal of Parallel and Distributed Computing, 2012 (to appear).”
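One plausible reading of the rare-term replacement idea above, sketched in Python: express each rare term's document profile as a least-squares combination of the common terms' profiles, and fold that into a projection matrix applicable to document and query vectors. The least-squares formulation, the document-frequency threshold, and all names below are our assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def rare_term_projection(X, rare_threshold=2):
    """X: terms x documents matrix. Returns (P, common) where P maps
    full-vocabulary document vectors to common-term vectors, with each
    rare term re-expressed in terms of the common terms."""
    df = np.count_nonzero(X, axis=1)            # document frequency per term
    common = np.flatnonzero(df >= rare_threshold)
    rare = np.flatnonzero(df < rare_threshold)
    C, R = X[common], X[rare]
    # Solve M C ~= R in the least-squares sense (M: rare x common),
    # i.e. each rare term's row becomes a combination of common rows.
    M = np.linalg.lstsq(C.T, R.T, rcond=None)[0].T
    P = np.zeros((len(common), X.shape[0]))
    P[np.arange(len(common)), common] = 1.0     # keep common terms as-is
    P[:, rare] = M.T                            # fold rare terms into them
    return P, common
```

Applied to a corpus matrix, `P @ X` has one row per common term, which matches the abstract's account of a single projection matrix usable on both the corpus and individual query vectors.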