Results 1 - 10 of 24
A Parallelizable Eigensolver for Real Diagonalizable Matrices with Real Eigenvalues
, 1991
"... . In this paper, preliminary research results on a new algorithm for finding all the eigenvalues and eigenvectors of a real diagonalizable matrix with real eigenvalues are presented. The basic mathematical theory behind this approach is reviewed and is followed by a discussion of the numerical consi ..."
Abstract

Cited by 26 (6 self)
In this paper, preliminary research results on a new algorithm for finding all the eigenvalues and eigenvectors of a real diagonalizable matrix with real eigenvalues are presented. The basic mathematical theory behind this approach is reviewed, followed by a discussion of the numerical considerations of the actual implementation. The numerical algorithm has been tested on thousands of matrices on both a Cray-2 and an IBM RS/6000 Model 580 workstation. The results of these tests are presented. Finally, issues concerning the parallel implementation of the algorithm are discussed. The algorithm's heavy reliance on matrix-matrix multiplication, coupled with its divide-and-conquer structure, should make it highly parallelizable.

1. Introduction. Computation of all the eigenvalues and eigenvectors of a dense matrix is essential for solving problems in many fields. The ever-increasing computational power available from modern supercomputers offers the potenti...
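The divide-and-conquer, matmul-heavy flavor of eigensolver described in this abstract can be illustrated with spectral splitting via the matrix sign function. The sketch below is not the paper's exact algorithm: it uses the inverse-free Newton-Schulz iteration (built entirely from matrix-matrix products) and a symmetric test matrix with spectrum scaled into (-1, 1) for simplicity; all names and parameters are illustrative.

```python
import numpy as np

def sign_newton_schulz(A, iters=60):
    # Matrix sign function via the inverse-free Newton-Schulz iteration
    # X <- X (3I - X^2) / 2; requires the spectrum of A in (-1, 1) \ {0}.
    X = A.copy()
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (3 * I - X @ X) / 2
    return X

rng = np.random.default_rng(0)
n = 6
# Symmetric test matrix with known, well-separated eigenvalues.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
evals = np.array([-0.9, -0.5, -0.3, 0.3, 0.5, 0.9])
A = Q @ np.diag(evals) @ Q.T

S = sign_newton_schulz(A)
P = (np.eye(n) + S) / 2            # spectral projector onto eigenvalues > 0
U, _, _ = np.linalg.svd(P)         # orthonormal basis: range(P) first
k = int(round(np.trace(P)))        # dimension of the invariant subspace
U1, U2 = U[:, :k], U[:, k:]
A1 = U1.T @ A @ U1                 # two decoupled half-size subproblems
A2 = U2.T @ A @ U2
```

Each subproblem A1, A2 can then be split again recursively, which is the source of both the divide-and-conquer parallelism and the matrix-multiplication-dominated workload the abstract points to.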
Algorithms and Architectures for Channel Estimation in Wireless CDMA Communication Systems
, 1998
"... Wireless cellular communication is witnessing a rapid growth in markets, technology, and range of services. An attractive approach for economical, spectrally efficient, and high quality digital cellular and personal communication services is the use of code division multiple access (CDMA) technology ..."
Abstract

Cited by 13 (0 self)
Wireless cellular communication is witnessing rapid growth in markets, technology, and range of services. An attractive approach for economical, spectrally efficient, and high-quality digital cellular and personal communication services is the use of code division multiple access (CDMA) technology. The estimation of channel delays, along with the channel attenuations and phases of different users, constitutes the first stage of the detection process at the receiving base station in a CDMA communication system. This stage, called channel parameter estimation, forms the bottleneck for the detection of users' bitstreams, in terms of both accuracy and execution time. In this thesis, we develop new algorithms and architectures to solve the CDMA channel estimation problem. We have ...
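The delay portion of the channel-estimation stage described above is often introduced via a sliding correlator. The toy sketch below is illustrative, not from the thesis: a synthetic single-path channel, a random stand-in for a PN spreading code, and hypothetical delay/amplitude values; the delay is recovered by correlating the received signal against cyclic shifts of the code.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 127                                   # spreading-code length in chips
code = rng.choice([-1.0, 1.0], size=N)    # stand-in for a PN sequence

true_delay, amp = 17, 0.8                 # hypothetical single-path channel
rx = amp * np.roll(code, true_delay)      # delayed, attenuated copy
rx += 0.3 * rng.standard_normal(N)        # additive receiver noise

# Sliding correlator: correlate rx against every cyclic shift of the code;
# the peak locates the path delay, its height estimates the amplitude.
corr = np.array([np.dot(rx, np.roll(code, d)) for d in range(N)])
delay_hat = int(np.argmax(np.abs(corr)))
amp_hat = corr[delay_hat] / N
```

In a real receiver this correlation is the workload that the thesis's algorithms and architectures are designed to accelerate, since it must run per user before detection can start.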
The PRISM Project: Infrastructure and Algorithms for Parallel Eigensolvers
, 1994
"... The goal of the PRISM project is the development of infrastructure and algorithms for the parallel solution of eigenvalue problems. We are currently investigating a complete eigensolver based on the Invariant Subspace Decomposition Algorithm for dense symmetric matrices (SYISDA). After briefly revie ..."
Abstract

Cited by 12 (6 self)
The goal of the PRISM project is the development of infrastructure and algorithms for the parallel solution of eigenvalue problems. We are currently investigating a complete eigensolver based on the Invariant Subspace Decomposition Algorithm for dense symmetric matrices (SYISDA). After briefly reviewing SYISDA, we discuss the algorithmic highlights of a distributed-memory implementation of this approach. These include a fast matrix-matrix multiplication algorithm, a new approach to parallel band reduction and tridiagonalization, and a harness for coordinating the divide-and-conquer parallelism in the problem. We also present performance results of these kernels as well as the overall SYISDA implementation on the Intel Touchstone Delta prototype.

1. Introduction. Computation of eigenvalues and eigenvectors is an essential kernel in many applications, and several promising parallel algorithms have been investigated [29, 24, 3, 27, 21]. The work presented in this paper is part of the PRI...
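One of the kernels highlighted above is fast matrix-matrix multiplication. As a generic illustration (not PRISM's actual implementation), a single level of Strassen's scheme trades the usual 8 half-size multiplications for 7 at the cost of extra additions:

```python
import numpy as np

def strassen_step(A, B):
    """One level of Strassen's scheme: 7 half-size products instead of 8.
    Illustration only; assumes even dimensions (pad or recurse in practice)."""
    n = A.shape[0] // 2
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)
    C = np.empty_like(A)
    C[:n, :n] = M1 + M4 - M5 + M7
    C[:n, n:] = M3 + M5
    C[n:, :n] = M2 + M4
    C[n:, n:] = M1 - M2 + M3 + M6
    return C
```

In a distributed-memory setting the seven products are independent, which is what makes such schemes natural companions to a divide-and-conquer harness like the one the abstract describes.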
A Parallel Implementation of the Invariant Subspace Decomposition Algorithm for Dense Symmetric Matrices
, 1993
"... . We give an overview of the Invariant Subspace Decomposition Algorithm for dense symmetric matrices (SYISDA) by first describing the algorithm, followed by a discussion of a parallel implementation of SYISDA on the Intel Delta. Our implementation utilizes an optimized parallel matrix multiplication ..."
Abstract

Cited by 12 (2 self)
We give an overview of the Invariant Subspace Decomposition Algorithm for dense symmetric matrices (SYISDA) by first describing the algorithm, followed by a discussion of a parallel implementation of SYISDA on the Intel Delta. Our implementation utilizes an optimized parallel matrix multiplication implementation we have developed. Load balancing in the costly early stages of the algorithm is accomplished without redistribution of data between stages through the use of the block scattered decomposition. Computation of the invariant subspaces at each stage is done using a new tridiagonalization scheme due to Bischof and Sun.

1. Introduction. Computation of all the eigenvalues and eigenvectors of a dense symmetric matrix is an essential kernel in many applications. The ever-increasing computational power available from parallel computers offers the potential for solving much larger problems than could have been contemplated previously. Hardware scalability of parallel machines is freque...
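The block scattered (2D block-cyclic) decomposition mentioned above assigns global block (i, j) to process (i mod P_r, j mod P_c) on a P_r x P_c process grid. A minimal sketch of the ownership rule (an illustrative helper, not the paper's code):

```python
from collections import Counter

def owner(bi, bj, pr, pc):
    """Block-scattered (block-cyclic) layout on a pr x pc process grid:
    global block (bi, bj) lives on process (bi mod pr, bj mod pc)."""
    return (bi % pr, bj % pc)

# Every process owns the same number of blocks, and any contiguous
# submatrix (the shrinking "active" part of a staged computation)
# stays spread across the whole grid -- no redistribution needed.
counts = Counter(owner(i, j, 2, 3) for i in range(6) for j in range(6))
```

This is why the abstract can claim load balance across stages without moving data: as the problem shrinks, the surviving blocks remain cyclically scattered over all processes.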
Trading off Parallelism and Numerical Stability
, 1992
"... The fastest parallel algorithm for a problem may be significantly less stable numerically than the fastest serial algorithm. We illustrate this phenomenon by a series of examples drawn from numerical linear algebra. We also show how some of these instabilities may be mitigated by better floating poi ..."
Abstract

Cited by 12 (5 self)
The fastest parallel algorithm for a problem may be significantly less stable numerically than the fastest serial algorithm. We illustrate this phenomenon by a series of examples drawn from numerical linear algebra. We also show how some of these instabilities may be mitigated by better floating point arithmetic.
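A generic instance of this trade-off, offered in the spirit of the paper rather than as one of its examples: solving a triangular system by forward substitution is inherently sequential but provably backward stable, while the more parallel-friendly route of forming the explicit inverse and applying one matrix-vector product can leave a much larger residual on ill-conditioned problems. All matrix and parameter choices below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 25
# Random unit lower-triangular system; such matrices are typically
# quite ill conditioned, which is what exposes the instability.
L = np.tril(rng.standard_normal((n, n)), k=-1) + np.eye(n)
b = rng.standard_normal(n)

# Stable but inherently sequential: forward substitution.
x_sub = np.zeros(n)
for i in range(n):
    x_sub[i] = (b[i] - L[i, :i] @ x_sub[:i]) / L[i, i]

# More parallel-friendly: explicit inverse, then one matrix-vector product.
x_inv = np.linalg.inv(L) @ b

def backward_error(x):
    # Normwise relative backward error of a candidate solution.
    return np.linalg.norm(b - L @ x) / (np.linalg.norm(L) * np.linalg.norm(x))

err_sub, err_inv = backward_error(x_sub), backward_error(x_inv)
```

Substitution's backward error is bounded by roughly n times unit roundoff independent of conditioning; the inverse-based route has no such guarantee, which is the shape of the parallelism-versus-stability tension the paper surveys.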
Low-complexity principal component analysis for hyperspectral image compression
 Int. J. High Performance Comput. Appl
, 2008
"... Abstractâ€”Principal component analysis (PCA) is an effective tool for spectral decorrelation of hyperspectral imagery, and PCAbased spectral transforms have been employed successfully in conjunction with JPEG2000 for hyperspectralimage compression. However, the computational cost of determining the ..."
Abstract

Cited by 9 (3 self)
Principal component analysis (PCA) is an effective tool for spectral decorrelation of hyperspectral imagery, and PCA-based spectral transforms have been employed successfully in conjunction with JPEG2000 for hyperspectral-image compression. However, the computational cost of determining the data-dependent PCA transform is high due to its traditional eigendecomposition implementation, which requires calculation of a covariance matrix across the data. Several strategies for reducing the computational burden of PCA are explored, including both spatial and spectral subsampling in the covariance calculation, as well as an iterative algorithm that circumvents determination of the covariance matrix entirely. Experimental results investigate the impact of such low-complexity PCA on JPEG2000 compression of hyperspectral images, focusing on rate-distortion performance as well as data-analysis performance on an anomaly-detection task. Index Terms: principal component analysis, hyperspectral image compression, JPEG2000, spectral decorrelation, anomaly detection.
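The covariance-free iterative idea mentioned above can be sketched with power iteration that only ever applies the data matrix and its transpose, so the p x p covariance matrix is never formed. The sketch below is a minimal illustration under synthetic data, not the paper's algorithm; the helper name and all parameters are assumptions.

```python
import numpy as np

def leading_pc(X, iters=200, seed=0):
    """First principal component by power iteration, applying X and X.T
    instead of forming the covariance: O(n p) work per step rather
    than an O(n p^2) covariance accumulation up front."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)               # center the spectra
    v = rng.standard_normal(X.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = Xc.T @ (Xc @ v)               # implicit covariance-vector product
        v = w / np.linalg.norm(w)
    return v

# Tiny synthetic "image": 500 pixels x 20 bands with one dominant direction.
rng = np.random.default_rng(3)
d = rng.standard_normal(20)
d /= np.linalg.norm(d)
X = np.outer(5.0 * rng.standard_normal(500), d) + 0.1 * rng.standard_normal((500, 20))
v = leading_pc(X)
```

Subsequent components can be obtained the same way after deflating the recovered direction, which keeps the whole transform covariance-free.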
A parallel algorithm for the eigenvalues and eigenvectors of a general complex matrix
 Num. Math
, 1991
"... A new parallel Jacobilike algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this parallel typically display only linear convergence. Sequential 'normreducing ' algorithms also exit and they display quadratic convergence in most ca ..."
Abstract

Cited by 7 (0 self)
A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem display only linear convergence. Sequential 'norm-reducing' algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the 'norm-reducing' algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm, and certain situations where convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures. In particular, the algorithm can be ...
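The simplest member of this algorithm family is cyclic Jacobi for a real symmetric matrix, where each plane rotation zeroes one off-diagonal entry; Eberlein's norm-reducing scheme extends the same idea to nonsymmetric and complex matrices. The sketch below shows only the symmetric special case, as a generic illustration rather than the paper's method:

```python
import numpy as np

def jacobi_eigvals(A, sweeps=12):
    """Eigenvalues of a real symmetric matrix by cyclic Jacobi sweeps.
    Each (p, q) plane rotation zeroes the A[p, q] entry; in the
    parallel setting, rotations on disjoint index pairs commute and
    can be applied simultaneously."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-14:
                    continue
                # Standard inner rotation (|theta| <= pi/4).
                tau = (A[q, q] - A[p, p]) / (2 * A[p, q])
                t = np.sign(tau) / (abs(tau) + np.hypot(1.0, tau)) if tau else 1.0
                c = 1.0 / np.hypot(1.0, t)
                s = t * c
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J
    return np.sort(np.diag(A))
```

The parallelism comes from scheduling n/2 disjoint (p, q) pairs per step, which is the structure the paper carries over to the harder norm-reducing setting.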
On Parallel Implementation of the One-sided Jacobi Algorithm for Singular Value Decompositions
"... 1 ..."
Solving the SVD updating problem for subspace tracking on a fixed sized linear array of processors
, 1997
"... This paper proposes a parallel scheme for SVD updating that can be implemented on a #xed size array of o#theshelf processors ..."
Abstract

Cited by 6 (3 self)
This paper proposes a parallel scheme for SVD updating that can be implemented on a fixed-size array of off-the-shelf processors.
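The core of SVD updating, before any parallel mapping, is that appending a row to a matrix with a known thin SVD only requires re-decomposing a small core matrix. The sketch below shows that serial kernel under generic assumptions (m >= n so V is square); the function name and shapes are illustrative, not from the paper:

```python
import numpy as np

def svd_append_row(U, s, Vt, a):
    """Update a thin SVD A = U @ diag(s) @ Vt after appending row a to A.
    Writing a = (Vt @ a)^T @ Vt (valid since Vt is square orthogonal),
    the update reduces to the SVD of a small (n+1) x n core matrix --
    the serial kernel behind array-based SVD subspace trackers."""
    n = Vt.shape[0]
    K = np.vstack([np.diag(s), (Vt @ a)[None, :]])    # (n+1) x n core
    Uk, s_new, Vkt = np.linalg.svd(K, full_matrices=False)
    U_new = np.block([[U, np.zeros((U.shape[0], 1))],
                      [np.zeros((1, U.shape[1])), np.ones((1, 1))]]) @ Uk
    return U_new, s_new, Vkt @ Vt
```

In subspace tracking the small decomposition is further approximated by Jacobi-type rotations, which is what maps onto a fixed-size linear processor array.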