Results 1–10 of 28
Accurate Singular Values of Bidiagonal Matrices
 SIAM J. Sci. Stat. Comput.
, 1990
Abstract

Cited by 108 (17 self)
Computing the singular values of a bidiagonal matrix is the final phase of the standard algorithm for the singular value decomposition of a general matrix. We present a new algorithm which computes all the singular values of a bidiagonal matrix to high relative accuracy independent of their magnitudes. In contrast, the standard algorithm for bidiagonal matrices may compute small singular values with no relative accuracy at all. Numerical experiments show that the new algorithm is comparable in speed to the standard algorithm, and frequently faster.
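As a quick illustration of the setting (a sketch, not the paper's algorithm; the matrix and its grading are invented), one can build a graded bidiagonal matrix and ask a standard dense SVD for its singular values:

```python
import numpy as np

# A graded upper-bidiagonal matrix whose entries span ten orders of
# magnitude; the grading here is arbitrary, for illustration only.
n = 6
d = np.logspace(0, -10, n)        # diagonal entries
e = np.logspace(-1, -10, n - 1)   # superdiagonal entries
B = np.diag(d) + np.diag(e, k=1)

# np.linalg.svd is backed by LAPACK; the abstract's point is that the
# relative accuracy of the tiny singular values depends on which
# bidiagonal algorithm the library uses underneath.
sigma = np.linalg.svd(B, compute_uv=False)
```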
Computing Accurate Eigensystems of Scaled Diagonally Dominant Matrices
, 1980
Abstract

Cited by 83 (14 self)
When computing eigenvalues of symmetric matrices and singular values of general matrices in finite precision arithmetic we in general only expect to compute them with an error bound proportional to the product of machine precision and the norm of the matrix. In particular, we do not expect to compute tiny eigenvalues and singular values to high relative accuracy. There are some important classes of matrices where we can do much better, including bidiagonal matrices, scaled diagonally dominant matrices, and scaled diagonally dominant definite pencils. These classes include many graded matrices, and all symmetric positive definite matrices which can be consistently ordered (and thus all symmetric positive definite tridiagonal matrices). In particular, the singular values and eigenvalues are determined to high relative precision independent of their magnitudes, and there are algorithms to compute them this accurately. The eigenvectors are also determined more accurately than for general matrices, and may be computed more accurately as well. This work extends results of Kahan and Demmel for bidiagonal and tridiagonal matrices.
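A minimal member of one of these classes can be written down directly (a sketch; the grading and coupling constants are invented for illustration): a strongly graded symmetric positive definite tridiagonal matrix, whose eigenvalues span eight orders of magnitude yet all come out positive.

```python
import numpy as np

# A graded symmetric positive definite tridiagonal matrix; after symmetric
# scaling by diag(d)^(-1/2) it is diagonally dominant, hence positive definite.
n = 5
d = np.logspace(0, -8, n)           # strongly graded diagonal
e = 0.1 * np.sqrt(d[:-1] * d[1:])   # weak off-diagonal coupling
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

lam = np.linalg.eigvalsh(T)         # eigenvalues in ascending order
```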
A Stable And Fast Algorithm For Updating The Singular Value Decomposition
, 1994
Abstract

Cited by 52 (2 self)
Let $A \in \mathbb{R}^{m \times n}$ be a matrix with known singular values and singular vectors, and let $A'$ be the matrix obtained by appending a row to $A$. We present stable and fast algorithms for computing the singular values and the singular vectors of $A'$ in $O\big((m + n)\min(m, n)\log_2^2 \epsilon\big)$ floating point operations, where $\epsilon$ is the machine precision. Previous algorithms can be unstable and compute the singular values and the singular vectors of $A'$ in $O\big((m + n)\min^2(m, n)\big)$ floating point operations. 1. Introduction. The singular value decomposition (SVD) of a matrix $A \in \mathbb{R}^{m \times n}$ is $A = U \Omega V^T$ (1.1), where $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ are orthonormal, and $\Omega \in \mathbb{R}^{m \times n}$ is zero except on the main diagonal, which has nonnegative entries in decreasing order. The columns of $U$ and $V$ are the left singular vectors and the right singular vectors of $A$, respectively; the diagonal entries of $\Omega$ are the singular values of $A$....
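The baseline the abstract improves on, recomputing the SVD from scratch after appending a row, is easy to demonstrate, along with the interlacing property that the updated singular values can only grow (a NumPy sketch with arbitrary data, not the paper's update algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
U, s, Vt = np.linalg.svd(A, full_matrices=True)

# Check the defining factorization A = U @ Omega @ V^T.
Omega = np.zeros_like(A)
np.fill_diagonal(Omega, s)
assert np.allclose(U @ Omega @ Vt, A)

# Appending a row and recomputing from scratch is the slow baseline.
row = rng.standard_normal((1, 3))
A1 = np.vstack([A, row])
s1 = np.linalg.svd(A1, compute_uv=False)
# A1^T A1 = A^T A + row^T row, so each singular value can only increase.
assert np.all(s1 + 1e-12 >= s)
```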
A singularly valuable decomposition: The SVD of a matrix
 College Math Journal
, 1996
Abstract

Cited by 21 (0 self)
Every teacher of linear algebra should be familiar with the matrix singular value decomposition (or SVD). It has interesting and attractive algebraic properties, and conveys important geometrical and theoretical insights about linear transformations. The close connection between the SVD and the well known theory of diagonalization for symmetric matrices makes the topic immediately accessible to linear algebra teachers, and indeed, a natural extension of what these teachers already know. At the same time, the SVD has fundamental importance in several different applications of linear algebra. Strang was aware of these facts when he introduced the SVD in his now classical text [22, page 142], observing "... it is not nearly as famous as it should be." Golub and Van Loan ascribe a central significance to the SVD in their definitive explication of numerical matrix methods [8, page xiv], stating "... perhaps the most recurring theme in the book is the practical and theoretical value of [the SVD]." Additional evidence of the significance of the SVD is its central role in a number of papers in recent years in Mathematics Magazine and The American Mathematical Monthly (for example [2, 3, 17, 23]). Although it is probably not feasible to include the SVD in the first linear algebra course, it definitely deserves a place in more advanced undergraduate courses, particularly those with a numerical or applied emphasis. My primary goals in this article are to bring the topic to the attention of a broad audience,
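The close connection to symmetric diagonalization that the abstract mentions is concrete: the right singular vectors of A are the eigenvectors of the symmetric matrix A^T A, and the singular values are the square roots of its eigenvalues (a NumPy sketch with random data; fine for illustration, though robust SVD codes avoid forming A^T A explicitly):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))

# Eigendecomposition of the symmetric matrix A^T A ...
w, V = np.linalg.eigh(A.T @ A)        # eigenvalues in ascending order
sigma_from_eig = np.sqrt(w[::-1])     # descending singular values

# ... matches the singular values from the SVD directly.
sigma = np.linalg.svd(A, compute_uv=False)
assert np.allclose(sigma, sigma_from_eig)
```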
A Parallel Divide and Conquer Algorithm for the Symmetric Eigenvalue Problem on Distributed Memory Architectures
, 1999
in A Practical Approach to Microarray Data Analysis
 Kluwer, chapter
, 2003
Abstract

Cited by 11 (0 self)
5. Singular value decomposition and principal component analysis
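The SVD/PCA relationship this chapter covers can be stated in a few lines (a generic sketch with random data, not the chapter's microarray examples): the principal component variances of a centered data matrix are the squared singular values divided by n - 1, i.e. the eigenvalues of the sample covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((20, 5))      # rows = samples, columns = variables
Xc = X - X.mean(axis=0)               # center each column

# PCA via SVD: principal axes are the rows of Vt, variances are s**2/(n-1).
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_svd = s**2 / (len(X) - 1)

# The same variances from the eigenvalues of the sample covariance matrix.
cov_eig = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]
assert np.allclose(var_svd, cov_eig)
```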
High Performance Bidiagonal Reduction using Tile Algorithms on Homogeneous Multicore Architectures
Abstract

Cited by 7 (3 self)
Abstract—This paper presents a new high-performance bidiagonal reduction (BRD) on homogeneous multicore architectures. This paper is an extension of the high-performance tridiagonal reduction implemented by the same authors (Luszczek et al., IPDPS 2011) to the BRD case. The BRD is the first step toward computing the singular value decomposition of a matrix, which is one of the most important algorithms in numerical linear algebra due to its broad impact in computational science. The high performance of the BRD described in this paper comes from the combination of four important features: (1) tile algorithms with tile data layout, which provide an efficient data representation in main memory, (2) a two-stage reduction approach, which allows most of the computation during the first stage (reduction to band form) to be cast into calls to Level 3 BLAS and reduces the memory traffic during the second
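The tiled two-stage scheme itself is beyond a short example, but the classical one-stage Householder bidiagonalization it accelerates can be sketched directly (illustrative NumPy, not the authors' implementation):

```python
import numpy as np

def bidiagonalize(A):
    """One-stage Householder bidiagonalization: alternately zero a column
    below the diagonal and a row right of the superdiagonal. Orthogonal
    transformations preserve the singular values."""
    B = A.astype(float).copy()
    m, n = B.shape
    for k in range(n):
        # Left reflector: zero B[k+1:, k].
        x = B[k:, k]
        s = np.linalg.norm(x)
        if s > 0:
            v = x.copy()
            v[0] += np.copysign(s, x[0])
            v /= np.linalg.norm(v)
            B[k:, :] -= 2.0 * np.outer(v, v @ B[k:, :])
        # Right reflector: zero B[k, k+2:].
        if k < n - 2:
            x = B[k, k + 1:]
            s = np.linalg.norm(x)
            if s > 0:
                v = x.copy()
                v[0] += np.copysign(s, x[0])
                v /= np.linalg.norm(v)
                B[:, k + 1:] -= 2.0 * np.outer(B[:, k + 1:] @ v, v)
    return B

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))
B = bidiagonalize(A)
```

The assertions below check that B is (numerically) upper bidiagonal and has the same singular values as A.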
An Efficient and Accurate Parallel Algorithm for the Singular Value Problem of Bidiagonal Matrices
 Numer. Math
Abstract

Cited by 4 (2 self)
In this paper we propose an algorithm based on Laguerre's iteration, a rank-two divide-and-conquer technique, and a hybrid strategy for computing singular values of bidiagonal matrices. The algorithm is fully parallel in nature and evaluates singular values to tiny relative error if necessary. It is competitive with the QR algorithm in serial mode in speed and advantageous in computing partial singular values. Error analysis and numerical results are presented. Subject Classifications: AMS (MOS): 65F15; CR: 5.14
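The underlying relation that such root-finding schemes exploit is easy to verify: for an upper-bidiagonal B, B^T B is symmetric tridiagonal, and the singular values of B are the square roots of its eigenvalues (a sketch with arbitrary entries, not the paper's split-merge algorithm):

```python
import numpy as np

# Arbitrary illustrative upper-bidiagonal matrix.
d = np.array([2.0, 1.0, 0.5, 0.25])   # diagonal
e = np.array([0.3, 0.2, 0.1])         # superdiagonal
B = np.diag(d) + np.diag(e, 1)

# B^T B is symmetric tridiagonal; its eigenvalues are the squared
# singular values of B.
T = B.T @ B
sigma = np.sqrt(np.linalg.eigvalsh(T))[::-1]   # descending
assert np.allclose(sigma, np.linalg.svd(B, compute_uv=False))
```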
Multiple Document Summarization Using Principal Component Analysis Incorporating Semantic Vector Space Model
 Computational Linguistics and Chinese Language Processing
, 2008
Abstract

Cited by 3 (0 self)
Text summarization is very effective in relevance assessment tasks. The Multiple Document Summarizer presents a novel approach to selecting sentences from documents according to several heuristic features. Summaries are generated by modeling the set of documents as a Semantic Vector Space Model (SVSM) and applying Principal Component Analysis (PCA) to extract topic features. A purely statistical VSM assumes terms to be independent of each other and may produce inconsistent results. The vector space is enhanced semantically by modifying the weight of the word vector governed by Appearance and Disappearance (Action class) words. The knowledge base for Action words is maintained by classifying the words as Appearance or Disappearance with the help of WordNet. The weights of the action words are modified in accordance with the Object list prepared by the collection of nouns corresponding to the action words. The summary thus generated provides more informative content, as the semantics of natural language have been taken into consideration.
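A toy version of PCA-based sentence scoring (purely illustrative; the matrix, weights, and scoring rule are invented and do not reproduce the paper's SVSM weighting): project sentence vectors onto the top principal axis of the centered term matrix and rank by magnitude.

```python
import numpy as np

# Rows = sentences, columns = term weights (toy numbers).
X = np.array([[2.0, 0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 3.0, 1.0],
              [0.0, 2.0, 0.0, 1.0]])
Xc = X - X.mean(axis=0)

# Top principal axis via SVD of the centered matrix.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = np.abs(Xc @ Vt[0])          # salience along the dominant topic
ranked = np.argsort(scores)[::-1]    # sentence indices, most salient first
```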