Results 1–10 of 20
TMG: A MATLAB Toolbox for Generating Term-Document Matrices from Text Collections
, 2005
Cited by 30 (2 self)
A wide range of computational kernels in data mining and information retrieval from text collections involve techniques from linear algebra. These kernels typically operate on data that is presented in the form of large sparse term-document matrices (tdm). We present TMG, a research and teaching toolbox for the generation of sparse tdm’s from text collections and for the incremental modification of these tdm’s by means of additions or deletions. The toolbox is written entirely in MATLAB, a popular problem-solving environment that is powerful in computational linear algebra, in order to streamline document preprocessing and prototyping of algorithms for information retrieval. Several design issues that concern the use of the MATLAB sparse infrastructure and data structures are addressed. We illustrate the use of the tool in numerical explorations of the effect of stemming and of different term-weighting policies on the performance of querying and clustering tasks.
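As a rough illustration of what such a toolbox automates (TMG itself is a MATLAB package; the sketch below uses Python/SciPy and a hypothetical toy collection), a sparse term-document matrix with a simple log-idf term weighting can be assembled as follows:

```python
from collections import Counter

import numpy as np
from scipy.sparse import coo_matrix

# Toy collection (hypothetical); rows of the tdm are terms, columns are documents.
docs = [
    "sparse matrix computations",
    "sparse term document matrix generation",
    "information retrieval from text collections",
]

vocab, rows, cols, vals = {}, [], [], []
for j, doc in enumerate(docs):
    for term, freq in Counter(doc.split()).items():
        rows.append(vocab.setdefault(term, len(vocab)))
        cols.append(j)
        vals.append(freq)

# Raw term-frequency tdm, stored sparse.
A = coo_matrix((vals, (rows, cols)), shape=(len(vocab), len(docs))).tocsc()

# One common term-weighting policy: w_ij = f_ij * log(n / df_i),
# where df_i is the number of documents containing term i.
n_docs = A.shape[1]
df = (A > 0).sum(axis=1).A1
W = A.multiply(np.log(n_docs / df)[:, None]).tocsc()
```

A full toolbox would add tokenization options, stopword removal, stemming, and incremental updates; the point here is only the sparse-assembly pattern.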
Augmented implicitly restarted Lanczos bidiagonalization methods
 SIAM J. Sci. Comput
Cited by 12 (6 self)
Abstract. New restarted Lanczos bidiagonalization methods for the computation of a few of the largest or smallest singular values of a large matrix are presented. Restarting is carried out by augmentation of Krylov subspaces that arise naturally in the standard Lanczos bidiagonalization method. The augmenting vectors are associated with certain Ritz or harmonic Ritz vectors. Computed examples show the new methods to be competitive with available schemes. Key words. singular value computation, partial singular value decomposition, iterative method, large-scale computation
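The Golub–Kahan Lanczos bidiagonalization process underlying these methods can be sketched as follows. This is a minimal NumPy version with full reorthogonalization, not the authors' augmented/restarted algorithm; it produces the factorization A V = U B with B upper bidiagonal, whose singular values approximate the extreme singular values of A:

```python
import numpy as np

def lanczos_bidiag(A, k, rng):
    """k steps of Golub-Kahan bidiagonalization: A @ V = U @ B,
    with B upper bidiagonal (alphas on the diagonal, betas above it)."""
    m, n = A.shape
    U = np.zeros((m, k))
    V = np.zeros((n, k + 1))
    alpha = np.zeros(k)
    beta = np.zeros(k)
    v0 = rng.standard_normal(n)
    V[:, 0] = v0 / np.linalg.norm(v0)
    u = np.zeros(m)
    b = 0.0
    for j in range(k):
        u = A @ V[:, j] - b * u
        alpha[j] = np.linalg.norm(u)
        u /= alpha[j]
        U[:, j] = u
        w = A.T @ u - alpha[j] * V[:, j]
        w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)   # full reorthogonalization
        beta[j] = b = np.linalg.norm(w)
        V[:, j + 1] = w / b
    B = np.diag(alpha) + np.diag(beta[:-1], 1)
    return U, V[:, :k], B

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 12))
U, V, B = lanczos_bidiag(A, 6, rng)
```

Restarting, as in the paper, would keep the search space small by compressing U, V, and B after k steps and augmenting with Ritz or harmonic Ritz vectors.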
Computing Smallest Singular Triplets with Implicitly Restarted Lanczos Bidiagonalization
 APPL. NUMER. MATH
, 2004
Cited by 12 (2 self)
A matrix-free algorithm, IRLANB, for the efficient computation of the smallest singular triplets of large and possibly sparse matrices is described. Key characteristics of the approach are its use of Lanczos bidiagonalization, implicit restarting, and harmonic Ritz values. The algorithm also uses a deflation strategy that can be applied directly on the Lanczos bidiagonalization. A refinement post-processing phase is applied to the converged singular vectors. The computational costs of the above techniques are kept small, as they make direct use of the bidiagonal form obtained in the course of the Lanczos factorization. Several numerical experiments with the method are presented that illustrate its effectiveness and indicate that it performs well compared to existing codes.
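For context, a simpler (non-matrix-free) baseline for the same task is shift-invert on the augmented matrix, whose eigenvalues are exactly ±σᵢ(A); this is not IRLANB, just a SciPy sketch on a test matrix with known singular values:

```python
import numpy as np
from scipy.sparse import bmat, diags
from scipy.sparse.linalg import eigsh

n = 50
A = diags(np.arange(1.0, n + 1))          # test matrix with singular values 1..50

# Eigenvalues of the augmented matrix [[0, A], [A^T, 0]] are +/- sigma_i(A),
# so shift-invert about 0 targets the smallest singular triplets.
Aug = bmat([[None, A], [A.T, None]], format="csc")
vals, vecs = eigsh(Aug, k=6, sigma=0, which="LM")
smallest = np.sort(np.abs(vals))[::2][:3]  # each sigma appears with both signs
```

Shift-invert requires a sparse factorization of the augmented matrix, which is exactly the cost that matrix-free methods such as IRLANB are designed to avoid.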
Restarted block Lanczos bidiagonalization methods, Numer. Algorithms
Cited by 7 (4 self)
Abstract. The problem of computing a few of the largest or smallest singular values and associated singular vectors of a large matrix arises in many applications. This paper describes restarted block Lanczos bidiagonalization methods based on augmentation of Ritz vectors or harmonic Ritz vectors by block Krylov subspaces. Key words. partial singular value decomposition, restarted iterative method, implicit shifts, augmentation. AMS subject classifications. 65F15, 15A18
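A much simpler block method in the same spirit, block subspace iteration for the largest singular triplets (not the restarted block Lanczos algorithm of the paper), illustrates why one iterates with a block of vectors rather than a single one:

```python
import numpy as np

rng = np.random.default_rng(0)
# Test matrix with known, well-separated singular values 1024, 512, ..., 2.
s_true = 2.0 ** np.arange(10, 0, -1.0)
P, _ = np.linalg.qr(rng.standard_normal((50, 10)))
Q, _ = np.linalg.qr(rng.standard_normal((10, 10)))
A = (P * s_true) @ Q.T

b = 3                                                # block size
V, _ = np.linalg.qr(rng.standard_normal((10, b)))    # random starting block
for _ in range(20):                                  # block subspace iteration
    U, _ = np.linalg.qr(A @ V)                       # left subspace update
    V, _ = np.linalg.qr(A.T @ U)                     # right subspace update
sigma = np.linalg.svd(U.T @ A @ V, compute_uv=False) # Rayleigh-Ritz values
```

A block converges to the whole leading singular subspace at once and copes better with clustered singular values; block Krylov methods accelerate this further by keeping the whole history of blocks.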
Model Order and Terminal Reduction Approaches via Matrix Decomposition and Low Rank Approximation
Cited by 3 (3 self)
Abstract We discuss methods for model order reduction (MOR) of linear systems with many input and output variables, arising in the modeling of linear (sub)circuits with a huge number of nodes and a large number of terminals, such as power grids. Our work is based on the SVDMOR and ESVDMOR approaches proposed in recent publications [1–5]. In particular, we discuss efficient numerical algorithms for their implementation. Only by using efficient tools from numerical linear algebra do these methods become applicable to truly large-scale problems.
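The core linear-algebra step behind SVDMOR-type methods is a low-rank approximation, via truncated SVD, of a matrix whose columns are strongly correlated (few independent terminal responses). A generic sketch with a synthetic stand-in matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic nearly-rank-4 matrix: 80 "terminal" columns driven by only
# 4 independent directions, plus a small perturbation.
B = rng.standard_normal((100, 4)) @ rng.standard_normal((4, 80))
B += 1e-6 * rng.standard_normal((100, 80))

U, s, Vt = np.linalg.svd(B, full_matrices=False)
r = int(np.sum(s > 1e-3 * s[0]))                     # numerical rank
B_r = (U[:, :r] * s[:r]) @ Vt[:r]                    # best rank-r approximation
err = np.linalg.norm(B - B_r, 2)                     # equals s[r] (Eckart-Young)
```

Replacing the full SVD by an iterative truncated SVD is what makes the step affordable when the matrix itself is large and sparse.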
On Stability, Passivity and Reciprocity Preservation of ESVDMOR
Cited by 2 (2 self)
Abstract The reduction of parasitic linear subcircuits is one of many issues in model order reduction (MOR) for VLSI design. This issue is well explored, but recently the incorporation of subcircuits from different modelling sources into the circuit model has led to new structural aspects: until now, the number of elements in the subcircuits was significantly larger than the number of connections to the whole circuit, the so-called pins or terminals. This assumption is no longer valid in all cases, so that the simulation of these circuits, or rather the reduction of the model, requires new methods. In [6, 15, 17], the extended singular value decomposition based model order reduction (ESVDMOR) algorithm is introduced as a way to handle this kind of circuit with a massive number of terminals. Unfortunately, the ESVDMOR approach has some drawbacks because it uses the SVD for matrix factorizations. In [5, 22] the truncated SVD (TSVD) is introduced as an alternative to the SVD within ESVDMOR. In this paper we show that ESVDMOR as well as the modified approach is stability, passivity, and reciprocity preserving under reasonable assumptions.
A Jacobi–Davidson type method for the product eigenvalue problem
Cited by 2 (0 self)
We propose a Jacobi–Davidson type technique to compute selected eigenpairs of the product eigenvalue problem A_m ⋯ A_1 x = λx, where the matrices may be large and sparse. To avoid difficulties caused by a high condition number of the product matrix, we split up the action of the product matrix and work with several search spaces. We generalize the Jacobi–Davidson correction equation, and the harmonic and refined extraction, for the product eigenvalue problem. Numerical experiments indicate that the method can be used to compute eigenvalues of product matrices with extremely high condition numbers. Key words: Product eigenvalue problem, product SVD (PSVD), subspace method, Jacobi–Davidson, correction equation, cyclic matrix, cyclic eigenvalue problem, harmonic extraction, refined extraction.
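The cyclic-matrix connection in the keywords can be checked directly for m = 2: the eigenvalues μ of the cyclic embedding C = [[0, A₂], [A₁, 0]] satisfy μ² = λ(A₂A₁), so the product's spectrum is recovered without ever forming the product. This is only the underlying identity, not the Jacobi–Davidson method itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A1 = rng.standard_normal((n, n))
A2 = rng.standard_normal((n, n))

# Cyclic embedding: C^2 = blkdiag(A2 @ A1, A1 @ A2), so each eigenvalue
# of A2 @ A1 appears twice among the squares of the eigenvalues of C.
C = np.block([[np.zeros((n, n)), A2], [A1, np.zeros((n, n))]])
mu = np.linalg.eigvals(C)                 # eigenvalues come in +/- pairs
lam = np.linalg.eigvals(A2 @ A1)          # reference: form the product

# distance from each mu^2 to the nearest eigenvalue of the product
dist = np.abs(mu[:, None] ** 2 - lam[None, :]).min(axis=1)
```

For ill-conditioned factors, forming A₂A₁ explicitly destroys accuracy, which is why methods work with the factors (or the cyclic matrix) directly.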
Pseudospectra Computation of Large Matrices
, 2004
Cited by 1 (0 self)
Transfer functions have been shown to provide monotonic approximations to the resolvent 2-norm of A, ‖R(z)‖₂ with R(z) = (A − zI)⁻¹, when associated with a sequence of nested spaces. This paper addresses the open question of the effectiveness of the transfer function scheme for the computation of the pseudospectrum of large matrices. It is shown that the scheme can be combined with certain Krylov-type linear solvers, such as restarted FOM, for the efficient solution of shifted linear systems of the form (A − z_k I)x = b for a large number of shifts z_k. Extensive numerical experiments illustrate the performance of the methods developed in this paper. Tools for the effective combination of the transfer function framework with path-following methods are developed. A hybrid method is proposed that combines transfer functions with iterative solvers and path following, and is shown to be a powerful and cost-effective scheme for computing pseudospectra of very large matrices.
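The quantity being approximated has a simple brute-force baseline: on a grid of shifts z, the ε-pseudospectrum is the region where σ_min(A − zI) ≤ ε, equivalently ‖(A − zI)⁻¹‖₂ ≥ 1/ε. A dense sketch on a tiny matrix (the paper's point is precisely to avoid this per-gridpoint O(n³) cost for large A):

```python
import numpy as np

# Small non-normal test matrix: the large off-diagonal entries make the
# pseudospectrum much larger than a neighborhood of the eigenvalues 1, 2, 3.
A = np.diag([1.0, 2.0, 3.0]) + np.diag([5.0, 5.0], 1)

xs = np.linspace(0.0, 4.0, 41)
ys = np.linspace(-2.0, 2.0, 41)
smin = np.empty((len(ys), len(xs)))
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        z = x + 1j * y
        # sigma_min(A - z I) = 1 / ||(A - z I)^{-1}||_2
        smin[i, j] = np.linalg.svd(A - z * np.eye(3), compute_uv=False)[-1]
# contour levels of smin at chosen eps give the pseudospectrum boundaries
```

Transfer-function and Krylov approaches replace the dense SVD at each grid point by projected approximations that are monotonic in the subspace dimension.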
A Jacobi–Davidson type method for the generalized singular value problem
, 2004
Cited by 1 (1 self)
Abstract. We discuss a new method for the iterative computation of some of the generalized singular values and vectors of a large sparse matrix. Our starting point is the augmented matrix formulation of the GSVD. The subspace expansion is performed by (approximately) solving a Jacobi–Davidson type correction equation, while we give several alternatives for the subspace extraction. Numerical experiments indicate the efficiency of the method. Key words. Generalized singular value decomposition (GSVD), partial GSVD, Jacobi–Davidson, subspace method, augmented matrix, correction equation, (inexact) accelerated Newton, refined extraction, harmonic extraction. AMS subject classifications. 65F15, 65F50, (65F30). 1. Introduction. The generalized singular value decomposition (GSVD) was introduced by Van Loan [15] and further developed by Paige and Saunders [9]. Let A ∈ ℝ^(m×n) and B ∈ ℝ^(p×n) be given. The generalized singular values of the pair (A, B) are [15, Def. 1] Σ(A, B) = {σ ≥ 0 : AᵀA − σ²BᵀB is singular}. The (diagonal form of the) GSVD of A and B is given by [15, Th. 2], [9, p. 399]
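The defining property can be verified numerically by the (less robust) normal-equations route: the σ² are the generalized eigenvalues of the pencil (AᵀA, BᵀB). Squaring the condition number this way is exactly what working with the augmented matrix avoids, but it makes a compact reference check; the sketch below uses small random matrices:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
B = rng.standard_normal((6, 5))

# sigma is a generalized singular value of (A, B) iff A^T A - sigma^2 B^T B
# is singular, i.e. sigma^2 is a generalized eigenvalue of (A^T A, B^T B).
lam = eigh(A.T @ A, B.T @ B, eigvals_only=True)
sigma = np.sqrt(lam)

# check: A^T A - sigma^2 B^T B is numerically singular for each sigma
resid = [np.linalg.svd(A.T @ A - s**2 * (B.T @ B), compute_uv=False)[-1]
         for s in sigma]
```

For B = I this reduces to the ordinary singular values of A, which is a useful sanity check when experimenting.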
APPROXIMATIONS FROM SUBSPACES FOR THE SINGULAR VALUE PROBLEM
The computation (or approximation) of some of the smallest or largest singular values of a large sparse matrix is a big challenge, with many important applications. In this talk we discuss approximations to singular triples that can be obtained from subspaces. One of the topics is the concept of harmonic singular triples, introduced in [1]. The word “harmonic” expresses the fact that we try to approximate the smallest singular value by approximating the largest singular value of the inverse of the matrix. We give a definition and some properties, and show how harmonic singular values can be used in subspace methods. Some applications are given. Key words: harmonic singular triple, bounds.
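The idea of reaching the smallest singular value through the largest one of the inverse can be seen in its plainest form as power iteration with A⁻¹ applied via an LU factorization. This is a toy version on a matrix with known singular values, not the subspace methods of the talk:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
s = np.array([5.0, 4.0, 3.0, 2.0, 1.0, 0.1])        # known singular values
P, _ = np.linalg.qr(rng.standard_normal((6, 6)))
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
A = (P * s) @ Q.T

# Power iteration on A^{-T} A^{-1}: its dominant eigenvalue is
# sigma_max(A^{-1})^2 = 1 / sigma_min(A)^2.
lu, piv = lu_factor(A)
v = rng.standard_normal(6)
for _ in range(50):
    v /= np.linalg.norm(v)
    w = lu_solve((lu, piv), v)                       # w = A^{-1} v
    v = lu_solve((lu, piv), w, trans=1)              # v = A^{-T} w
sigma_min_est = 1.0 / np.sqrt(np.linalg.norm(v))
```

Harmonic Ritz extraction achieves the same "invert the problem" effect inside a subspace method without ever factoring A explicitly.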