Results 1–10 of 33
TMG: A MATLAB Toolbox for Generating Term-Document Matrices from Text Collections
2005
Abstract

Cited by 40 (2 self)
A wide range of computational kernels in data mining and information retrieval from text collections involve techniques from linear algebra. These kernels typically operate on data that is presented in the form of large sparse term-document matrices (tdm). We present TMG, a research and teaching toolbox for the generation of sparse tdm’s from text collections and for the incremental modification of these tdm’s by means of additions or deletions. The toolbox is written entirely in MATLAB, a popular problem-solving environment that is powerful in computational linear algebra, in order to streamline document preprocessing and prototyping of algorithms for information retrieval. Several design issues that concern the use of MATLAB sparse infrastructure and data structures are addressed. We illustrate the use of the tool in numerical explorations of the effect of stemming and different term-weighting policies on the performance of querying and clustering tasks.
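TMG itself is a MATLAB toolbox; as an illustration only, the same core step — assembling a sparse term-document matrix and applying a term-weighting scheme — can be sketched in Python/SciPy. The tokenizer, toy documents, and tf-idf weighting below are generic choices, not TMG's:

```python
# Sketch (not TMG itself): build a sparse term-document matrix and
# apply tf-idf term weighting with SciPy's sparse infrastructure.
import numpy as np
from scipy.sparse import coo_matrix

docs = [
    "sparse matrix algorithms",
    "text mining with sparse matrices",
    "information retrieval from text",
]

# Tokenize and assign each distinct term a row index.
tokens = [d.split() for d in docs]
vocab = {t: i for i, t in enumerate(sorted({t for doc in tokens for t in doc}))}

rows, cols, vals = [], [], []
for j, doc in enumerate(tokens):
    for t in doc:
        rows.append(vocab[t]); cols.append(j); vals.append(1.0)

# Duplicate (row, col) entries are summed, giving raw term frequencies.
tdm = coo_matrix((vals, (rows, cols)), shape=(len(vocab), len(docs))).tocsr()

# tf-idf weighting: scale each term row by log(n_docs / document frequency).
df = np.asarray((tdm > 0).sum(axis=1)).ravel()
idf = np.log(len(docs) / df)
tdm_weighted = tdm.multiply(idf[:, None]).tocsr()
```

Incremental additions or deletions of documents, which TMG supports, amount to appending or dropping columns of the sparse matrix.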
Augmented implicitly restarted Lanczos bidiagonalization methods
 SIAM J. Sci. Comput
Abstract

Cited by 30 (9 self)
Abstract. New restarted Lanczos bidiagonalization methods for the computation of a few of the largest or smallest singular values of a large matrix are presented. Restarting is carried out by augmentation of Krylov subspaces that arise naturally in the standard Lanczos bidiagonalization method. The augmenting vectors are associated with certain Ritz or harmonic Ritz vectors. Computed examples show the new methods to be competitive with available schemes.
Key words. singular value computation, partial singular value decomposition, iterative method, large-scale computation
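The standard Golub–Kahan–Lanczos bidiagonalization that these methods restart can be sketched as follows. This is a minimal version with full reorthogonalization; the restarting and augmentation by Ritz or harmonic Ritz vectors described in the abstract are not included:

```python
# Minimal Golub-Kahan-Lanczos bidiagonalization (full reorthogonalization).
# After k steps, A @ V = U @ B with B a k-by-k upper bidiagonal matrix whose
# singular values approximate extremal singular values of A.
import numpy as np

def lanczos_bidiag(A, v1, k):
    m, n = A.shape
    U = np.zeros((m, k)); V = np.zeros((n, k))
    alphas = np.zeros(k); betas = np.zeros(k - 1)
    V[:, 0] = v1 / np.linalg.norm(v1)
    for j in range(k):
        u = A @ V[:, j]
        if j > 0:
            u -= betas[j - 1] * U[:, j - 1]
        u -= U[:, :j] @ (U[:, :j].T @ u)            # reorthogonalize
        alphas[j] = np.linalg.norm(u)
        U[:, j] = u / alphas[j]
        if j < k - 1:
            v = A.T @ U[:, j] - alphas[j] * V[:, j]
            v -= V[:, :j + 1] @ (V[:, :j + 1].T @ v)  # reorthogonalize
            betas[j] = np.linalg.norm(v)
            V[:, j + 1] = v / betas[j]
    B = np.diag(alphas) + np.diag(betas, 1)
    return U, B, V

# Synthetic test matrix with a well-separated largest singular value,
# so the largest Ritz value converges in a few steps.
rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.standard_normal((100, 60)))
Q2, _ = np.linalg.qr(rng.standard_normal((60, 60)))
s = np.linspace(1.0, 5.0, 60); s[-1] = 10.0
A = Q1 @ np.diag(s) @ Q2.T

U, B, V = lanczos_bidiag(A, rng.standard_normal(60), 20)
sigma_max_est = np.linalg.svd(B, compute_uv=False)[0]
```

The largest singular values of the small bidiagonal matrix B converge first; the smallest converge much more slowly, which is one motivation for the augmented restarting the paper develops.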
Computing Smallest Singular Triplets with Implicitly Restarted Lanczos Bidiagonalization
APPL. NUMER. MATH, 2004
Abstract

Cited by 24 (2 self)
A matrix-free algorithm, IRLANB, for the efficient computation of the smallest singular triplets of large and possibly sparse matrices is described. Key characteristics of the approach are its use of Lanczos bidiagonalization, implicit restarting, and harmonic Ritz values. The algorithm also uses a deflation strategy that can be applied directly to Lanczos bidiagonalization. A refinement postprocessing phase is applied to the converged singular vectors. The computational costs of the above techniques are kept small as they make direct use of the bidiagonal form obtained in the course of the Lanczos factorization. Several numerical experiments with the method are presented that illustrate its effectiveness and indicate that it performs well compared to existing codes.
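IRLANB itself is not reproduced here. As a point of reference, a common baseline for computing the smallest singular triplets — shift-invert Lanczos on the normal equations via SciPy's `eigsh` — can be sketched as follows (the matrix and all parameters are illustrative, and this baseline requires a sparse factorization, which the matrix-free IRLANB avoids):

```python
# Baseline (not IRLANB): smallest singular values of a sparse matrix via
# shift-invert Lanczos on the normal equations A^T A, using SciPy's eigsh.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Illustrative sparse matrix, shifted by I to keep it comfortably nonsingular.
A = sp.random(200, 200, density=0.05, random_state=1, format="csc")
A = A + sp.identity(200, format="csc")

AtA = (A.T @ A).tocsc()
# Eigenvalues of A^T A nearest 0 are the squared smallest singular values;
# sigma=0 puts ARPACK in shift-invert mode (a sparse LU of AtA is computed).
vals, vecs = eigsh(AtA, k=3, sigma=0, which="LM")
sigmas = np.sqrt(np.sort(vals))   # the three smallest singular values
# Columns of vecs are right singular vectors; u = A @ v / sigma gives the left.
```

Squaring the matrix worsens the conditioning of the small end of the spectrum, which is exactly why methods like IRLANB work with the bidiagonal form and harmonic Ritz values instead.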
Restarted block Lanczos bidiagonalization methods, Numer. Algorithms
Abstract

Cited by 19 (6 self)
Abstract. The problem of computing a few of the largest or smallest singular values and associated singular vectors of a large matrix arises in many applications. This paper describes restarted block Lanczos bidiagonalization methods based on augmentation of Ritz vectors or harmonic Ritz vectors by block Krylov subspaces.
Key words. partial singular value decomposition, restarted iterative method, implicit shifts, augmentation.
AMS subject classifications. 65F15, 15A18
Parallel Algorithms for the Singular Value Decomposition. In: Handbook on Parallel Computing and Statistics, Volume 184 of Statistics: A Series of Textbooks and Monographs
2006
A Jacobi–Davidson type method for a right definite two-parameter problem
SIAM J. Matrix Anal. Appl., 2001
Abstract

Cited by 9 (4 self)
We present a new numerical iterative method for computing selected eigenpairs of a right definite two-parameter eigenvalue problem. The method works even without good initial approximations and is able to tackle large problems that are too expensive for existing methods. The new method is similar to the Jacobi–Davidson method for the eigenvalue problem. In each step, we first compute Ritz pairs of a small projected right definite two-parameter eigenvalue problem and then expand the search spaces using approximate solutions of appropriate correction equations. We present two alternatives for the correction equations, introduce a selection technique that makes it possible to compute more than one eigenpair, and give some numerical results.
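The Jacobi–Davidson method itself is beyond a short sketch, but the small projected two-parameter problems solved in each step have the form (A1 − λB1 − μC1)x = 0, (A2 − λB2 − μC2)y = 0, and for small dense matrices they can be solved directly via the associated operator determinants (Atkinson's Δ matrices). A minimal illustration with hypothetical 2×2 diagonal matrices:

```python
# Direct solution of a small two-parameter eigenvalue problem
#   (A1 - lam*B1 - mu*C1) x = 0,  (A2 - lam*B2 - mu*C2) y = 0
# via the associated operator determinants (Atkinson's Delta matrices).
import numpy as np
from scipy.linalg import eig

# Hypothetical small example (diagonal, so eigenvalues are easy to check).
A1 = np.diag([3.0, 4.0]); B1 = np.eye(2);          C1 = np.diag([1.0, 2.0])
A2 = np.diag([5.0, 6.0]); B2 = np.diag([2.0, 3.0]); C2 = np.eye(2)

D0 = np.kron(B1, C2) - np.kron(C1, B2)
D1 = np.kron(A1, C2) - np.kron(C1, A2)
D2 = np.kron(B1, A2) - np.kron(A1, B2)

# With z = kron(x, y): D1 z = lam * D0 z and D2 z = mu * D0 z, so the
# lam- and mu-spectra come from two generalized eigenvalue problems.
lams = np.sort(eig(D1, D0, right=False).real)
mus = np.sort(eig(D2, D0, right=False).real)
```

Note that sorting the two spectra independently loses the pairing of (λ, μ); in practice each eigenvector z of (D1, D0) yields its own μ via the Rayleigh quotient with (D2, D0). The Kronecker products make this direct approach infeasible for large problems, which is what motivates the Jacobi–Davidson approach of the paper.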
Model Order and Terminal Reduction Approaches via Matrix Decomposition and Low Rank Approximation
Abstract

Cited by 4 (3 self)
Abstract We discuss methods for model order reduction (MOR) of linear systems with many input and output variables, arising in the modeling of linear (sub)circuits with a huge number of nodes and a large number of terminals, like power grids. Our work is based on the approaches SVDMOR and ESVDMOR proposed in recent publications [1–5]. In particular, we discuss efficient numerical algorithms for their implementation. Only by using efficient tools from numerical linear algebra do these methods become applicable to truly large-scale problems.
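The low-rank step at the heart of SVDMOR/ESVDMOR can be illustrated with a truncated SVD of a matrix with decaying singular values. The matrix below is synthetic (standing in for a terminal-response or moment matrix with few effective inputs/outputs); by the Eckart–Young theorem the best rank-r approximation error in the spectral norm equals σ_{r+1}:

```python
# Truncated SVD (TSVD) low-rank approximation, the core step reused by
# SVDMOR/ESVDMOR to compress matrices with many terminals.
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 80, 60, 5

# Synthetic matrix with geometrically decaying singular values 2^0, 2^-1, ...
Q1, _ = np.linalg.qr(rng.standard_normal((m, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n, dtype=float)
M = Q1 @ np.diag(s) @ Q2.T

U, sv, Vt = np.linalg.svd(M, full_matrices=False)
M_r = U[:, :r] @ np.diag(sv[:r]) @ Vt[:r]   # best rank-r approximation
err = np.linalg.norm(M - M_r, 2)            # equals sv[r] (Eckart-Young)
```

In the MOR setting the truncated factors, not the full SVD, are what must be computed efficiently for large sparse problems, which is the subject of the paper.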
On Stability, Passivity and Reciprocity Preservation of ESVDMOR
Abstract

Cited by 2 (2 self)
Abstract The reduction of parasitic linear subcircuits is one of many issues in model order reduction (MOR) for VLSI design. This issue is well explored, but recently the incorporation of subcircuits from different modelling sources into the circuit model has led to new structural aspects: until now, the number of elements in the subcircuits has been significantly larger than the number of connections to the whole circuit, the so-called pins or terminals. This assumption is no longer valid in all cases, so the simulation of these circuits, or rather the reduction of the model, requires new methods. In [6, 15, 17], the extended singular value decomposition based model order reduction (ESVDMOR) algorithm is introduced as a way to handle this kind of circuit with a massive number of terminals. Unfortunately, the ESVDMOR approach has some drawbacks because it uses the SVD for matrix factorizations. In [5, 22] the truncated SVD (TSVD) is introduced as an alternative to the SVD within ESVDMOR. In this paper we show that ESVDMOR as well as the modified approach is stability, passivity, and reciprocity preserving under reasonable assumptions.
On Convergence of the Inexact Rayleigh Quotient Iteration with the Lanczos Method Used for Solving Linear Systems
, 906
Abstract

Cited by 2 (2 self)
For the Hermitian inexact Rayleigh quotient iteration (RQI), we present new general convergence results, independent of iterative solvers for inner linear systems. We prove that the method converges quadratically under a new condition, called the uniform positiveness condition. This condition is much weaker than the one commonly used for quadratic convergence, which, at outer iteration k, requires the relative residual norm ξk (inner tolerance or accuracy) of the inner linear system to be considerably smaller than one; the new condition may even allow ξk ≥ 1. Our focus is on the inexact RQI with the Lanczos method used for solving the linear systems. We derive some attractive properties of the residuals obtained by Lanczos. Based on these properties and the new general convergence results, we establish a number of insightful convergence results that relate the accuracy of outer iterations to the inner tolerance. It appears that the inexact RQI with Lanczos converges quadratically provided that ξk ≤ ξ with ξ a constant that can be considerably bigger than one, that is, even when the inner linear systems are solved with essentially no accuracy in the usual sense. The results are fundamentally different from the existing quadratic convergence results and have a strong impact on effective implementations of the method. Based on the new theory, we design practical criteria to control the inner tolerance so as to achieve quadratic convergence and implement the method much more effectively than before, saving much computational cost. Numerical experiments support our theory and show its practical value.