Results 1–10 of 26
Indexing by latent semantic analysis
 Journal of the American Society for Information Science, 1990
Abstract

Cited by 2703 (32 self)
A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term-by-document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100-item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with suprathreshold cosine values are returned. Initial tests find this completely automatic method for retrieval to be promising.
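The retrieval scheme the abstract describes — truncated SVD of a term-by-document matrix, queries folded in as pseudo-documents, cosine ranking — can be sketched in a few lines of NumPy. The toy 4×4 matrix and k = 2 below are illustrative assumptions; the paper retains ca. 100 factors of a large sparse matrix.

```python
import numpy as np

# Hypothetical 4-term x 4-document matrix; the paper's matrices are large,
# sparse, and reduced to ca. 100 factors rather than the k = 2 used here.
A = np.array([
    [1., 0., 1., 0.],   # term "matrix"
    [1., 1., 0., 0.],   # term "factor"
    [0., 1., 0., 1.],   # term "query"
    [0., 0., 1., 1.],   # term "index"
])

k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :].T       # rank-k factors

docs = Vk * sk                                  # document vectors of factor weights
q = np.array([1., 1., 0., 0.])                  # query containing "matrix", "factor"
q_hat = (q @ Uk) / sk                           # fold query in as a pseudo-document

# Rank documents by cosine similarity against the pseudo-document vector.
cos = docs @ q_hat / (np.linalg.norm(docs, axis=1) * np.linalg.norm(q_hat))
ranking = np.argsort(-cos)                      # best-matching documents first
```

In the paper's setting one would return only documents whose cosine exceeds a threshold rather than a full ranking.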
A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge
 Psychological Review, 1997
Abstract

Cited by 1093 (9 self)
How do people know as much as they do with as little information as they get? The problem takes many forms; learning vocabulary from text is an especially dramatic and convenient case for research. A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LSA), is presented and used to successfully simulate such learning and several other psycholinguistic phenomena. By inducing global knowledge indirectly from local co-occurrence data in a large body of representative text, LSA acquired knowledge about the full vocabulary of English at a rate comparable to that of schoolchildren. LSA uses no prior linguistic or perceptual similarity knowledge; it is based solely on a general mathematical learning method that achieves powerful inductive effects by extracting the right number of dimensions (e.g., 300) to represent objects and contexts. Relations to other theories, phenomena, and problems are sketched.

Prologue. "How much do we know at any time? Much more, or so I believe, than we know we know!" —Agatha Christie, The Moving Finger

A typical American seventh grader knows the meaning of
SVDPACKC (Version 1.0) User's Guide
 1993
Abstract

Cited by 63 (4 self)
SVDPACKC comprises four numerical (iterative) methods for computing the singular value decomposition (SVD) of large sparse matrices using ANSI C. This software package implements Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left and right singular vectors) for large sparse matrices. The package has been ported to a variety of machines ranging from supercomputers to workstations: CRAY Y-MP, IBM RS/6000-550, DEC 5000/100, HP 9000/750, SPARCstation 2, and Macintosh II/fx. This document (i) explains each algorithm in some detail, (ii) explains the input parameters for each program, (iii) explains how to compile/execute each program, and (iv) illustrates the performance of each method when we compute lower-rank approximations to sparse term-document matrices from information retrieval applications. A user-friendly software interface to the package for UNIX-based systems and the Macintosh II/fx is als...
A Jacobi–Davidson type SVD method
 SIAM J. Sci. Comput., 2001
Abstract

Cited by 24 (7 self)
Abstract. We discuss a new method for the iterative computation of a portion of the singular values and vectors of a large sparse matrix. Similar to the Jacobi–Davidson method for the eigenvalue problem, we compute in each step a correction by (approximately) solving a correction equation. We give a few variants of this Jacobi–Davidson SVD (JDSVD) method with their theoretical properties. It is shown that the JDSVD can be seen as an accelerated (inexact) Newton scheme. We experimentally compare the method with some other iterative SVD methods.

Key words. Jacobi–Davidson, singular value decomposition (SVD), singular values, singular vectors, norm, augmented matrix, correction equation, (inexact) accelerated Newton, improving singular values

AMS subject classifications. 65F15 (65F35)

PII. S1064827500372973
Low Rank Matrix Approximation Using The Lanczos Bidiagonalization Process With Applications
 SIAM J. Sci. Comput., 2000
Abstract

Cited by 23 (1 self)
Low-rank approximation of large and/or sparse matrices is important in many applications. We show that good low-rank matrix approximations can be obtained directly from the Lanczos bidiagonalization process without computing the singular value decomposition. We also demonstrate that a so-called one-sided reorthogonalization process can be used to maintain an adequate level of orthogonality among the Lanczos vectors and produce accurate low-rank approximations. This technique reduces the computational cost of the Lanczos bidiagonalization process. We illustrate the efficiency and applicability of our algorithm using numerical examples from several application areas.
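As a rough illustration of the process this abstract builds on, here is a bare Golub–Kahan (Lanczos) bidiagonalization sketch; it deliberately omits the one-sided reorthogonalization the paper proposes, and the function name, random start vector, and dimensions are assumptions, not the authors' implementation. After k steps A·V_k = U_k·B_k in exact arithmetic, and U_k B_k V_kᵀ is the rank-k approximation obtained without an SVD.

```python
import numpy as np

def lanczos_bidiag(A, k, seed=0):
    """k steps of Golub-Kahan bidiagonalization from a random right vector.
    Returns U (m x k), upper-bidiagonal B (k x k), V (n x k) with
    A @ V = U @ B in exact arithmetic.  No reorthogonalization is done here;
    the paper adds a one-sided reorthogonalization to keep the Lanczos
    vectors adequately orthogonal at lower cost than full reorthogonalization."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    U = np.zeros((m, k))
    V = np.zeros((n, k))
    alphas = np.zeros(k)
    betas = np.zeros(max(k - 1, 0))
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    beta, u_prev = 0.0, np.zeros(m)
    for j in range(k):
        V[:, j] = v
        u = A @ v - beta * u_prev           # alpha_j u_j = A v_j - beta_{j-1} u_{j-1}
        alphas[j] = np.linalg.norm(u)
        u /= alphas[j]
        U[:, j] = u
        if j < k - 1:
            w = A.T @ u - alphas[j] * v     # beta_j v_{j+1} = A^T u_j - alpha_j v_j
            beta = np.linalg.norm(w)
            betas[j] = beta
            v = w / beta
            u_prev = u
    B = np.diag(alphas) + np.diag(betas, 1)
    return U, B, V

# Rank-k approximation directly from the process: A ~ U @ B @ V.T
A = np.random.default_rng(3).standard_normal((20, 10))
U, B, V = lanczos_bidiag(A, 5)
```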
An Implicit Shift Bidiagonalization Algorithm For Ill-Posed Systems
 BIT, 1994
Abstract

Cited by 18 (0 self)
Iterative methods based on Lanczos bidiagonalization with full reorthogonalization (LBDR) are considered for solving large-scale discrete ill-posed linear least squares problems of the form min_x ‖Ax − b‖_2. Methods for regularization in the Krylov subspaces are discussed which use generalized cross-validation (GCV) for determining the regularization parameter. These methods have the advantage that no a priori information about the noise level is required. To improve convergence of the Lanczos process we apply a variant of the implicitly restarted Lanczos algorithm by Sorensen using zero shifts. Although this restarted method simply corresponds to using LBDR with a starting vector (AA^T)^p b, it is shown that carrying out the process implicitly is essential for numerical stability. An LBDR algorithm is presented which incorporates implicit restarts to ensure that the global minimum of the GCV curve corresponds to a minimum on the curve for the truncated SVD solution. Nume...
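The GCV principle this abstract relies on can be illustrated with plain truncated-SVD regularization, a simpler stand-in for the Lanczos-based method: choose the truncation index k minimizing GCV(k) = ‖Ax_k − b‖² / (m − k)², using no knowledge of the noise level. The test problem below (orthogonal factors, decaying singular values, noise at 1e-6) is an assumed toy, not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 50, 30

# Assumed toy ill-posed problem: orthogonal factors, rapidly decaying
# singular values, and additive noise at level 1e-6 (unknown to the method).
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.arange(n)
A = (U[:, :n] * s) @ V.T
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-6 * rng.standard_normal(m)

# Truncated-SVD solutions x_k; GCV(k) = ||A x_k - b||^2 / (m - k)^2 picks k.
Ub, sb, Vbt = np.linalg.svd(A, full_matrices=False)
coeffs = Ub.T @ b
solutions, gcv = [], []
for k in range(1, n):
    xk = Vbt[:k].T @ (coeffs[:k] / sb[:k])   # keep only the k largest triplets
    r = A @ xk - b
    gcv.append((r @ r) / (m - k) ** 2)
    solutions.append(xk)
k_best = int(np.argmin(gcv)) + 1             # no noise-level knowledge used
x_best = solutions[k_best - 1]
x_full = Vbt.T @ (coeffs / sb)               # naive solution: noise is amplified
```

The naive solution divides noisy coefficients by tiny singular values and is useless; the GCV-chosen truncation is far more accurate.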
Large Scale Sparse Singular Value Computations
 International Journal of Supercomputer Applications, 1992
Abstract

Cited by 14 (0 self)
In this paper, we present four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture. We particularly emphasize Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left and right singular vectors) for sparse matrices arising from two practical applications: information retrieval and seismic reflection tomography. The target architectures for our implementations of such methods are the Cray-2S/4-128 and Alliant FX/80. The sparse SVD problem is well motivated by recent information-retrieval techniques in which dominant singular values and their corresponding singular vectors of large sparse term-document matrices are desired, and by nonlinear inverse problems from seismic tomography applications in which approximate pseudo-inverses of large sparse Jacobian matrices are needed. It is hoped that this research will advance the dev...
Transfer Functions and Resolvent Norm Approximation of Large Matrices
 Electron. Trans. Numer. Anal., 1998
Abstract

Cited by 9 (2 self)
A unifying framework for methods employed in the approximation of the resolvent norm of nonnormal matrices is presented. This formulation uses specific transfer functions, and it provides new information about the approximation properties of these methods and their application in computing the pseudospectrum of matrices.

Key words. Resolvent norm, transfer function, Arnoldi iteration, pseudospectrum.

AMS subject classification. 65F15.

1. Introduction. We now know that the analysis of matrix-dependent algorithms is considerably more complicated when nonnormal matrices are involved; see for example [5]. In particular, several studies indicate that the eigenvalues of the matrix in question often provide insufficient or even misleading information [20]. This has been the motivation behind recent research on more reliable indicators as well as on methods for their practical computation. Several studies concur that a better accordance between theory and practice can be achieved by using ...
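For context, the ε-pseudospectrum mentioned here is the set of z with ‖(zI − A)⁻¹‖ ≥ 1/ε, equivalently σ_min(zI − A) ≤ ε. A brute-force grid evaluation makes the definition concrete; the 2×2 Jordan block and grid are illustrative assumptions, not the transfer-function methods of the paper (which exist precisely to avoid a dense SVD at every grid point for large matrices).

```python
import numpy as np

# A non-normal matrix (Jordan block): eigenvalues alone (both zero) are
# misleading about its behavior, which is what motivates pseudospectra.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

xs = np.linspace(-1, 1, 41)
ys = np.linspace(-1, 1, 41)
sig_min = np.empty((41, 41))
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        z = x + 1j * y
        # sigma_min(zI - A) = 1 / ||(zI - A)^{-1}||_2
        sig_min[i, j] = np.linalg.svd(z * np.eye(2) - A,
                                      compute_uv=False)[-1]

eps = 0.1
inside = sig_min <= eps        # grid points in the eps-pseudospectrum
```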
Multiprocessor Sparse SVD Algorithms And Applications
 1991
Abstract

Cited by 8 (3 self)
this memory is statically allocated, whereas on the Alliant FX/80 it is dynamically allocated as needed. On the Cray-2S/4-128, the vector z would be both retrieved from and written to core memory. However, on the Alliant FX/80, z may be fetched and held in the 512-kilobyte cache. Since memory accesses from the cache (fast local memory) can be almost twice as fast as those from the larger globally shared memory, we achieve an overall higher computational rate for multiplication by A
Restarted Block Lanczos Bidiagonalization Methods
 Numer. Algorithms
Abstract

Cited by 7 (4 self)
Abstract. The problem of computing a few of the largest or smallest singular values and associated singular vectors of a large matrix arises in many applications. This paper describes restarted block Lanczos bidiagonalization methods based on augmentation of Ritz vectors or harmonic Ritz vectors by block Krylov subspaces.

Key words. partial singular value decomposition, restarted iterative method, implicit shifts, augmentation.

AMS subject classifications. 65F15, 15A18