Results 1–10 of 205
Iterative Solution of Linear Systems
 Acta Numerica
, 1992
Abstract

Cited by 103 (8 self)
this paper is as follows. In Section 2, we present some background material on general Krylov subspace methods, of which CG-type algorithms are a special case. We recall the outstanding properties of CG and discuss the issue of optimal extensions of CG to non-Hermitian matrices. We also review GMRES and related methods, as well as CG-like algorithms for the special case of Hermitian indefinite linear systems. Finally, we briefly discuss the basic idea of preconditioning. In Section 3, we turn to Lanczos-based iterative methods for general non-Hermitian linear systems. First, we consider the nonsymmetric Lanczos process, with particular emphasis on the possible breakdowns and potential instabilities in the classical algorithm. Then we describe recent advances in understanding these problems and overcoming them by using look-ahead techniques. Moreover, we describe the quasi-minimal residual algorithm (QMR) proposed by Freund and Nachtigal (1990), which uses the look-ahead Lanczos process to obtain quasi-optimal approximate solutions. Next, a survey of transpose-free Lanczos-based methods is given. We conclude this section with comments on other related work and some historical remarks. In Section 4, we elaborate on CGNR and CGNE and point out situations where these approaches are optimal. The general class of Krylov subspace methods also contains parameter-dependent algorithms that, unlike CG-type schemes, require explicit information on the spectrum of the coefficient matrix. In Section 5, we discuss recent insights in obtaining appropriate spectral information for parameter-dependent Krylov subspace methods. After that, ... — R.W. Freund, G.H. Golub and N.M. Nachtigal
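The CG iteration that this survey takes as its starting point can be sketched compactly. The following is an illustrative implementation for a symmetric positive definite system, not code from the paper; the function name and the 2×2 test system are assumptions for demonstration.

```python
# Minimal conjugate gradient (CG) sketch for a symmetric positive definite
# system Ax = b — the prototype Krylov subspace method discussed above.
# Illustrative only; names and the test system are not from the paper.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve Ax = b for symmetric positive definite A."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # optimal step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # next A-conjugate direction
        rs_old = rs_new
    return x

# Usage on a small SPD test system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic CG terminates in at most n steps; the optimality property recalled in Section 2 is what the non-Hermitian extensions surveyed here attempt to generalize.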
Latent Semantic Indexing (LSI) and TREC-2
 The Second Text REtrieval Conference (TREC-2)
, 1994
Abstract

Cited by 99 (3 self)
this paper. The "ltc" weights were computed on this matrix. 3.2 SVD analysis
Exploiting latent semantic information in statistical language modeling
 Proc. IEEE. 88
, 2000
Abstract

Cited by 78 (5 self)
Statistical language models used in large-vocabulary speech recognition must properly encapsulate the various constraints, both local and global, present in the language. While local constraints are readily captured through n-gram modeling, global constraints, such as long-term semantic dependencies, have been more difficult to handle within a data-driven formalism. This paper focuses on the use of latent semantic analysis, a paradigm that automatically uncovers the salient semantic relationships between words and documents in a given corpus. In this approach, (discrete) words and documents are mapped onto a (continuous) semantic vector space, in which familiar clustering techniques can be applied. This leads to the specification of a powerful framework for automatic semantic classification, as well as the derivation of several language model families with various smoothing properties. Because of their large-span nature, these language models are well suited to complement conventional n-grams. An integrative formulation is proposed for harnessing this synergy, in which the latent semantic information is used to adjust the standard n-gram probability. Such hybrid language modeling compares favorably with the corresponding n-gram baseline: experiments conducted on the Wall Street Journal domain show a reduction in average word error rate of over 20%. This paper concludes with a discussion of intrinsic trade-offs, such as the influence of training data selection on the resulting performance. Keywords—Latent semantic analysis, multi-span integration, n-grams, speech recognition, statistical language modeling.
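The word/document mapping this abstract describes can be sketched with a truncated SVD of a term-document matrix. The toy corpus, the rank k=2, and all names below are illustrative assumptions, not the paper's data or code.

```python
# Hedged sketch of the latent semantic mapping: factor a term-document
# count matrix with a truncated SVD, placing words and documents in one
# low-dimensional semantic space where cosine similarity applies.
import numpy as np

docs = ["krylov subspace methods", "krylov iterative methods",
        "speech language model", "ngram language model"]
vocab = sorted({w for d in docs for w in d.split()})
# term-document count matrix W (terms x documents)
W = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

U, s, Vt = np.linalg.svd(W, full_matrices=False)
k = 2                            # rank of the semantic space (assumed)
term_vecs = U[:, :k] * s[:k]     # word coordinates
doc_vecs = Vt[:k].T * s[:k]      # document coordinates

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Documents on the same topic land close together in the semantic space
sim_same = cos(doc_vecs[0], doc_vecs[1])   # two Krylov documents
sim_diff = cos(doc_vecs[0], doc_vecs[2])   # Krylov vs. speech document
```

The language-model families in the paper build n-gram-adjusting probabilities on top of exactly this kind of continuous representation.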
On the convergence of reflective Newton methods for large-scale nonlinear minimization subject to bounds
, 1992
Abstract

Cited by 65 (4 self)
We consider a new algorithm, a reflective Newton method, for the problem of minimizing a smooth nonlinear function of many variables, subject to upper and/or lower bounds on some of the variables. This approach generates strictly feasible iterates by following piecewise linear ("reflection") paths. The reflective Newton approach does not require identification of an "activity set". In this report we establish that the reflective Newton approach is globally and quadratically convergent. Moreover, we develop a specific example of this general reflective path approach suitable for large-scale and sparse problems. Research partially supported by the Applied Mathematical Sciences Research Program (KC-04-02) of the Office of Energy Research of the U.S. Department of Energy under grant DE-FG02-86ER25013.A000, and in part by NSF, AFOSR, and ONR through grant DMS-8920550, and by the Cornell Theory Center, which receives major funding from the National Sci...
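The "reflection" idea in this abstract — folding a trial step that crosses a bound back into the feasible box — can be sketched in a few lines. This toy version performs a single reflection per coordinate; the paper's piecewise linear paths are more general, and all names and the example bounds here are assumptions.

```python
# Illustrative single-reflection step at box constraints [lower, upper]:
# a trial point that crosses a bound is mirrored back inside the box.
# A sketch of the geometric idea only, not the authors' algorithm.
import numpy as np

def reflect(x, p, lower, upper):
    """Reflect the trial point x + p at the box [lower, upper]."""
    y = x + p
    below = y < lower
    above = y > upper
    y[below] = 2 * lower[below] - y[below]   # mirror across the lower bound
    y[above] = 2 * upper[above] - y[above]   # mirror across the upper bound
    return y

x = np.array([0.5, 0.5])                 # strictly feasible iterate
p = np.array([0.7, -0.8])                # step that leaves the unit box
lower, upper = np.zeros(2), np.ones(2)
y = reflect(x, p, lower, upper)          # reflected, feasible trial point
```

Keeping iterates strictly inside the box this way is what lets the method avoid maintaining an explicit active set.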
SVDPACKC (Version 1.0) User's Guide
, 1993
Abstract

Cited by 64 (4 self)
SVDPACKC comprises four numerical (iterative) methods for computing the singular value decomposition (SVD) of large sparse matrices using ANSI C. This software package implements Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left and right singular vectors) of large sparse matrices. The package has been ported to a variety of machines ranging from supercomputers to workstations: CRAY Y-MP, IBM RS/6000-550, DEC 5000/100, HP 9000/750, SPARCstation 2, and Macintosh II/fx. This document (i) explains each algorithm in some detail, (ii) explains the input parameters for each program, (iii) explains how to compile and execute each program, and (iv) illustrates the performance of each method when computing lower-rank approximations to sparse term-document matrices from information retrieval applications. A user-friendly software interface to the package for UNIX-based systems and the Macintosh II/fx is als...
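A modern analogue of the computation SVDPACKC performs — the largest singular triplets of a large sparse matrix via a Lanczos-type iteration — is available in SciPy. The matrix shape, density, and k below are illustrative assumptions, not values from the user's guide.

```python
# Largest singular triplets of a sparse "term-document"-style matrix,
# computed with SciPy's ARPACK-backed svds (a Lanczos-type method).
# Shapes and k are illustrative; this is not SVDPACKC itself.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

A = sp.random(1000, 400, density=0.01, random_state=0, format="csr")

k = 5
U, s, Vt = svds(A, k=k)             # k largest singular triplets
order = np.argsort(s)[::-1]         # svds returns values in ascending order
s = s[order]
U, Vt = U[:, order], Vt[order]

# Each triplet satisfies A v = sigma u up to solver tolerance
residual = np.linalg.norm(A @ Vt[0] - s[0] * U[:, 0])
```

Truncating to the top k triplets is exactly the lower-rank approximation step used for the term-document matrices mentioned in the abstract.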
Automating the Assignment of Submitted Manuscripts to Reviewers
 In Research and Development in Information Retrieval
, 1992
Abstract

Cited by 56 (2 self)
The 117 manuscripts submitted for the Hypertext'91 conference were assigned to members of the review committee, using a variety of automated methods based on information retrieval principles and Latent Semantic Indexing. Fifteen reviewers provided exhaustive ratings for the submitted abstracts, indicating how well each abstract matched their interests. The automated methods do a fairly good job of assigning relevant papers for review, but they are still somewhat poorer than assignments made manually by human experts and substantially poorer than an assignment perfectly matching the reviewers' own ranking of the papers. A new automated assignment method called "n of 2n" achieves better performance than human experts by sending reviewers more papers than they actually have to review and then allowing them to choose part of their review load themselves. Keywords: Conferences, Program Committees, Reviewers, Referees, Manuscripts, Papers, Assignment, Matching, Interests, Latent Semantic Ind...
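The "n of 2n" assignment scheme described above can be sketched directly: each reviewer is sent their 2n best-matching papers and keeps n of them. The random scores below are stand-ins for the LSI similarities the paper actually computes, and every name here is illustrative.

```python
# Hedged sketch of the "n of 2n" reviewer-assignment idea: rank papers by
# similarity to each reviewer's profile, send the top 2n, and let the
# reviewer keep n. Scores are random stand-ins for LSI similarities.
import numpy as np

rng = np.random.default_rng(1)
n_papers, n_reviewers, n = 12, 3, 2
scores = rng.random((n_reviewers, n_papers))   # similarity(reviewer, paper)

# Send each reviewer their 2n best-matching candidate papers...
candidates = {r: list(np.argsort(scores[r])[::-1][: 2 * n])
              for r in range(n_reviewers)}
# ...and let them keep n (simulated here as the top n of the 2n).
assignments = {r: candidates[r][:n] for r in range(n_reviewers)}
```

The extra freedom of choosing from 2n candidates is what lets the scheme outperform a fixed automated assignment in the paper's evaluation.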
Non-rigid point set registration: Coherent Point Drift (CPD)
 In Advances in Neural Information Processing Systems 19
, 2006
Abstract

Cited by 55 (0 self)
We introduce Coherent Point Drift (CPD), a novel probabilistic method for non-rigid registration of point sets. The registration is treated as a Maximum Likelihood (ML) estimation problem with a motion coherence constraint over the velocity field, such that one point set moves coherently to align with the second set. We formulate the motion coherence constraint and derive a solution of regularized ML estimation through the variational approach, which leads to an elegant kernel form. We also derive the EM algorithm for the penalized ML optimization with deterministic annealing. The CPD method simultaneously finds both the non-rigid transformation and the correspondence between two point sets without making any prior assumption about the transformation model except that of motion coherence. The method can estimate complex nonlinear non-rigid transformations, and is shown to be accurate on 2D and 3D examples and robust in the presence of outliers and missing points.
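The probabilistic core of this approach — one point set defining Gaussian mixture centroids, with EM recovering both correspondence and transformation — can be illustrated in a deliberately reduced form. The sketch below estimates only a translation and omits CPD's motion-coherence regularization and annealing; all names, the noise scale, and the toy data are assumptions, not the authors' algorithm.

```python
# Greatly simplified sketch of GMM-based point set registration:
# X is modeled as samples from a mixture with centroids Y + t, and EM
# alternates soft correspondences (E-step) with a translation update
# (M-step). CPD's coherent non-rigid field is NOT implemented here.
import numpy as np

def em_translate(X, Y, sigma2=0.1, iters=50):
    """Align Y to X by translation t, modeling X ~ GMM centered at Y + t."""
    t = np.zeros(X.shape[1])
    for _ in range(iters):
        # E-step: posterior P[n, m] that centroid m generated point x_n
        diff = X[:, None, :] - (Y + t)[None, :, :]         # (N, M, D)
        logp = -np.sum(diff ** 2, axis=2) / (2 * sigma2)   # (N, M)
        P = np.exp(logp - logp.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)
        # M-step: translation = posterior-weighted mean of (x_n - y_m)
        t = t + (P[..., None] * diff).sum(axis=(0, 1)) / P.sum()
    return t

rng = np.random.default_rng(2)
Y = rng.random((50, 2))
true_t = np.array([0.3, -0.2])
X = Y + true_t                      # X is Y shifted by a known translation
t_hat = em_translate(X, Y)          # EM estimate of the shift
```

Replacing the translation update with a regularized displacement field in kernel form is, loosely, where the full CPD formulation departs from this toy version.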
Reduced-order modeling techniques based on Krylov subspaces and their use in circuit simulation
, 1998
An Implicitly Restarted Lanczos Method for Large Symmetric...
 ETNA
, 1994
Abstract

Cited by 54 (13 self)
The Lanczos process is a well-known technique for computing a few, say k, eigenvalues and associated eigenvectors of a large symmetric n × n matrix. However, loss of orthogonality of the computed Krylov subspace basis can reduce the accuracy of the computed approximate eigenvalues. In the implicitly restarted Lanczos method studied in the present paper, this problem is addressed by fixing the number of steps in the Lanczos process at a prescribed value, k + p, where p typically is not much larger, and may be smaller, than k. Orthogonality of the k + p basis vectors of the Krylov subspace is secured by reorthogonalizing these vectors when necessary. The implicitly restarted Lanczos method exploits the fact that the residual vector obtained by the Lanczos process is a function of the initial Lanczos vector. The method updates the initial Lanczos vector through an iterative scheme, whose purpose is to determine an initial vector such that the associated residual vector is tiny...
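Implicit restarting of this kind is what ARPACK implements, exposed in SciPy as eigsh. The sketch below asks for the k algebraically largest eigenvalues of a large sparse symmetric matrix; the diagonal test matrix and the values of n and k are illustrative assumptions, chosen so the answer is known exactly.

```python
# k largest eigenvalues of a large sparse symmetric matrix via SciPy's
# eigsh (ARPACK's implicitly restarted Lanczos method). The diagonal
# test matrix is illustrative: its eigenvalues are simply 1..n.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n, k = 2000, 4
A = sp.diags(np.arange(1.0, n + 1.0), format="csr")  # eigenvalues 1..n

vals, vecs = eigsh(A, k=k, which="LA")   # k algebraically largest
order = np.argsort(vals)                 # sort ascending for clarity
vals, vecs = vals[order], vecs[:, order]

# Eigenpair residual ||A v - lambda v|| should be near machine precision
residual = np.linalg.norm(A @ vecs[:, -1] - vals[-1] * vecs[:, -1])
```

Internally, eigsh keeps the basis at k + p vectors and restarts by updating the starting vector, exactly the scheme this abstract analyzes.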