Results 1–10 of 24
Matrices, vector spaces, and information retrieval
SIAM Review, 1999
Cited by 138 (3 self)
Abstract. The evolution of digital libraries and the Internet has dramatically transformed the processing, storage, and retrieval of information. Efforts to digitize text, images, video, and audio now consume a substantial portion of both academic and industrial activity. Even when there is no shortage of textual materials on a particular topic, procedures for indexing or extracting the knowledge or conceptual information contained in them can be lacking. Recently developed information retrieval technologies are based on the concept of a vector space. Data are modeled as a matrix, and a user’s query of the database is represented as a vector. Relevant documents in the database are then identified via simple vector operations. Orthogonal factorizations of the matrix provide mechanisms for handling uncertainty in the database itself. The purpose of this paper is to show how such fundamental mathematical concepts from linear algebra can be used to manage and index large text collections. Key words. information retrieval, linear algebra, QR factorization, singular value decomposition, vector spaces
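The vector-space model described in this abstract reduces to a few lines of linear algebra. A minimal Python/NumPy sketch with a made-up 5-term × 3-document term-frequency matrix (all data hypothetical):

```python
import numpy as np

# Hypothetical 5-term x 3-document term-frequency matrix:
# rows are terms, columns are documents.
A = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)

# A user's query over the same five terms, represented as a vector.
q = np.array([1, 0, 1, 0, 0], dtype=float)

# Relevant documents are identified via simple vector operations:
# cosine similarity between the query and each document column.
cos = (A.T @ q) / (np.linalg.norm(A, axis=0) * np.linalg.norm(q))
ranking = np.argsort(-cos)   # most relevant document first
```

The orthogonal factorizations mentioned in the abstract enter when A is replaced by a truncated SVD approximation before the same similarity computation.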
Analysis of the Cholesky decomposition of a semidefinite matrix
in Reliable Numerical Computation, 1990
Cited by 65 (4 self)
Perturbation theory is developed for the Cholesky decomposition of an n × n symmetric positive semidefinite matrix A of rank r. The matrix W = A₁₁⁻¹A₁₂ is found to play a key role in the perturbation bounds, where A₁₁ and A₁₂ are r × r and r × (n − r) submatrices of A, respectively. A backward error analysis is given; it shows that the computed Cholesky factors are the exact ones of a matrix whose distance from A is bounded by 4r(r + 1)(‖W‖₂ + 1)² u‖A‖₂ + O(u²), where u is the unit roundoff. For the complete pivoting strategy it is shown that ‖W‖₂² ≤ (1/3)(n − r)(4r − 1), and empirical evidence that ‖W‖₂ is usually small is presented. The overall conclusion is that the Cholesky algorithm with complete pivoting is stable for semidefinite matrices. Similar perturbation results are derived for the QR decomposition with column pivoting and for the LU decomposition with complete pivoting. The results give new insight into the reliability of these decompositions in rank estimation. Key words. Cholesky decomposition, positive semidefinite matrix, perturbation theory, backward error analysis, QR decomposition, rank estimation, LINPACK.
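The complete-pivoting Cholesky algorithm analyzed above can be sketched as follows. This is an illustrative dense implementation, not the LINPACK routine the paper studies:

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-12):
    """Cholesky with complete (diagonal) pivoting for a symmetric
    positive semidefinite A. Returns L and a permutation p such that
    A[np.ix_(p, p)] ~= L @ L.T. A sketch, not production code."""
    A = A.astype(float).copy()
    n = A.shape[0]
    p = np.arange(n)
    L = np.zeros_like(A)
    for k in range(n):
        # Pivot: bring the largest remaining diagonal entry to position k.
        j = k + int(np.argmax(np.diag(A)[k:]))
        A[[k, j], :] = A[[j, k], :]
        A[:, [k, j]] = A[:, [j, k]]
        L[[k, j], :] = L[[j, k], :]
        p[[k, j]] = p[[j, k]]
        d = A[k, k]
        if d <= tol:              # numerical rank reached
            break
        L[k, k] = np.sqrt(d)
        L[k+1:, k] = A[k+1:, k] / L[k, k]
        # Schur complement update of the trailing block.
        A[k+1:, k+1:] -= np.outer(L[k+1:, k], L[k+1:, k])
    return L, p
```

For a rank-r input the loop stops after r steps, and the number of positive diagonal entries of L estimates the rank, which is the connection to rank estimation drawn in the abstract.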
Matrix nearness problems and applications
Applications of Matrix Theory, 1989
Cited by 56 (7 self)
A matrix nearness problem consists of finding, for an arbitrary matrix A, a nearest member of some given class of matrices, where distance is measured in a matrix norm. A survey of nearness problems is given, with particular emphasis on the fundamental properties of symmetry, positive definiteness, orthogonality, normality, rank-deficiency and instability. Theoretical results and computational methods are described. Applications of nearness problems in areas including control theory, numerical analysis and statistics are outlined.
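Two of the nearness problems surveyed here have closed-form solutions in the Frobenius norm: the nearest symmetric matrix is the symmetric part, and the nearest positive semidefinite matrix is obtained by zeroing the negative eigenvalues of the symmetric part. A sketch:

```python
import numpy as np

def nearest_symmetric(A):
    # Frobenius-norm nearest symmetric matrix: the symmetric part of A.
    return 0.5 * (A + A.T)

def nearest_psd(A):
    # Frobenius-norm nearest positive semidefinite matrix: take the
    # symmetric part, then clip its negative eigenvalues to zero.
    w, V = np.linalg.eigh(nearest_symmetric(A))
    return V @ np.diag(np.maximum(w, 0.0)) @ V.T
```

For example, the nearest PSD matrix to [[0, 2], [0, 0]] is the rank-one matrix with all entries 0.5.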
Perturbation Theory for the Singular Value Decomposition
in SVD and Signal Processing, II: Algorithms, Analysis and Applications, 1990
Cited by 49 (0 self)
The singular value decomposition has a number of applications in digital signal processing. However, the decomposition must be computed from a matrix consisting of both signal and noise. It is therefore important to be able to assess the effects of the noise on the singular values and singular vectors, a problem in classical perturbation theory. In this paper we survey the perturbation theory of the singular value decomposition.
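A basic result of this perturbation theory, Weyl's inequality, bounds the movement of every singular value by the spectral norm of the noise. A quick numerical check on random data (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))          # the "signal" matrix
E = 1e-3 * rng.standard_normal((8, 5))   # the "noise"

s_clean = np.linalg.svd(A, compute_uv=False)
s_noisy = np.linalg.svd(A + E, compute_uv=False)

# Weyl's inequality: |sigma_i(A + E) - sigma_i(A)| <= ||E||_2 for all i.
bound = np.linalg.svd(E, compute_uv=False)[0]
assert np.all(np.abs(s_noisy - s_clean) <= bound + 1e-12)
```

Singular vectors are more delicate: their sensitivity depends on the gaps between singular values, which is a central theme of the survey.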
Computing Rank-Revealing QR Factorizations of Dense Matrices
Argonne Preprint ANL/MCS-P559-0196, Argonne National Laboratory, 1996
Cited by 39 (2 self)
this paper, and we give only a brief synopsis here. For details, the reader is referred to the code. Test matrices 1 through 5 were designed to exercise column pivoting. Matrix 6 was designed to test the behavior of the condition estimation in the presence of clusters for the smallest singular value. For the other cases, we employed the LAPACK matrix generator xLATMS, which generates random symmetric matrices by multiplying a diagonal matrix with prescribed singular values by random orthogonal matrices from the left and right. For the break1 distribution, all singular values are 1.0 except for one. In the arithmetic and geometric distributions, they decay from 1.0 to a specified smallest singular value in an arithmetic and geometric fashion, respectively. In the "reversed" distributions, the order of the diagonal entries was reversed. For test cases 7 through 12, we used xLATMS to generate a matrix of order
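The xLATMS-style construction described above, multiplying a diagonal matrix of prescribed singular values by random orthogonal factors, can be imitated in a few lines. This is a dense stand-in for illustration, not the LAPACK routine itself:

```python
import numpy as np

def random_matrix_with_sv(m, n, sv, seed=0):
    """Random m x n matrix with prescribed singular values sv:
    a diagonal matrix multiplied by random orthogonal matrices
    from the left and right (an xLATMS-style sketch)."""
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((m, m)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    S = np.zeros((m, n))
    k = min(m, n)
    S[:k, :k] = np.diag(sv)
    return U @ S @ V.T

# A "geometric" distribution: singular values decay geometrically
# from 1.0 to a specified smallest value.
sv = np.geomspace(1.0, 1e-6, num=5)
A = random_matrix_with_sv(7, 5, sv)
```

Matrices built this way have exactly known singular values, which is what makes them useful for testing rank-revealing QR codes.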
Collinearity and Least Squares Regression
Statistical Science, 1987
Cited by 27 (2 self)
In this paper we introduce certain numbers, called collinearity indices, which are useful in detecting near collinearities in regression problems. The coefficients enter adversely into formulas concerning significance testing and the effects of errors in the regression variables. Thus they provide simple regression diagnostics, suitable for incorporation in regression packages. Keywords and phrases: collinearity, ill-conditioning, linear regression, errors in the variables, regression diagnostics.
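One common formulation of these indices (following Stewart) takes the collinearity index of column j to be the product of that column's norm and the norm of the corresponding row of the pseudoinverse; well-conditioned columns give indices near 1, near-collinear columns give large indices. A sketch, with column centering and scaling left to the caller:

```python
import numpy as np

def collinearity_indices(A):
    """kappa_j = ||a_j||_2 * ||row j of pinv(A)||_2 for each
    column a_j of the regression matrix A (an illustrative sketch)."""
    Ap = np.linalg.pinv(A)
    return np.linalg.norm(A, axis=0) * np.linalg.norm(Ap, axis=1)
```

For a matrix with orthonormal columns every index equals 1; two nearly parallel columns drive their indices toward infinity.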
A method for solving certain quadratic programming problems arising in nonsmooth optimization
IMA Journal of Numerical Analysis, 1986
Cited by 18 (1 self)
We present a finite algorithm for minimizing a piecewise linear convex function augmented with a simple quadratic term. To solve the dual problem, which is of least-squares form with an additional linear term, we include in a standard active-set quadratic programming algorithm a new column-exchange strategy for treating positive semidefinite problems. Numerical results are given for an implementation using the Cholesky factorization.
Determining Rank in the Presence of Error
1993
Cited by 14 (0 self)
The problem of determining rank in the presence of error occurs in a number of applications. The usual approach is to compute a rank-revealing decomposition and make a decision about the rank by examining the small elements of the decomposition. In this paper we look at three commonly used decompositions: the singular value decomposition, the pivoted QR decomposition, and the URV decomposition.
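The SVD-based approach, the first of the three decompositions compared, determines rank by counting singular values above a tolerance. A minimal sketch, with an illustrative default tolerance:

```python
import numpy as np

def numerical_rank(A, tol=None):
    """Rank of A in the presence of error: the number of singular
    values above a tolerance (the SVD-based approach; the tolerance
    choice here is one common convention, not the paper's)."""
    s = np.linalg.svd(A, compute_uv=False)
    if tol is None:
        tol = max(A.shape) * np.finfo(float).eps * s[0]
    return int(np.sum(s > tol))

# A 5 x 5 matrix of rank 2, contaminated by noise at the 1e-10 level.
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 2))
A = B @ rng.standard_normal((2, 5)) + 1e-10 * rng.standard_normal((5, 5))
```

With a tolerance between the noise level and the smallest "signal" singular value, the estimated rank is 2; the pivoted QR and URV decompositions play the same role via the small diagonal blocks of their triangular factors.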
On the Computation of Null Spaces of Sparse Rectangular Matrices
Cited by 12 (0 self)
Abstract. Computing the null space of a sparse matrix, sometimes a rectangular sparse matrix, is an important part of some computations, such as embeddings and parametrization of meshes. We propose an efficient and reliable method to compute an orthonormal basis of the null space of a sparse square or rectangular matrix (usually with more rows than columns). The main computational component in our method is a sparse LU factorization with partial pivoting of the input matrix; this factorization is significantly cheaper than the QR factorization used in previous methods. The paper analyzes important theoretical aspects of the new method and demonstrates experimentally that it is efficient and reliable.
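For a small dense matrix, the standard orthogonal-factorization baseline that the paper's sparse LU method is designed to beat looks like the following (illustrative only; the paper's own LU-based algorithm is more involved):

```python
import numpy as np

def null_space_basis(A, tol=1e-10):
    """Orthonormal basis of the null space of A via the SVD
    (the dense baseline; the paper replaces the expensive orthogonal
    factorization with a sparse LU with partial pivoting)."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T   # right singular vectors for the zero singular values

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank 1, so a 2-dimensional null space
N = null_space_basis(A)
```

The columns of N are orthonormal and satisfy A @ N = 0 to roundoff; the cost of the SVD (or a QR factorization) is what makes this approach expensive for large sparse inputs.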