Results 1–10 of 547
Bundle Adjustment – A Modern Synthesis
Vision Algorithms: Theory and Practice, LNCS, 2000
Cited by 420 (10 self)
This paper is a survey of the theory and methods of photogrammetric bundle adjustment, aimed at potential implementors in the computer vision community. Bundle adjustment is the problem of refining a visual reconstruction to produce jointly optimal structure and viewing parameter estimates. Topics covered include: the choice of cost function and robustness; numerical optimization including sparse Newton methods, linearly convergent approximations, updating and recursive methods; gauge (datum) invariance; and quality control. The theory is developed for general robust cost functions rather than restricting attention to traditional nonlinear least squares.
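The "jointly optimal" refinement described above is, at its core, damped nonlinear least squares. As a minimal sketch (a toy curve fit of our own, not the paper's photogrammetric formulation), a Levenberg-Marquardt iteration solves (JᵀJ + λI)δ = −Jᵀr at each step:

```python
import numpy as np

# Toy illustration: refine parameters of y = a*exp(b*t) by damped
# Gauss-Newton (Levenberg-Marquardt) steps on a nonlinear least-squares cost.
def levenberg_marquardt(t, y, x0, lam=1e-3, iters=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        a, b = x
        pred = a * np.exp(b * t)
        r = pred - y                                   # residual vector
        J = np.column_stack([np.exp(b * t),            # d(pred)/da
                             a * t * np.exp(b * t)])   # d(pred)/db
        # damped normal equations: (J^T J + lam*I) delta = -J^T r
        delta = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
        x = x + delta
    return x

t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)                 # noise-free synthetic data
a, b = levenberg_marquardt(t, y, [1.0, 0.0])
```

Real bundle adjusters exploit the sparse block structure of JᵀJ rather than forming it densely, which is the focus of the survey's sparse Newton material.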
An introduction to the conjugate gradient method without the agonizing pain
1994
The Quadratic Eigenvalue Problem
2001
Cited by 174 (18 self)
We survey the quadratic eigenvalue problem, treating its many applications, its mathematical properties, and a variety of numerical solution techniques. Emphasis is given to exploiting both the structure of the matrices in the problem (dense, sparse, real, complex, Hermitian, skew-Hermitian) and the spectral properties of the problem. We classify numerical methods and catalogue available software.

Key words: quadratic eigenvalue problem, eigenvalue, eigenvector, matrix, matrix polynomial, second-order differential equation, vibration, Millennium footbridge, overdamped system, gyroscopic system, linearization, backward error, pseudospectrum, condition number, Krylov methods, Arnoldi method, Lanczos method, Jacobi-Davidson method

AMS subject classifications: 65F30
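As a small illustration of the linearization idea the survey covers (our sketch, assuming M is invertible; not code from the paper), the QEP (λ²M + λC + K)x = 0 can be reduced to an ordinary eigenproblem on the companion vector z = [x; λx]:

```python
import numpy as np

# Companion linearization of the quadratic eigenvalue problem
# (lam^2*M + lam*C + K) x = 0, assuming M is invertible.
def qep_eigpairs(M, C, K):
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K,        -Minv @ C]])
    lams, Z = np.linalg.eig(A)   # eigenvectors z = [x; lam*x]
    X = Z[:n, :]                 # top half of each z recovers x
    return lams, X

M = np.diag([1.0, 2.0])
C = np.array([[0.5, 0.1], [0.1, 0.3]])
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
lams, X = qep_eigpairs(M, C, K)
# each pair (lam, x) should satisfy the quadratic eigenvalue equation
res = max(np.linalg.norm((l**2 * M + l * C + K) @ X[:, i])
          for i, l in enumerate(lams))
```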
Deeper inside PageRank
Internet Mathematics, 2004
Cited by 158 (5 self)
This paper serves as a companion or extension to the “Inside PageRank” paper by Bianchini et al. [Bianchini et al. 03]. It is a comprehensive survey of all issues associated with PageRank, covering the basic PageRank model, available and recommended solution methods, storage issues, existence, uniqueness, and convergence properties, possible alterations to the basic model, suggested alternatives to the traditional solution methods, sensitivity and conditioning, and finally the updating problem. We introduce a few new results, provide an extensive reference list, and speculate about exciting areas of future research.
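The basic PageRank model can be sketched in a few lines (our illustration; H is a row-stochastic hyperlink matrix without dangling nodes, alpha the damping factor, v the uniform teleportation vector):

```python
import numpy as np

# Power-method iteration for the basic PageRank model:
# x <- alpha * H^T x + (1 - alpha) * v
def pagerank(H, alpha=0.85, tol=1e-12, max_iter=1000):
    n = H.shape[0]
    v = np.full(n, 1.0 / n)      # uniform teleportation vector
    x = v.copy()
    for _ in range(max_iter):
        x_new = alpha * H.T @ x + (1 - alpha) * v
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

# tiny 3-page web: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0
H = np.array([[0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
x = pagerank(H)   # page 2, with the most inlinks, ranks highest
```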
Matrices, vector spaces, and information retrieval
SIAM Review, 1999
Cited by 120 (2 self)
The evolution of digital libraries and the Internet has dramatically transformed the processing, storage, and retrieval of information. Efforts to digitize text, images, video, and audio now consume a substantial portion of both academic and industrial activity. Even when there is no shortage of textual materials on a particular topic, procedures for indexing or extracting the knowledge or conceptual information contained in them can be lacking. Recently developed information retrieval technologies are based on the concept of a vector space. Data are modeled as a matrix, and a user’s query of the database is represented as a vector. Relevant documents in the database are then identified via simple vector operations. Orthogonal factorizations of the matrix provide mechanisms for handling uncertainty in the database itself. The purpose of this paper is to show how such fundamental mathematical concepts from linear algebra can be used to manage and index large text collections.

Key words: information retrieval, linear algebra, QR factorization, singular value decomposition, vector spaces
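A minimal version of the vector-space model described above (the term counts and documents here are invented for illustration):

```python
import numpy as np

# Term-document matrix A: rows are terms, columns are documents.
terms = ["matrix", "vector", "retrieval", "library"]
A = np.array([[2, 0, 1],
              [1, 0, 1],
              [0, 3, 1],
              [0, 1, 0]], dtype=float)

# Query "matrix retrieval" as a term vector.
q = np.array([1.0, 0.0, 1.0, 0.0])

# Relevance via cosine similarity between q and each document column.
scores = (A.T @ q) / (np.linalg.norm(A, axis=0) * np.linalg.norm(q))
ranked = np.argsort(-scores)   # best-matching documents first
```

Document 2, which contains both query terms, scores highest; the paper's QR- and SVD-based refinements replace this raw matrix with a low-rank approximation.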
Preconditioning techniques for large linear systems: A survey
J. Comput. Phys., 2002
Cited by 118 (5 self)
This article surveys preconditioning techniques for the iterative solution of large linear systems, with a focus on algebraic methods suitable for general sparse matrices. Covered topics include progress in incomplete factorization methods, sparse approximate inverses, reorderings, parallelization issues, and block and multilevel extensions. Some of the challenges ahead are also discussed. An extensive bibliography completes the paper.
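As a concrete, if simplistic, instance of the algebraic preconditioning idea (our sketch, using diagonal Jacobi scaling rather than the incomplete factorizations the survey emphasizes):

```python
import numpy as np

# Preconditioned conjugate gradients for SPD A x = b, with the simplest
# algebraic preconditioner: M = diag(A), applied as z = M^{-1} r.
def pcg(A, b, tol=1e-10, max_iter=200):
    n = len(b)
    Minv = 1.0 / np.diag(A)          # Jacobi preconditioner
    x = np.zeros(n)
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x
        z_new = Minv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

# SPD test matrix: 1-D Laplacian
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b)
```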
Deriving Private Information from Randomized Data
ACM SIGMOD Conference, pp. 37–48, 2005
Cited by 100 (2 self)
Randomization has emerged as a useful technique for data disguising in privacy-preserving data mining. Its privacy properties have been studied in a number of papers. Kargupta et al. challenged the randomization schemes, and they pointed out that randomization might not be able to preserve privacy. However, it is still unclear what factors cause such a security breach, how they affect the privacy-preserving property of the randomization, and what kinds of data have a higher risk of disclosing their private contents even though they are randomized. We believe that the key factor is the correlations among attributes. We propose two data reconstruction methods that are based on data correlations. One method uses the Principal Component Analysis (PCA) technique, and the other uses the Bayes Estimate (BE) technique. We have conducted theoretical and experimental analysis on the relationship between data correlations and the amount of private information that can be disclosed based on our proposed data reconstruction schemes. Our studies have shown that when the correlations are high, the original data can be reconstructed more accurately, i.e., more private information can be disclosed. To improve privacy, we propose a modified randomization scheme, in which we make the correlation of the random noise similar to that of the original data. Our results have shown that the reconstruction accuracy of both the PCA-based and BE-based schemes becomes worse as the similarity increases.
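The PCA-based reconstruction idea can be caricatured in a few lines (synthetic data of our own, not the paper's experiments): when attributes are strongly correlated, projecting the randomized release onto the leading principal components strips much of the added noise, so the originals are recovered more accurately than from the raw release:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 8
base = rng.normal(size=(n, 1))
# strongly correlated attributes plus small independent variation
X = base @ np.ones((1, d)) + 0.1 * rng.normal(size=(n, d))
# randomized release: add independent noise to every attribute
Z = X + rng.normal(scale=1.0, size=(n, d))

# PCA-based reconstruction: project the release onto the top component
Zc = Z - Z.mean(axis=0)
_, _, Vt = np.linalg.svd(Zc, full_matrices=False)
k = 1
X_hat = Zc @ Vt[:k].T @ Vt[:k] + Z.mean(axis=0)

err_raw = np.linalg.norm(Z - X) / np.linalg.norm(X)
err_pca = np.linalg.norm(X_hat - X) / np.linalg.norm(X)
# err_pca is substantially smaller than err_raw: correlation leaks information
```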
Constraint Preconditioning for Indefinite Linear Systems
SIAM J. Matrix Anal. Appl., 2000
Cited by 81 (12 self)
The problem of finding good preconditioners for the numerical solution of indefinite linear systems is considered. Special emphasis is put on preconditioners that have a 2 × 2 block structure and which incorporate the (1,2) and (2,1) blocks of the original matrix. Results concerning the spectrum and form of the eigenvectors of the preconditioned matrix and its minimum polynomial are given. The consequences of these results are considered for a variety of Krylov subspace methods. Numerical experiments validate these conclusions.

Key words: preconditioning, indefinite matrices, Krylov subspace methods

AMS subject classifications: 65F10, 65F15, 65F50

1. Introduction. In this paper, we are concerned with investigating a new class of preconditioners for indefinite systems of linear equations of a sort which arise in constrained optimization as well as in least-squares, saddle-point and Stokes problems. We attempt to solve the indefinite linear system

    [ A   Bᵀ ] [ x₁ ]
    [ B   0  ] [ x₂ ] ...
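A small numerical sketch of this preconditioner class (our toy, taking G = I in place of the (1,1) block): for the saddle-point matrix K = [[A, Bᵀ], [B, 0]], precondition with P = [[I, Bᵀ], [B, 0]], which keeps the constraint blocks exact. A spectral property of this construction is that P⁻¹K has the eigenvalue 1 with multiplicity at least 2m, where m is the number of constraints:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 2
A_ = rng.normal(size=(n, n))
A = A_ @ A_.T + n * np.eye(n)            # SPD (1,1) block
B = rng.normal(size=(m, n))              # full-rank constraint block
Z = np.zeros((m, m))

K = np.block([[A, B.T], [B, Z]])         # indefinite saddle-point matrix
P = np.block([[np.eye(n), B.T], [B, Z]]) # constraint preconditioner, G = I

# count eigenvalues of P^{-1} K clustered at 1
eigs = np.linalg.eigvals(np.linalg.solve(P, K))
ones = int(np.sum(np.abs(eigs - 1.0) < 1e-4))
```

The remaining n − m eigenvalues are governed by A restricted to the nullspace of B, which is what makes Krylov methods converge quickly on the preconditioned system.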
Matrix Market: A Web Resource for Test Matrix Collections
The Quality of Numerical Software: Assessment and Enhancement, 1997
Cited by 81 (8 self)
We describe a repository of data for the testing of numerical algorithms and mathematical software for matrix computations. The repository is designed to accommodate both dense and sparse matrices, as well as software to generate matrices. It has been seeded with the well-known Harwell-Boeing sparse matrix collection. The raw data files have been augmented with an integrated World Wide Web interface which describes the matrices in the collection quantitatively and visually. For example, each matrix has a Web page which details its attributes, graphically depicts its sparsity pattern, and provides access to the matrix itself in several formats. In addition, a search mechanism is included which allows retrieval of matrices based on a variety of attributes, such as type and size, as well as through free-text search in abstracts. The URL is http://math.nist.gov/MatrixMarket/.
Improving Memory-System Performance of Sparse Matrix-Vector Multiplication
IBM Journal of Research and Development, 1997
Cited by 77 (0 self)
Sparse matrix-vector multiplication is an important kernel that often runs inefficiently on superscalar RISC processors. This paper describes techniques that increase instruction-level parallelism and improve performance. The techniques include a reordering, originally due to Das et al., to reduce cache misses; blocking to reduce load instructions; and prefetching to prevent multiple load-store units from stalling simultaneously. The techniques improve performance from about 40 Mflops (on a well-ordered matrix) to over 100 Mflops on a 266 Mflops machine. The techniques are applicable to other superscalar RISC processors as well and have improved performance on a Sun UltraSparc I workstation, for example.

1. Introduction. Sparse matrix-vector multiplication is an important computational kernel in many iterative linear solvers (see [5], for example). Unfortunately, on many computers this kernel runs slowly relative to other numerical codes, such as dense matrix computations. This paper propos...
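For reference, the unoptimized kernel in question is a plain compressed-sparse-row (CSR) matrix-vector product (our sketch; the cache-blocking and prefetching variants are machine-specific and not reproduced here):

```python
import numpy as np

# Baseline CSR sparse matrix-vector product y = A x.
# data/indices hold the nonzeros and their column indices row by row;
# indptr[i]:indptr[i+1] delimits row i.
def csr_matvec(data, indices, indptr, x):
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):
        s = 0.0
        for k in range(indptr[i], indptr[i + 1]):
            s += data[k] * x[indices[k]]   # indirect load of x is the bottleneck
        y[i] = s
    return y

# 3x3 example: [[1, 0, 2], [0, 3, 0], [4, 0, 5]]
data = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
indices = np.array([0, 2, 1, 0, 2])
indptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 1.0, 1.0])
y = csr_matvec(data, indices, indptr, x)
```

The indirect access `x[indices[k]]` is exactly where the paper's reordering and blocking pay off: consecutive nonzeros that touch nearby entries of `x` turn cache misses into hits.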