Results 1–10 of 26
Bundle adjustment – a modern synthesis
 Vision Algorithms: Theory and Practice, LNCS
, 2000
"... This paper is a survey of the theory and methods of photogrammetric bundle adjustment, aimed at potential implementors in the computer vision community. Bundle adjustment is the problem of refining a visual reconstruction to produce jointly optimal structure and viewing parameter estimates. Topics c ..."
Abstract

Cited by 386 (12 self)
This paper is a survey of the theory and methods of photogrammetric bundle adjustment, aimed at potential implementors in the computer vision community. Bundle adjustment is the problem of refining a visual reconstruction to produce jointly optimal structure and viewing parameter estimates. Topics covered include: the choice of cost function and robustness; numerical optimization including sparse Newton methods, linearly convergent approximations, updating and recursive methods; gauge (datum) invariance; and quality control. The theory is developed for general robust cost functions rather than restricting attention to traditional nonlinear least squares.
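The refinement problem the abstract describes — minimizing a robust cost over structure and viewing parameters — can be illustrated with iteratively reweighted Gauss-Newton on a toy problem. The sketch below is not the paper's method; it is a minimal illustration of a robust (Huber) cost on a hypothetical 1-D line fit with outliers, with all data and parameter names invented here:

```python
import numpy as np

def huber_weights(r, delta=1.0):
    """IRLS weights for the Huber cost: quadratic near zero, linear in the tails."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / np.maximum(a, delta))

def robust_gauss_newton(residual, jacobian, x0, iters=20, delta=1.0):
    """Gauss-Newton with Huber reweighting: approximately min_x sum_i huber(r_i(x))."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        w = huber_weights(r, delta)
        # Weighted normal equations (J^T W J) dx = -J^T W r
        JW = J * w[:, None]
        dx = np.linalg.solve(J.T @ JW, -JW.T @ r)
        x = x + dx
    return x

# Hypothetical data: a line y = 2 + 3t with small noise and a few gross outliers.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
y = 2.0 + 3.0 * t + 0.01 * rng.standard_normal(50)
y[::10] += 5.0                                  # gross outliers
res = lambda p: (p[0] + p[1] * t) - y
jac = lambda p: np.column_stack([np.ones_like(t), t])
p = robust_gauss_newton(res, jac, [0.0, 0.0], delta=0.1)
```

The Huber weighting caps each outlier's influence on the normal equations, so the recovered parameters stay close to the true line; with a pure least-squares cost the outliers would drag the fit noticeably upward.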
Krylov Projection Methods For Model Reduction
, 1997
"... This dissertation focuses on efficiently forming reducedorder models for large, linear dynamic systems. Projections onto unions of Krylov subspaces lead to a class of reducedorder models known as rational interpolants. The cornerstone of this dissertation is a collection of theory relating Krylov p ..."
Abstract

Cited by 119 (3 self)
This dissertation focuses on efficiently forming reduced-order models for large, linear dynamic systems. Projections onto unions of Krylov subspaces lead to a class of reduced-order models known as rational interpolants. The cornerstone of this dissertation is a collection of theory relating Krylov projection to rational interpolation. Based on this theoretical framework, three algorithms for model reduction are proposed. The first algorithm, dual rational Arnoldi, is a numerically reliable approach involving orthogonal projection matrices. The second, rational Lanczos, is an efficient generalization of existing Lanczos-based methods. The third, rational power Krylov, avoids orthogonalization and is suited for parallel or approximate computations. The performance of the three algorithms is compared via a combination of theory and examples. Independent of the precise algorithm, a host of supporting tools are also developed to form a complete model-reduction package. Techniques for choosing the matching frequencies, estimating the modeling error, ensuring the model's stability, treating multiple-input multiple-output systems, implementing parallelism, and avoiding a need for exact factors of large matrix pencils are all examined to various degrees.
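The core idea — a Krylov projection that yields a rational interpolant of the transfer function — can be sketched in a few lines. The sketch below is a generic one-sided shifted Arnoldi projection on synthetic data (not any of the three algorithms the abstract names, and with a dense inverse where practice would use a sparse factorization): the reduced model matches k moments of H(s) about the expansion point s0.

```python
import numpy as np

def arnoldi_basis(matvec, v0, k):
    """Orthonormal basis for the Krylov space span{v0, M v0, ..., M^{k-1} v0}."""
    n = v0.size
    V = np.zeros((n, k))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(1, k):
        w = matvec(V[:, j - 1])
        for _ in range(2):                      # two Gram-Schmidt passes
            w -= V[:, :j] @ (V[:, :j].T @ w)
        V[:, j] = w / np.linalg.norm(w)
    return V

# Toy SISO system H(s) = c^T (sI - A)^{-1} b (synthetic, not from the thesis):
rng = np.random.default_rng(1)
n, k, s0 = 200, 8, 1.0                          # s0: matching frequency
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
b, c = rng.standard_normal(n), rng.standard_normal(n)
M = np.linalg.inv(s0 * np.eye(n) - A)           # in practice: a sparse factorization
V = arnoldi_basis(lambda v: M @ v, M @ b, k)

# One-sided (orthogonal) projection; the reduced model interpolates H about s0.
Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c
H  = lambda s: c  @ np.linalg.solve(s * np.eye(n) - A,  b)
Hr = lambda s: cr @ np.linalg.solve(s * np.eye(k) - Ar, br)
err = abs(H(s0) - Hr(s0))                       # interpolation error at s0
```

Here the full 200-state model is replaced by an 8-state one whose transfer function agrees with H at (and, through higher moments, near) the matching frequency s0.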
Flexible conjugate gradients
 SIAM J. Sci. Comput
, 2000
"... Abstract. We analyze the conjugate gradient (CG) method with preconditioning slightly variable from one iteration to the next. To maintain the optimal convergence properties, we consider a variant proposed by Axelsson that performs an explicit orthogonalization of the search directions vectors. For ..."
Abstract

Cited by 43 (7 self)
Abstract. We analyze the conjugate gradient (CG) method with a preconditioner that varies slightly from one iteration to the next. To maintain the optimal convergence properties, we consider a variant proposed by Axelsson that performs an explicit orthogonalization of the search direction vectors. For this method, which we refer to as flexible CG, we develop a theoretical analysis showing that the convergence rate is essentially independent of the variations in the preconditioner as long as the latter are kept sufficiently small. We further discuss the real convergence rate on the basis of some heuristic arguments supported by numerical experiments. Depending on the eigenvalue distribution corresponding to the fixed reference preconditioner, several situations have to be distinguished. In some cases, the convergence is as fast with truncated versions of the algorithm or even with the standard CG method, and quite large variations are allowed without too much penalty. In other cases, the flexible variant effectively outperforms the standard method, while the need for truncation limits the size of the variations that can reasonably be allowed.
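The variant the abstract refers to can be sketched as CG with full explicit A-orthogonalization of the search directions, which tolerates a preconditioner that changes at every iteration. This is a minimal illustration on synthetic data, not the paper's implementation; the jittered diagonal preconditioner is invented here:

```python
import numpy as np

def flexible_cg(A, b, precond, tol=1e-10, maxit=200):
    """CG that explicitly A-orthogonalizes each new search direction against all
    previous ones, so the preconditioner may vary from iteration to iteration."""
    n = b.size
    x = np.zeros(n)
    r = b.copy()
    P = []                                   # stored pairs (p_i, A p_i)
    for k in range(maxit):
        z = precond(r, k)                    # preconditioner may depend on k
        p = z.copy()
        for (pi, Api) in P:
            p -= (z @ Api) / (pi @ Api) * pi # explicit A-orthogonalization
        Ap = A @ p
        alpha = (p @ r) / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        P.append((p, Ap))
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
    return x

# Synthetic SPD problem with a diagonal preconditioner that jitters every step:
rng = np.random.default_rng(5)
n = 80
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)
b = rng.standard_normal(n)
d = np.diag(A).copy()
M = lambda r, k: r / (d * (1.0 + 0.5 * np.sin(3.0 * k)))
x = flexible_cg(A, b, M)
```

Storing and orthogonalizing against all previous directions is the price paid for the flexibility; the truncated versions mentioned in the abstract keep only a window of recent directions to bound this cost.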
From Potential Theory To Matrix Iterations In Six Steps
 SIAM REVIEW
"... The theory of the convergence of Krylov subspace iterations for linear systems of equations (conjugate gradients, biconjugate gradients, GMRES, QMR, BiCGSTAB, ...) is reviewed. For a computation of this kind, an estimated asymptotic convergence factor ae 1 can be derived by solving a problem of pot ..."
Abstract

Cited by 35 (4 self)
The theory of the convergence of Krylov subspace iterations for linear systems of equations (conjugate gradients, biconjugate gradients, GMRES, QMR, BiCGSTAB, ...) is reviewed. For a computation of this kind, an estimated asymptotic convergence factor ρ ≤ 1 can be derived by solving a problem of potential theory or conformal mapping. Six approximations are involved in relating the actual computation to this scalar estimate. These six approximations are discussed in a systematic way and illustrated by a sequence of examples computed with tools of numerical conformal mapping and semidefinite programming.
Estimates in Quadratic Formulas
, 1994
"... Let A be a real symmetric positive definite matrix. We consider three particular questions, namely estimates for the error in linear systems Ax = b, minimizing quadratic functional min x (x T Ax \Gamma 2b T x) subject to the constraint k x k= ff, ff !k A \Gamma1 b k, and estimates for the e ..."
Abstract

Cited by 26 (7 self)
Let A be a real symmetric positive definite matrix. We consider three particular questions: estimates for the error in linear systems Ax = b; minimizing the quadratic functional min_x (x^T A x − 2 b^T x) subject to the constraint ||x|| = α, α < ||A^{-1} b||; and estimates for the entries of the matrix inverse A^{-1}. All of these questions can be formulated as a problem of finding an estimate or an upper and lower bound on u^T F(A) u, where F(A) = A^{-1} resp. F(A) = A^{-2}, and u is a real vector. This problem can be considered in terms of estimates in Gauss-type quadrature formulas, which can be effectively computed by exploiting the underlying Lanczos process. Using this approach, we first recall the exact-arithmetic solution of the questions formulated above and then analyze the effect of rounding errors in the quadrature calculations. It is proved that the basic relation between the accuracy of Gauss quadrature for f(λ) = λ^{-1} and the rate of ...
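The Lanczos route to such estimates can be sketched concretely: k Lanczos steps started from u produce a tridiagonal T_k, and the Gauss-quadrature estimate of u^T A^{-1} u is ||u||^2 times the (1,1) entry of T_k^{-1}. The sketch below uses synthetic data and is only a minimal illustration of this connection:

```python
import numpy as np

def lanczos_tridiag(A, u, k):
    """k steps of the Lanczos process on SPD A started from u; returns T_k."""
    n = u.size
    alphas, betas = [], []
    q_prev = np.zeros(n)
    q = u / np.linalg.norm(u)
    beta = 0.0
    for _ in range(k):
        w = A @ q - beta * q_prev
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        q_prev, q = q, w / beta
    return np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)

# Synthetic SPD matrix and vector:
rng = np.random.default_rng(2)
n, k = 120, 10
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)
u = rng.standard_normal(n)

T = lanczos_tridiag(A, u, k)
e1 = np.zeros(k); e1[0] = 1.0
est = (u @ u) * (e1 @ np.linalg.solve(T, e1))   # Gauss-rule estimate of u^T A^{-1} u
exact = u @ np.linalg.solve(A, u)
# In exact arithmetic the Gauss rule approaches the true value from below as k grows.
```

Only matrix-vector products with A are needed, which is what makes the approach attractive for large sparse matrices; the rounding-error behavior of exactly this computation is what the paper analyzes.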
Inexact Krylov subspace methods for linear systems
, 2002
"... There is a class of linear problems for which the computation of the matrixvector product is very expensive since a time consuming approximation method is necessary to compute it with some prescribed relative precision. In this paper we investigate the e#ect of an approximately computed matrixvect ..."
Abstract

Cited by 26 (4 self)
There is a class of linear problems for which the computation of the matrix-vector product is very expensive, since a time-consuming approximation method is necessary to compute it to some prescribed relative precision. In this paper we investigate the effect of an approximately computed matrix-vector product on the convergence and accuracy of several Krylov subspace solvers. The insight obtained is used to tune the precision of the matrix-vector product in every iteration so that an overall efficient process is obtained. This supports the empirical relaxation strategy of Bouras and Frayssé proposed in [2]. These strategies can lead to considerable savings over the standard approach of using a fixed relative precision for the matrix-vector product in every step. We will argue that the success of a relaxation strategy depends on the underlying way the Krylov subspace is constructed and not on the optimality properties for the residuals. Our analysis leads to an improved version of a strategy of Bouras, Frayssé, and Giraud [3] for the Conjugate Gradient method in the case of Hermitian indefinite matrices.
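The relaxation idea can be demonstrated by simulation: pollute each matrix-vector product with a random error of controlled relative size η_k, and let η_k grow inversely with the residual norm. The sketch below is only an illustration of this principle on a synthetic SPD problem (the cap on η and all parameter choices are invented here, not taken from [2] or [3]):

```python
import numpy as np

def cg_inexact(A, b, tol=1e-8, maxit=200, relax=True, seed=0):
    """CG in which every matrix-vector product is polluted by a random error of
    relative size eta_k. With relax=True, eta_k grows like tol/||r_k||, i.e. the
    product is computed ever more loosely as the residual decreases."""
    rng = np.random.default_rng(seed)
    n = b.size
    x = np.zeros(n); r = b.copy(); p = r.copy()
    nb = np.linalg.norm(b)
    while maxit > 0 and np.linalg.norm(r) > tol * nb:
        maxit -= 1
        eta = min(1e-2, tol * nb / np.linalg.norm(r)) if relax else tol
        Ap = A @ p
        e = rng.standard_normal(n)
        Ap = Ap + eta * np.linalg.norm(Ap) * e / np.linalg.norm(e)  # inexact product
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        r, p = r_new, r_new + beta * p
    return x

# Synthetic well-conditioned SPD problem:
rng = np.random.default_rng(4)
n = 100
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)
b = rng.standard_normal(n)
x = cg_inexact(A, b, tol=1e-8)
```

Even though the later products are computed far more loosely than the target tolerance, the true residual stagnates only near the accumulated perturbation level, which is the observation the relaxation strategies exploit.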
Error analysis of the Lanczos algorithm for the nonsymmetric eigenvalue problem
 In R
, 1994
"... Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at ..."

Cited by 19 (3 self)
Lanczos Methods For The Solution Of Nonsymmetric Systems Of Linear Equations
, 1992
"... . The Lanczos or biconjugate gradient method is often an effective means for solving nonsymmetric systems of linear equations. However, the method sometimes experiences breakdown, a near division by zero which may hinder or preclude convergence. In this paper we present some theoretical results on t ..."
Abstract

Cited by 17 (3 self)
The Lanczos or biconjugate gradient method is often an effective means for solving nonsymmetric systems of linear equations. However, the method sometimes experiences breakdown, a near division by zero which may hinder or preclude convergence. In this paper we present some theoretical results on the nature and likelihood of the phenomenon of breakdown. We also define several new algorithms which substantially mitigate the problem of breakdown. Numerical comparisons of the new algorithms and the standard algorithms are given.
Key words: linear systems, iterative methods, nonsymmetric, Lanczos
AMS(MOS) subject classifications: 65F10, 65F15
1. Introduction. In this paper we consider methods for solving the linear system of equations Au = b, (1) where A ∈ ℂ^{N×N} is a given nonsingular matrix. When A is large and sparse, iterative methods in many cases are effective means for solving (1). In particular, when A is Hermitian and positive definite (HPD), the conjugate gradient (C...
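The two divisions that can break down are visible in a plain BiCG implementation: the scalar rt^T r from the two-sided Lanczos recurrence, and the pivot pt^T A p. A minimal sketch with breakdown detection (the tolerances and test problem are invented here, and this is standard BiCG, not the paper's remedies):

```python
import numpy as np

def bicg(A, b, tol=1e-10, maxit=500, eps=1e-14):
    """Biconjugate gradients for nonsymmetric A, flagging the two possible
    (near-)breakdowns: rt^T r ~ 0 (Lanczos) and pt^T A p ~ 0 (pivot)."""
    n = b.size
    x = np.zeros(n)
    r = b.copy(); rt = r.copy()            # shadow residual, here rt = r0
    p = r.copy(); pt = rt.copy()
    rho = rt @ r
    for _ in range(maxit):
        Ap = A @ p
        sigma = pt @ Ap
        if abs(sigma) < eps * np.linalg.norm(pt) * np.linalg.norm(Ap):
            return x, "pivot breakdown"
        alpha = rho / sigma
        x += alpha * p
        r -= alpha * Ap
        rt -= alpha * (A.T @ pt)
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, "converged"
        rho_new = rt @ r
        if abs(rho_new) < eps * np.linalg.norm(rt) * np.linalg.norm(r):
            return x, "Lanczos breakdown"
        beta = rho_new / rho
        rho = rho_new
        p = r + beta * p
        pt = rt + beta * pt
    return x, "maxit"

# Synthetic well-conditioned nonsymmetric system:
rng = np.random.default_rng(6)
n = 100
A = np.eye(n) + 0.02 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
x, status = bicg(A, b)
```

On benign problems like this one the method simply converges; the look-ahead and product-form variants the paper proposes are designed to step over iterations where one of the two flagged scalars nearly vanishes.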
Differences in the effects of rounding errors in Krylov solvers for symmetric indefinite linear systems
, 1999
"... The 3term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of symmetric indefinite linear systems, by solving the reduced system in one way or another. Thi ..."
Abstract

Cited by 15 (0 self)
The 3-term Lanczos process leads, for a symmetric matrix, to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of symmetric indefinite linear systems, by solving the reduced system in one way or another. This leads to well-known methods: MINRES, GMRES, and SYMMLQ. We will discuss in what way and to what extent these approaches differ in their sensitivity to rounding errors. In our analysis we will assume that the Lanczos basis is generated in exactly the same way for the different methods, and we will not consider the errors in the Lanczos process itself. We will show that the method of solution may lead, under certain circumstances, to large additional errors, that are not corrected by continuing the iteration process. Our findings are supported and illustrated by numerical examples.
1 Introduction
We will consider iterative methods for the construction of approximate solutions, starting with...
Componentwise Error Analysis for Stationary Iterative Methods
, 1993
"... How small can a stationary iterative method for solving a linear system Ax = b make the error and the residual in the presence of rounding errors? We give a componentwise error analysis that provides an answer to this question and we examine the implications for numerical stability. The Jacobi, Gau ..."
Abstract

Cited by 11 (6 self)
How small can a stationary iterative method for solving a linear system Ax = b make the error and the residual in the presence of rounding errors? We give a componentwise error analysis that provides an answer to this question, and we examine the implications for numerical stability. The Jacobi, Gauss-Seidel and successive overrelaxation methods are all found to be forward stable in a componentwise sense and backward stable in a normwise sense, provided certain conditions are satisfied that involve the matrix, its splitting, and the computed iterates. We show that the stronger property of componentwise backward stability can be achieved using one step of iterative refinement in fixed precision, under suitable assumptions.
Key words: stationary iteration, Jacobi method, Gauss-Seidel method, successive overrelaxation, error analysis, numerical stability
AMS subject classifications: primary 65F10, 65G05
1 Introduction
The effect of rounding errors on LU and QR factorization methods f...
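The quantity such a componentwise analysis bounds can be computed directly for a concrete run: the componentwise relative residual |b − Ax| / (|A||x| + |b|). The sketch below runs plain Jacobi iteration on a synthetic diagonally dominant system (the data and iteration count are invented here) and evaluates that measure:

```python
import numpy as np

def jacobi(A, b, iters=200):
    """Stationary Jacobi iteration x_{k+1} = D^{-1} (b - (A - D) x_k)."""
    d = np.diag(A)
    R = A - np.diag(d)
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - R @ x) / d
    return x

# Synthetic strictly diagonally dominant system (guarantees Jacobi converges):
rng = np.random.default_rng(3)
n = 50
A = np.eye(n) + rng.standard_normal((n, n)) / (2 * n)
xtrue = rng.standard_normal(n)
b = A @ xtrue
x = jacobi(A, b)

# Componentwise relative residual |b - Ax| / (|A||x| + |b|): forward/backward
# stability in the componentwise sense means this stays at roundoff level.
comp_res = np.abs(b - A @ x) / (np.abs(A) @ np.abs(x) + np.abs(b))
```

On a well-behaved splitting like this one the measure settles near machine precision; the paper's conditions on the matrix, its splitting, and the iterates characterize when that can be expected in general.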