Results 1 – 10 of 10
On the solution of equality constrained quadratic programming problems arising . . ., 1998
A multifrontal QR factorization approach to distributed inference applied to multirobot localization and mapping, in Proceedings of the American Association for Artificial Intelligence, 2005
Cited by 22 (8 self)
QR factorization is most often used as a “black box” algorithm, but is in fact an elegant computation on a factor graph. By computing a rooted clique tree on this graph, the computation can be parallelized across subtrees, which forms the basis of so-called multifrontal QR methods. By judiciously choosing the order in which variables are eliminated in the clique tree computation, we show that one straightforwardly obtains a method for performing inference in distributed sensor networks. One obvious application is distributed localization and mapping with a team of robots. We phrase the problem as inference on a large-scale Gaussian Markov Random Field induced by the measurement factor graph, and show how multifrontal QR on this graph solves for the global map and all the robot poses in a distributed fashion. The method is illustrated using both small- and large-scale simulations, and validated in practice through actual robot experiments.
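The subtree parallelism this abstract describes can be illustrated on a toy problem. The sketch below is not the paper's implementation: it uses dense QR and hypothetical numbers. Each "robot" subtree eliminates its local variable, sends the reduced system on the shared variable to a root, and the combined answer matches a centralized solve.

```python
import numpy as np

# Toy measurement Jacobian: two "robots" (x1, x2) each observing a shared
# variable s. Variable order: [x1, x2, s]; all numbers are hypothetical.
A = np.array([
    [1.0, 0.0, -1.0],   # robot 1 observes s
    [1.0, 0.0,  0.0],   # prior on x1
    [0.0, 1.0, -1.0],   # robot 2 observes s
    [0.0, 1.0,  0.0],   # prior on x2
])
b = np.array([0.5, 1.0, -0.5, 2.0])

def eliminate_local(A_local, b_local, n_local):
    """QR-eliminate the first n_local columns of a subtree's system and
    return (a) the eliminated rows for later back-substitution and
    (b) the reduced system on the separator variables."""
    Q, R = np.linalg.qr(A_local, mode='complete')
    d = Q.T @ b_local
    return R[:n_local], d[:n_local], R[n_local:, n_local:], d[n_local:]

# Subtree 1 eliminates x1 (its columns are [x1, s]); subtree 2 eliminates x2.
R1, d1, S1, e1 = eliminate_local(A[:2][:, [0, 2]], b[:2], 1)
R2, d2, S2, e2 = eliminate_local(A[2:][:, [1, 2]], b[2:], 1)

# The root stacks the two separator systems and solves for s; each subtree
# then back-substitutes for its own variable independently.
S = np.vstack([S1, S2]); e = np.concatenate([e1, e2])
s = np.linalg.lstsq(S, e, rcond=None)[0]
x1 = np.linalg.solve(R1[:, :1], d1 - R1[:, 1:] @ s)
x2 = np.linalg.solve(R2[:, :1], d2 - R2[:, 1:] @ s)

# Because the eliminations are orthogonal, this agrees with the
# centralized least-squares solution on the full system.
x_central = np.linalg.lstsq(A, b, rcond=None)[0]
```

The two calls to `eliminate_local` are independent and could run on different machines, which is the essence of parallelizing across clique-tree subtrees.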
Multifrontal multithreaded rank-revealing sparse QR factorization
Cited by 10 (2 self)
SuiteSparseQR is a sparse QR factorization package based on the multifrontal method. Within each frontal matrix, LAPACK and the multithreaded BLAS enable the method to obtain high performance on multicore architectures. Parallelism across different frontal matrices is handled with Intel’s Threading Building Blocks library. The symbolic analysis and ordering phase pre-eliminates singletons by permuting the input matrix into the form [R11 R12; 0 A22], where R11 is upper triangular with diagonal entries above a given tolerance. Next, the fill-reducing ordering, column elimination tree, and frontal matrix structures are found without requiring the formation of the pattern of A^T A. Rank detection is performed within each frontal matrix using Heath’s method, which does not require column pivoting. The resulting sparse QR factorization obtains a substantial fraction of the theoretical peak performance of a multicore computer.
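The tolerance-based rank detection mentioned here can be sketched in a few lines. This is only a simplified stand-in for Heath's method (not SuiteSparseQR itself, and the tolerance is arbitrary): the numerical rank is estimated from the number of diagonal entries of R that exceed the tolerance, on hypothetical toy data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a 6-by-4 matrix of rank 3: the last column is a combination
# of the first two, so one diagonal entry of R collapses to roundoff.
B = rng.standard_normal((6, 3))
A = np.hstack([B, B[:, :1] + B[:, 1:2]])

R = np.linalg.qr(A, mode='r')
# "A given tolerance" (here chosen arbitrarily, relative to the largest
# diagonal entry); entries below it are treated as numerically zero.
tol = 1e-10 * np.abs(np.diag(R)).max()
est_rank = int(np.sum(np.abs(np.diag(R)) > tol))
```

Heath's actual method is more careful than inspecting a single QR (it applies further orthogonal rotations when small diagonals appear), but the tolerance test above is the core idea, and it requires no column pivoting.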
USING PERTURBED QR FACTORIZATIONS TO SOLVE LINEAR LEAST-SQUARES PROBLEMS
Cited by 7 (6 self)
We propose and analyze a new tool to help solve sparse linear least-squares problems min_x ||Ax − b||_2. Our method is based on a sparse QR factorization of a low-rank perturbation Ä of A. More precisely, we show that the R factor of Ä is an effective preconditioner for the least-squares problem min_x ||Ax − b||_2 when solved using LSQR. We propose applications for the new technique. When A is rank deficient, we can add rows to ensure that the preconditioner is well-conditioned without column pivoting. When A is sparse except for a few dense rows, we can drop these dense rows from A to obtain Ä. Another application is solving an updated or downdated problem: if R is a good preconditioner for the original problem A, it is a good preconditioner for the updated/downdated problem Ä. We can also solve what-if scenarios, where we want to find the solution if a column of the original matrix is changed/removed. We present a spectral theory that analyzes the generalized spectrum of the pencil (A^* A, R^* R) and analyze the applications.
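The right-preconditioning idea can be sketched with small dense stand-ins (the paper works with sparse factors; the matrices, sizes, and the row-appending perturbation below are all hypothetical). LSQR is run on A R^{-1} via a LinearOperator, and the solution is recovered as x = R^{-1} y.

```python
import numpy as np
from scipy.linalg import solve_triangular
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(1)
m, n = 50, 10
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Perturb A (here: append a few rows, as one might to repair rank
# deficiency) and take the R factor of the perturbed matrix.
A_tilde = np.vstack([A, 0.1 * rng.standard_normal((3, n))])
R = np.linalg.qr(A_tilde, mode='r')

def matvec(y):
    return A @ solve_triangular(R, y)               # (A R^{-1}) y

def rmatvec(r):
    return solve_triangular(R, A.T @ r, trans='T')  # R^{-T} A^T r

op = LinearOperator((m, n), matvec=matvec, rmatvec=rmatvec, dtype=float)
y = lsqr(op, b, atol=1e-12, btol=1e-12)[0]
x = solve_triangular(R, y)                          # x = R^{-1} y

x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
```

Because A R^{-1} is close to orthogonal when the perturbation is small, LSQR converges in very few iterations on the preconditioned system.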
A Stable Primal-Dual Approach for Linear Programming
Cited by 2 (1 self)
This paper studies a primal-dual interior/exterior-point path-following approach for linear programming that is motivated by using an iterative solver rather than a direct solver for the search direction. We begin with the usual perturbed primal-dual optimality equations F_mu(x, y, z) = 0. Under nondegeneracy assumptions, this nonlinear system is well-posed, i.e., it has a nonsingular Jacobian at optimality and is not necessarily ill-conditioned as the iterates approach optimality. We use a simple preprocessing step to eliminate both the primal and dual feasibility equations. This results in a single bilinear equation that maintains the well-posedness property. We then apply both a direct solution technique as well as a preconditioned conjugate gradient method (PCG), within an inexact Newton framework, directly on the linearized equations. This is done without forming the usual normal equations, NEQ, or augmented system. Sparsity is maintained. The work of an iteration for the PCG approach consists almost entirely in the (approximate) solution of this well-posed linearized system. Therefore, improvements depend on efficient preconditioning.
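The perturbed optimality equations F_mu(x, y, z) = 0 can be written out for a standard-form LP (min c^T x subject to Ax = b, x >= 0). The sketch below uses a tiny hypothetical problem and checks that a point on the central path zeroes the residual.

```python
import numpy as np

def F_mu(A, b, c, x, y, z, mu):
    """Residual of the perturbed primal-dual optimality conditions."""
    r_dual = A.T @ y + z - c      # dual feasibility: A^T y + z = c
    r_prim = A @ x - b            # primal feasibility: A x = b
    r_cent = x * z - mu           # perturbed complementarity: X Z e = mu e
    return np.concatenate([r_dual, r_prim, r_cent])

# Tiny LP: min x1 + x2  s.t.  x1 + x2 = 1, x >= 0.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 1.0])

# By symmetry the central-path point is x = (0.5, 0.5) with
# 0.5 * z_i = mu, so y = 1 - 2*mu and z = c - A^T y.
mu = 0.1
x = np.array([0.5, 0.5])
y = np.array([1.0 - 2.0 * mu])
z = c - A.T @ y
residual = np.linalg.norm(F_mu(A, b, c, x, y, z, mu))
```

Driving mu toward zero while keeping F_mu small is what a path-following method does; the paper's contribution concerns how the linearized version of this system is solved.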
Algorithm 9xx, SuiteSparseQR: multifrontal multithreaded rank-revealing sparse QR factorization
Cited by 1 (0 self)
SuiteSparseQR is a sparse QR factorization package based on the multifrontal method. Within each frontal matrix, LAPACK and the multithreaded BLAS enable the method to obtain high performance on multicore architectures. Parallelism across different frontal matrices is handled with Intel’s Threading Building Blocks library. The symbolic analysis and ordering phase pre-eliminates singletons by permuting the input matrix A into the form [R11 R12; 0 A22], where R11 is upper triangular with diagonal entries above a given tolerance. Next, the fill-reducing ordering, column elimination tree, and frontal matrix structures are found without requiring the formation of the pattern of A^T A. Approximate rank detection is performed within each frontal matrix using Heath’s method. While Heath’s method is not always exact, it has the advantage of not requiring column pivoting and thus does not interfere with the fill-reducing ordering. For sufficiently large problems, the resulting sparse QR factorization obtains a substantial fraction of the theoretical peak performance of a multicore computer.
Computing sparse orthogonal factors in MATLAB, 1998
Cited by 1 (0 self)
In this report a new version of the multifrontal sparse QR factorization routine sqr, originally by Matstoms, for general sparse matrices is described and evaluated. In the previous version the orthogonal factor Q is discarded due to storage considerations. The new version provides Q and uses the multifrontal structure to store this orthogonal factor in a compact way. A new data class with overloaded operators is implemented in Matlab to provide easy usage of the compact orthogonal factors. This implicit way of storing the orthogonal factor also results in faster computation and application of Q and Q^T. Examples are given where the new version is up to four times faster when computing only R, and up to 1000 times faster when computing both Q and R, than the built-in function qr in Matlab. The sqr package is available at http://www.mai.liu.se/~milun/sls/. Key words: QR factorization, sparse problems, multifrontal method, orthogonal factorization.
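The compact, implicit storage of Q that this report describes (for the multifrontal case) can be illustrated with a plain dense Householder QR: the reflector vectors are kept instead of Q itself, and Q^T is applied on the fly. A minimal sketch, not the sqr package:

```python
import numpy as np

def householder_qr(A):
    """Householder QR that never forms Q explicitly: the reflector
    vectors vs are the compact representation of the orthogonal factor."""
    R = A.astype(float).copy()
    m, n = R.shape
    vs = []
    for k in range(n):
        x = R[k:, k].copy()
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0] if x[0] != 0 else 1.0)
        v /= np.linalg.norm(v)
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
        vs.append(v)
    return vs, np.triu(R[:n])

def apply_Qt(vs, b):
    """Apply Q^T to a vector using only the stored reflectors."""
    y = b.astype(float).copy()
    for k, v in enumerate(vs):
        y[k:] -= 2.0 * v * (v @ y[k:])
    return y

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 4))
b = rng.standard_normal(8)

vs, R = householder_qr(A)
x = np.linalg.solve(R, apply_Qt(vs, b)[:4])   # least squares via implicit Q
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
```

Applying Q or Q^T this way costs O(mn) per vector instead of the O(m^2) of an explicit Q, which is the source of the speedups the report measures (the multifrontal version organizes the reflectors per front, but the principle is the same).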
A Partially Fixed Linearization Approach for Submap-Parametrized Smoothing and Mapping, 2005
We present an extension of a smoothing approach to Simultaneous Localization and Mapping (SLAM). We have previously introduced Square Root SAM, a Smoothing and Mapping approach to SLAM based on Levenberg-Marquardt (LM) optimization. It iteratively finds the optimal nonlinear least-squares solution (ML), where one iteration comprises a linearization step, a matrix factorization, and a back-substitution step. We introduce a submap parametrization which enables a rigid transformation of parts relative to each other during the optimization process. This parameterization is used in a multifrontal QR factorization approach, in which we partially fix the linearization point for a subset of the unknowns corresponding to submaps. This greatly accelerates the optimization of an entire SAM graph yet yields . . .
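The linearize / factor / back-substitute cycle mentioned in this abstract can be sketched with a dense Gauss-Newton iteration on a toy range-only localization residual (all data below are hypothetical; Square Root SAM itself works on a sparse SAM graph):

```python
import numpy as np

def residual(p, landmarks, ranges):
    """Range-measurement residual for a 2-D position p."""
    return np.linalg.norm(landmarks - p, axis=1) - ranges

def jacobian(p, landmarks):
    """Jacobian of the range residuals with respect to p."""
    d = p - landmarks
    return d / np.linalg.norm(d, axis=1, keepdims=True)

landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
p_true = np.array([1.0, 1.0])
ranges = np.linalg.norm(landmarks - p_true, axis=1)  # noise-free toy data

p = np.array([2.0, 2.0])                   # initial linearization point
for _ in range(10):
    J = jacobian(p, landmarks)             # linearization step
    r = residual(p, landmarks, ranges)
    Q, R = np.linalg.qr(J)                 # matrix factorization step
    delta = np.linalg.solve(R, -Q.T @ r)   # back-substitution step
    p = p + delta
```

The submap idea in the paper amounts to freezing the linearization point for some blocks of unknowns across iterations, so only part of this cycle has to be redone.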
Library and Information Services, 1996
"... Enquiries about copyright, reproduction and requests for ..."
Solving Rank-Deficient Linear Least-Squares Problems Using Sparse QR
We address the problem of solving linear least-squares problems min ||Ax − b|| when A is a sparse m-by-n rank-deficient or highly ill-conditioned matrix. When A is rank deficient, there is an entire subspace of minimizers. When A is full rank but highly ill-conditioned, there is a single minimizer, but there are many x’s that give almost the same residual norm. Of these minimizers or almost-minimizers, the user usually prefers a solution with a small norm. When A has full rank the problem can be solved efficiently using a direct solver based on the QR factorization. When A is rank-deficient or highly ill-conditioned the factorization A = QR is not useful, because the computed R is ill-conditioned. This usually leads to a solution with a huge norm. The singular-value decomposition (SVD) and rank-revealing QR factorizations can produce minimal-norm solutions, but they are difficult to compute in the sparse case. Currently there are no sparse SVD algorithms, and sparse rank-revealing QR factorizations can lead to excessive fill, and only a few implementations are available.
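The failure mode described here is easy to reproduce on a small dense example. In the sketch below (hypothetical data; the `rcond` value is chosen so the pseudoinverse truncates the tiny singular value, mimicking a minimal-norm truncated-SVD solve), the plain QR solve blows up while the SVD-based solution stays small.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 20, 5
B = rng.standard_normal((m, n - 1))
# Make the last column nearly dependent on the first, so the trailing
# diagonal entry of R is tiny and the computed R is ill-conditioned.
A = np.hstack([B, B[:, :1] + 1e-13 * rng.standard_normal((m, 1))])
b = rng.standard_normal(m)

Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)          # huge norm: R is ill-conditioned
x_svd = np.linalg.pinv(A, rcond=1e-8) @ b   # minimal-norm truncated-SVD solution

norm_qr, norm_svd = np.linalg.norm(x_qr), np.linalg.norm(x_svd)
```

Both x's leave nearly the same residual, but the QR-based solution has a norm inflated by roughly the reciprocal of the tiny diagonal entry of R, which is exactly why the abstract argues plain QR is "not useful" in this regime.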