Results 1–10 of 12
A column approximate minimum degree ordering algorithm, 2000
Abstract

Cited by 251 (50 self)
Sparse Gaussian elimination with partial pivoting computes the factorization PAQ = LU of a sparse matrix A, where the row ordering P is selected during factorization using standard partial pivoting with row interchanges. The goal is to select a column preordering, Q, based solely on the nonzero pattern of A such that the factorization remains as sparse as possible, regardless of the subsequent choice of P. The choice of Q can have a dramatic impact on the number of nonzeros in L and U. One scheme for determining a good column ordering for A is to compute a symmetric ordering that reduces fill-in in the Cholesky factorization of AᵀA. This approach, which requires the sparsity structure of AᵀA to be computed, can be expensive both in
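The effect of the column preordering can be reproduced with SciPy's SuperLU wrapper, which accepts a column-ordering choice. A minimal sketch (the arrow matrix is an illustrative example, not taken from the paper):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# An "arrow" matrix with a dense first row and column: eliminating
# column 0 first fills the whole trailing block, so the column
# ordering matters a great deal here.
n = 6
M = np.eye(n)
M[0, :] = 1.0
M[:, 0] = 1.0
A = csc_matrix(M)

# SuperLU picks the column preordering Q up front and performs partial
# pivoting on the rows during factorization, i.e. it computes PAQ = LU.
lu_colamd = splu(A, permc_spec="COLAMD")
lu_natural = splu(A, permc_spec="NATURAL")

nnz_colamd = lu_colamd.L.nnz + lu_colamd.U.nnz
nnz_natural = lu_natural.L.nnz + lu_natural.U.nnz
print("nnz(L)+nnz(U), COLAMD :", nnz_colamd)
print("nnz(L)+nnz(U), natural:", nnz_natural)
```

With the natural ordering the dense first column triggers complete fill of the trailing block; COLAMD pushes the dense column toward the end of the ordering and keeps the factors sparse.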
Square Root SAM: Simultaneous localization and mapping via square root information smoothing
International Journal of Robotics Research, 2006
Abstract

Cited by 78 (25 self)
Solving the SLAM problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. We investigate smoothing approaches as a viable alternative to extended Kalman filter-based solutions to the problem. In particular, we look at approaches that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact, they can be used in either batch or incremental mode, are better equipped to deal with nonlinear process and measurement models, and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. In this paper we present the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. We present both simulation results and actual SLAM experiments in large-scale environments that underscore the potential of these methods as an alternative to EKF-based approaches.
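As a toy illustration of the square-root approach (a hypothetical 1-D pose chain, not one of the paper's experiments), the linearized problem min‖Ax − b‖² can be solved by factoring the measurement Jacobian A = QR and back-substituting; R plays the role of the square root information matrix:

```python
import numpy as np

# Toy 1-D "SLAM" least squares: three poses x0..x2, a prior on x0,
# odometry between consecutive poses, and one loop-closure style
# measurement between x0 and x2.  Each measurement contributes one row
# of the whitened Jacobian A and residual vector b.
A = np.array([
    [ 1.0,  0.0, 0.0],   # prior:    x0      ~ 0
    [-1.0,  1.0, 0.0],   # odometry: x1 - x0 ~ 1
    [ 0.0, -1.0, 1.0],   # odometry: x2 - x1 ~ 1
    [-1.0,  0.0, 1.0],   # loop:     x2 - x0 ~ 2.1
])
b = np.array([0.0, 1.0, 1.0, 2.1])

# Square-root information smoothing: factor A = QR and solve the
# triangular system R x = Q^T b.
Q, R = np.linalg.qr(A)
x = np.linalg.solve(R, Q.T @ b)
print("smoothed trajectory:", x)
```

The loop-closure residual is spread over the whole trajectory by the smoother, which is exactly the behavior a filter cannot reproduce after the fact.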
iSAM2: Incremental Smoothing and Mapping Using the Bayes Tree
Abstract

Cited by 20 (10 self)
We present a novel data structure, the Bayes tree, that provides an algorithmic foundation enabling a better understanding of existing graphical model inference algorithms and their connection to sparse matrix factorization methods. Similar to a clique tree, a Bayes tree encodes a factored probability density, but unlike the clique tree it is directed and maps more naturally to the square root information matrix of the simultaneous localization and mapping (SLAM) problem. In this paper, we highlight three insights provided by our new data structure. First, the Bayes tree provides a better understanding of the matrix factorization in terms of probability densities. Second, we show how the fairly abstract updates to a matrix factorization translate to a simple editing of the Bayes tree and its conditional densities. Third, we apply the Bayes tree to obtain a completely novel algorithm for sparse nonlinear incremental optimization, named iSAM2, which achieves improvements in efficiency through incremental variable reordering and fluid relinearization, eliminating the need for periodic batch steps. We analyze various properties of iSAM2 in detail, and show on a range of real and simulated datasets that our algorithm compares favorably with other recent mapping algorithms in both quality and efficiency.
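The correspondence between elimination and the tree structure can be sketched with a toy symbolic elimination (illustrative only: the variable names and factors are hypothetical, and a real Bayes tree stores conditional densities, not just variable sets):

```python
def eliminate(factors, order):
    """Symbolic variable elimination: for each eliminated (frontal)
    variable, return the separator, i.e. the variables its conditional
    depends on.  Cliques of a Bayes tree group these conditionals."""
    factors = [frozenset(f) for f in factors]
    result = []
    for v in order:
        touched = [f for f in factors if v in f]
        factors = [f for f in factors if v not in f]
        separator = frozenset().union(*touched) - {v} if touched else frozenset()
        result.append((v, separator))
        if separator:
            # The separator becomes a new induced factor on the
            # remaining variables (symbolic analogue of fill-in).
            factors.append(separator)
    return result

# Tiny pose chain with one loop closure: prior on x0, odometry factors
# x0-x1 and x1-x2, and a loop-closure factor x0-x2.
factors = [{"x0"}, {"x0", "x1"}, {"x1", "x2"}, {"x0", "x2"}]
for v, sep in eliminate(factors, ["x0", "x1", "x2"]):
    print(f"p({v} | {sorted(sep)})")
```

The resulting conditionals p(x0 | x1, x2), p(x1 | x2), p(x2) mirror the rows of a square root information matrix, which is the connection the Bayes tree makes explicit.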
Exploiting locality by nested dissection for square root smoothing and mapping
In Robotics: Science and Systems (RSS), 2006
Abstract

Cited by 16 (9 self)
The problem of creating a map given only erroneous odometry and feature measurements, while localizing one's own position in this environment, is known in the literature as the Simultaneous Localization and Mapping (SLAM) problem. In this paper we investigate how a nested dissection ordering scheme can improve the performance of a recently proposed Square Root Information Smoothing (SRIS) approach. As the SRIS performs smoothing rather than filtering, the SLAM problem becomes the Smoothing and Mapping (SAM) problem. The computational complexity of the SRIS solution is dominated by the cost of transforming a matrix of all measurements into square root form through factorization. The factorization of a fully dense measurement matrix has cubic complexity in the worst case. We show that the computational complexity of the factorization for typical measurement matrices occurring in the SAM problem can be bounded more tightly under reasonable assumptions. Our work is motivated both from a numerical/linear-algebra standpoint and by submaps used in EKF solutions to SLAM.
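The ordering idea can be sketched on a small grid (hypothetical code: recursive coordinate bisection on a k-by-k grid Laplacian, with dense Cholesky used only to count fill; the paper's setting is the SAM measurement matrix, not a grid):

```python
import numpy as np

def grid_laplacian(k):
    """Laplacian of a k-by-k grid graph, shifted to be positive definite."""
    n = k * k
    A = np.zeros((n, n))
    for i in range(k):
        for j in range(k):
            v = i * k + j
            A[v, v] = 5.0  # degree + 1 keeps the matrix SPD
            for di, dj in ((0, 1), (1, 0)):
                if i + di < k and j + dj < k:
                    w = (i + di) * k + (j + dj)
                    A[v, w] = A[w, v] = -1.0
    return A

def nested_dissection_order(rows, cols):
    """Recursively split the grid by a separator line; separator last."""
    if len(rows) <= 1 and len(cols) <= 1:
        return [(r, c) for r in rows for c in cols]
    if len(rows) >= len(cols):
        m = len(rows) // 2
        sep = [(rows[m], c) for c in cols]
        left = nested_dissection_order(rows[:m], cols)
        right = nested_dissection_order(rows[m + 1:], cols)
    else:
        m = len(cols) // 2
        sep = [(r, cols[m]) for r in rows]
        left = nested_dissection_order(rows, cols[:m])
        right = nested_dissection_order(rows, cols[m + 1:])
    return left + right + sep

def chol_fill(A):
    """Count nonzeros in the Cholesky factor (fill measure)."""
    return int(np.count_nonzero(np.abs(np.linalg.cholesky(A)) > 1e-12))

k = 8
A = grid_laplacian(k)
order = [i * k + j for (i, j) in
         nested_dissection_order(list(range(k)), list(range(k)))]
A_nd = A[np.ix_(order, order)]
print("fill, natural ordering       :", chol_fill(A))
print("fill, nested dissection order:", chol_fill(A_nd))
```

Ordering the separators last confines fill to the separator blocks, which is the structural property behind the tighter complexity bounds the paper derives.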
An Approximate Minimum Degree Column Ordering Algorithm, 1998
Abstract

Cited by 10 (2 self)
An approximate minimum degree column ordering algorithm (COLAMD) for preordering an unsymmetric sparse matrix A prior to...
Factor graph based incremental smoothing in inertial navigation systems
In International Conference on Information Fusion (FUSION), 2012
Abstract

Cited by 4 (0 self)
This paper describes a new approach for information fusion in inertial navigation systems. In contrast to the commonly used filtering techniques, the proposed approach is based on a nonlinear optimization for processing incoming measurements from the inertial measurement unit (IMU) and any other available sensors into a navigation solution. A factor graph formulation is introduced that allows multi-rate, asynchronous, and possibly delayed measurements to be incorporated in a natural way. This method, based on a recently developed incremental smoother, automatically determines the number of states to recompute at each step, effectively acting as an adaptive fixed-lag smoother. This yields an efficient and general framework for information fusion, providing nearly optimal state estimates. In particular, incoming IMU measurements can be processed in real time regardless of the size of the graph. The proposed method is demonstrated in a simulated environment using IMU, GPS, and stereo vision measurements and compared to the optimal solution obtained by a full nonlinear batch optimization and to a conventional extended Kalman filter (EKF). Index Terms: navigation, information fusion, factor graph, filtering
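The fixed-lag idea can be caricatured in a few lines (an illustrative linear scalar toy, not the paper's IMU formulation): keep a short window of states in information form and fold older states into a prior via a Schur complement:

```python
import numpy as np

lag = 3                       # number of active (non-marginalized) states

# Information form of the active window: Lambda @ x = eta.
Lambda = np.array([[1.0]])    # unit-information prior on the first state
eta = np.array([0.0])

for step in range(1, 8):
    # Odometry factor x_new - x_old ~ 1 with unit information:
    # append a new state, then add J^T J and J^T z to the window.
    n = Lambda.shape[0]
    J = np.zeros((1, n + 1))
    J[0, n - 1], J[0, n] = -1.0, 1.0
    Lambda = np.pad(Lambda, ((0, 1), (0, 1))) + J.T @ J
    eta = np.append(eta, 0.0) + J.T @ np.array([1.0])
    if Lambda.shape[0] > lag:
        # Marginalize the oldest state via a Schur complement; for this
        # linear-Gaussian toy the remaining estimates stay exact.
        a, B = Lambda[0, 0], Lambda[0:1, 1:]
        Lambda = Lambda[1:, 1:] - B.T @ B / a
        eta = eta[1:] - (B.T * (eta[0] / a)).ravel()

x = np.linalg.solve(Lambda, eta)
print("window estimates after 7 steps:", x)   # last three poses
```

In the linear-Gaussian case this marginalization is exact, so a short window suffices; the paper's incremental smoother instead chooses the window adaptively to cope with nonlinearity.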
Algebraic analysis of high-pass quantization
ACM TOG, 2005
Abstract

Cited by 3 (2 self)
This article presents an algebraic analysis of a mesh-compression technique called high-pass quantization [Sorkine et al. 2003]. In high-pass quantization, a rectangular matrix based on the mesh topological Laplacian is applied to the vectors of the Cartesian coordinates of a polygonal mesh. The resulting vectors, called δ-coordinates, are then quantized. The applied matrix is a function of the topology of the mesh and the indices of a small set of mesh vertices (anchors), but not of the location of the vertices. An approximation of the geometry can be reconstructed from the quantized δ-coordinates and the spatial locations of the anchors. In this article, we show how to algebraically bound the reconstruction error that this method generates. We show that the smallest singular value of the transformation matrix can be used to bound both the quantization error and the rounding error, which is due to the use of floating-point arithmetic. Furthermore, we prove a bound on this singular value. The bound is a function of the topology of the mesh and of the selected anchors. We also propose a new anchor-selection algorithm, inspired by this bound. We show experimentally that the method is effective and that the computed upper bound on the error is not too pessimistic.
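The kind of bound the article proves can be sanity-checked numerically on a toy 1-D "mesh" (a path graph with two anchors; purely illustrative): the reconstruction error is at most the δ-quantization error divided by the smallest singular value of the extended matrix:

```python
import numpy as np

# Path-graph Laplacian standing in for the mesh topological Laplacian.
n = 20
L = np.zeros((n, n))
for i in range(n - 1):
    L[i, i] += 1.0; L[i + 1, i + 1] += 1.0
    L[i, i + 1] -= 1.0; L[i + 1, i] -= 1.0

# Extended rectangular matrix: Laplacian rows plus one row per anchor.
anchors = [0, n - 1]
A = np.vstack([L] + [np.eye(n)[a:a + 1] for a in anchors])

x = np.linspace(0.0, 1.0, n)            # one coordinate vector ("geometry")
delta = A @ x                           # δ-coordinates (plus anchor values)
q = np.round(delta * 64.0) / 64.0       # coarse quantization
x_rec = np.linalg.lstsq(A, q, rcond=None)[0]

# ||x - x_rec|| <= ||delta - q|| / sigma_min(A), since x_rec = A⁺ q.
sigma_min = np.linalg.svd(A, compute_uv=False)[-1]
err = np.linalg.norm(x - x_rec)
bound = np.linalg.norm(delta - q) / sigma_min
print(f"error {err:.2e} <= bound {bound:.2e}")
```

The anchors remove the Laplacian's constant nullspace, which is what keeps the smallest singular value, and hence the bound, strictly positive.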
Computing sparse orthogonal factors in MATLAB, 1998
Abstract

Cited by 1 (0 self)
In this report a new version of the multifrontal sparse QR factorization routine sqr, originally by Matstoms, for general sparse matrices is described and evaluated. In the previous version the orthogonal factor Q is discarded due to storage considerations. The new version provides Q and uses the multifrontal structure to store this orthogonal factor in a compact way. A new data class with overloaded operators is implemented in MATLAB to provide easy usage of the compact orthogonal factors. This implicit way of storing the orthogonal factor also results in faster computation and application of Q and Qᵀ. Examples are given where the new version is up to four times faster when computing only R, and up to 1000 times faster when computing both Q and R, than the built-in function qr in MATLAB. The sqr package is available at URL: http://www.mai.liu.se/~milun/sls/. Key words: QR factorization, sparse problems, multifrontal method, orthogonal factorization. 1 Introduction. Let A ∈ ℝ...
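The compact-Q idea, never forming Q explicitly but only storing what is needed to apply it, can be sketched with dense Householder reflectors (a generic NumPy illustration; the sqr package itself stores Q in its multifrontal structure in MATLAB):

```python
import numpy as np

def householder_qr(A):
    """QR via Householder reflections.  Q is never formed explicitly:
    it is kept implicitly as a list of reflector vectors."""
    R = A.astype(float).copy()
    m, n = R.shape
    V = []
    for k in range(n):
        col = R[k:, k]
        v = col.copy()
        v[0] += (1.0 if col[0] >= 0 else -1.0) * np.linalg.norm(col)
        v /= np.linalg.norm(v)
        V.append(v)
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
    return V, np.triu(R[:n, :])

def apply_Qt(V, b):
    """Apply Q^T to a vector using only the stored reflectors."""
    b = b.astype(float).copy()
    for k, v in enumerate(V):
        b[k:] -= 2.0 * v * (v @ b[k:])
    return b

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
b = rng.standard_normal(6)
V, R = householder_qr(A)
x = np.linalg.solve(R, apply_Qt(V, b)[:4])   # least-squares solution
print(x)
```

Applying Q or Qᵀ from the reflectors costs O(mn) per vector, versus O(m²) with an explicit dense Q, which is the same trade-off the report exploits in the sparse multifrontal setting.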
High-Pass Quantization with Laplacian Coordinates
Abstract

Cited by 1 (1 self)
Any quantization introduces errors. An important question is how to suppress their visual effect. In this paper we present a new quantization method for the geometry of 3D meshes, which enables aggressive quantization without significant loss of visual quality. Conventionally, quantization is applied directly to the 3-space coordinates. This form of quantization introduces high-frequency errors into the model. Since high-frequency errors modify the appearance of the surface, they are highly noticeable, and commonly, this form of quantization must be done conservatively to preserve the precision of the coordinates. Our method first multiplies the coordinates by the Laplacian matrix of the mesh and quantizes the transformed coordinates, which we call Laplacian coordinates or “δ-coordinates”. We show that the high-frequency quantization errors in the δ-coordinates are transformed into low-frequency errors when the quantized δ-coordinates are transformed back into standard Cartesian coordinates. These low-frequency errors in the model are much less noticeable than the high-frequency errors. We call our strategy high-pass quantization, to emphasize the fact that it tends to concentrate the quantization error at the low-frequency end of the spectrum. To allow control over the shape and magnitude of the low-frequency quantization errors, we extend the Laplacian matrix by adding a number of spatial constraints. We analyze the singular values of the extended matrix and derive bounds on the quantization and rounding errors. We show that the small singular values, and hence the errors, are related in a specific way to the number and location of the spatial constraints.
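A 1-D analogue of the effect can be sketched as follows (hypothetical toy code; a path graph with two anchor constraints stands in for a mesh): quantize the coordinates directly, versus quantize the δ-coordinates and solve back, then compare the frequency content of the two errors:

```python
import numpy as np

n = 64
step = 1.0 / 8                          # coarse quantizer step

# Path-graph Laplacian plus two anchor rows (the extended matrix A).
L = np.zeros((n, n))
for i in range(n - 1):
    L[i, i] += 1.0; L[i + 1, i + 1] += 1.0
    L[i, i + 1] -= 1.0; L[i + 1, i] -= 1.0
A = np.vstack([L, np.eye(n)[[0]], np.eye(n)[[n - 1]]])

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(n)) * 0.1      # smooth "geometry"

x_direct = np.round(x / step) * step             # quantize coordinates
delta_q = np.round((A @ x) / step) * step        # quantize δ-coordinates
x_hp = np.linalg.lstsq(A, delta_q, rcond=None)[0]

def roughness(e):
    """Relative high-frequency content of an error vector."""
    return np.linalg.norm(L @ e) / np.linalg.norm(e)

print("direct quantization, roughness   :", roughness(x - x_direct))
print("high-pass quantization, roughness:", roughness(x - x_hp))
```

Direct quantization leaves a jagged, high-frequency error, while the high-pass scheme's error is concentrated in smooth low-frequency modes, which is the perceptual claim of the paper.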
Reordering and Fluid Relinearization for Online Mapping, 2010
Abstract
In this paper we present a novel data structure, the Bayes tree, which exploits the connections between graphical model inference and sparse linear algebra. The proposed data structure provides a new perspective on an entire class of simultaneous localization and mapping (SLAM) algorithms. Similar to a junction tree, a Bayes tree encodes a factored probability density, but unlike the junction tree it is directed and maps more naturally to the square root information matrix of the SLAM problem. This makes it eminently suited to encode the sparse nature of the problem, especially in a smoothing and mapping (SAM) context. The inherent sparsity of SAM has already been exploited in the literature to produce efficient solutions in both batch and online mapping. The graphical model perspective allows us to develop a novel incremental algorithm that seamlessly incorporates reordering and relinearization. This obviates the need for expensive periodic batch operations from previous approaches, which negatively affect the performance and detract from the intended online nature of the algorithm. The new method is evaluated using simulated and real-world datasets in both landmark and pose SLAM settings.