Results 1–8 of 8
BLENDENPIK: SUPERCHARGING LAPACK'S LEAST-SQUARES SOLVER
Abstract

Cited by 41 (4 self)
Several innovative random-sampling and random-mixing techniques for solving problems in linear algebra have been proposed in the last decade, but they have not yet made a significant impact on numerical linear algebra. We show that by using a high-quality implementation of one of these techniques we obtain a solver that performs extremely well by the traditional yardsticks of numerical linear algebra: it is significantly faster than high-performance implementations of existing state-of-the-art algorithms, and it is numerically backward stable. More specifically, we describe a least-squares solver for dense, highly overdetermined systems that achieves residuals similar to those of direct solvers based on QR factorization (LAPACK), outperforms LAPACK by large factors, and scales significantly better than any QR-based solver.
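The recipe the abstract describes (randomly mix and sample the rows, factor the small sample, and use the resulting R as a preconditioner for an iterative least-squares solver) can be sketched in a few lines of SciPy. This is a simplified stand-in, not Blendenpik itself: it uses a plain Gaussian sketch in place of the randomized trigonometric mixing plus row sampling, and the sketch size `4 * n` is an illustrative choice.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(0)
m, n = 2000, 50
A = rng.standard_normal((m, n))          # dense, highly overdetermined
b = rng.standard_normal(m)

# Sketch the rows (a Gaussian sketch stands in for random mixing+sampling),
# then take the R factor of the small sketched matrix.
s = 4 * n
S = rng.standard_normal((s, m)) / np.sqrt(s)
_, R = qr(S @ A, mode='economic')

# Right-precondition: solve min ||A R^{-1} y - b||, then x = R^{-1} y.
AR = LinearOperator(
    (m, n),
    matvec=lambda y: A @ solve_triangular(R, y),
    rmatvec=lambda z: solve_triangular(R, A.T @ z, trans='T'))
y = lsqr(AR, b, atol=1e-12, btol=1e-12)[0]
x = solve_triangular(R, y)

x_ref = np.linalg.lstsq(A, b, rcond=None)[0]   # direct reference solution
print(np.allclose(x, x_ref, atol=1e-6))
```

Because A·R⁻¹ is well conditioned, LSQR converges in a handful of iterations, which is the source of the speedup the abstract claims over purely direct QR solvers.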
ℓ1-Sparse Reconstruction of Sharp Point Set Surfaces
Abstract

Cited by 9 (3 self)
We introduce an ℓ1-sparse method for the reconstruction of a piecewise smooth point set surface. The technique is motivated by recent advancements in sparse signal reconstruction. The assumption underlying our work is that common objects, even geometrically complex ones, can typically be characterized by a rather small number of features. This, in turn, naturally lends itself to incorporating the powerful notion of sparsity into the model. The sparse reconstruction principle gives rise to a reconstructed point set surface that consists mainly of smooth modes, with the residual of the objective function strongly concentrated near sharp features. Our technique is capable of recovering the orientations and positions of highly noisy point sets. The global nature of the optimization yields a sparse solution and avoids local minima. Using an interior-point log-barrier solver with a customized preconditioning scheme, the solver for the corresponding convex optimization problem is competitive and the results are of high quality.
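The sparsity principle the abstract builds on is easiest to see in the generic signal-recovery setting: minimizing the ℓ1 norm subject to linear constraints tends to return solutions with few nonzeros. The sketch below is not the paper's surface solver; it swaps the interior-point log-barrier method for a plain linear-programming reformulation (basis pursuit), and the problem sizes are arbitrary illustration choices.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 40, 80
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[3, 17, 42]] = [1.5, -2.0, 0.7]          # sparse ground truth
b = A @ x_true

# Basis pursuit: min ||x||_1  s.t.  Ax = b.
# Split x = p - q with p, q >= 0 so the objective becomes linear.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=(0, None), method='highs')
x = res.x[:n] - res.x[n:]
print(np.count_nonzero(np.abs(x) > 1e-6))       # few nonzeros survive
```

The recovered x satisfies the constraints with an ℓ1 norm no larger than the true sparse signal's, which is why ℓ1 objectives concentrate the residual on the few genuine features instead of smearing it everywhere.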
ℓ1-Sparse Reconstruction of Sharp Point Set Surfaces
 ACM T. Graphic
Abstract

Cited by 7 (1 self)
We introduce an ℓ1-sparse method for the reconstruction of a piecewise smooth point set surface. The technique is motivated by recent advancements in sparse signal reconstruction. The assumption underlying our work is that common objects, even geometrically complex ones, can typically be characterized by a rather small number of features. This, in turn, naturally lends itself to incorporating the powerful notion of sparsity into the model. The sparse reconstruction principle gives rise to a reconstructed point set surface that consists mainly of smooth modes, with the residual of the objective function strongly concentrated near sharp features. Our technique is capable of recovering the orientations and positions of highly noisy point sets. The global nature of the optimization yields a sparse solution and avoids local minima. Using an interior-point log-barrier solver with a customized preconditioning scheme, the solver for the corresponding convex optimization problem is competitive and the results are of high quality.
Combinatorial preconditioners for scalar elliptic finite-element problems
, 2006
Abstract

Cited by 7 (3 self)
Abstract. We present a new preconditioner for linear systems arising from finite-element discretizations of scalar elliptic partial differential equations (PDEs). The solver splits the collection {K_e} of element matrices into a subset E(t) of matrices that are approximable by diagonally dominant matrices and a subset of matrices that are not. The approximable K_e's are approximated by diagonally dominant matrices L_e that are scaled and assembled to form a global diagonally dominant matrix L = Σ_{e∈E(t)} α_e L_e. A combinatorial graph algorithm approximates L by another diagonally dominant matrix M that is easier to factor. The sparsification M is scaled and added to the inapproximable elements; the sum γM + Σ_{e∉E(t)} K_e is factored and used as a preconditioner. When all the element matrices are approximable, which is often the case, the preconditioner is provably efficient. Experimental results show that on problems in which some of the K_e's are ill conditioned, our new preconditioner is more effective than an algebraic multigrid solver, an incomplete-factorization preconditioner, and a direct solver.
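A toy version of the split-and-assemble idea, with invented 1D element matrices purely for illustration: most elements are replaced by symmetric diagonally dominant surrogates, one "inapproximable" element is kept exactly, and the assembled sum preconditions CG. This is only a schematic sketch of the abstract's construction; it omits the combinatorial sparsification of L and the scaling coefficients entirely.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(2)
n = 40                                        # nodes of a 1D mesh
A = 0.1 * np.eye(n)                           # small shift keeps A, M SPD
M = 0.1 * np.eye(n)
Ke0 = np.array([[1.0, -1.0], [-1.0, 1.0]])    # 1D Laplacian element

for e in range(n - 1):
    s = np.s_[e:e + 2]
    if e == 20:
        # "Inapproximable" element: added to the preconditioner exactly.
        Ke = 1e4 * Ke0
        M[s, s] += Ke
    else:
        # Approximable element: replaced by a diagonally dominant
        # surrogate (here simply its absolute row sums on the diagonal).
        Ke = Ke0 + 0.01 * np.diag(rng.random(2))
        M[s, s] += np.diag(np.abs(Ke).sum(axis=1))
    A[s, s] += Ke

b = rng.standard_normal(n)
prec = cho_factor(M)                          # factor the assembled M once
x, info = cg(A, b, M=LinearOperator((n, n), lambda r: cho_solve(prec, r)))
print(info)                                   # 0 on convergence
```

The point of the split is visible here: the surrogate elements make M cheap to factor, while carrying the one ill-conditioned element over exactly keeps the preconditioner faithful where approximation would hurt.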
SOLVING HERMITIAN POSITIVE DEFINITE SYSTEMS USING INDEFINITE INCOMPLETE FACTORIZATIONS
Abstract
Abstract. Incomplete LDL∗ factorizations sometimes produce an indefinite preconditioner even when the input matrix is Hermitian positive definite. The two most popular iterative solvers for symmetric systems, CG and MINRES, cannot use such preconditioners; they require a positive definite preconditioner. One approach that has been extensively studied to address this problem is to force positive definiteness by modifying the factorization process. We explore a different approach: use the incomplete factorization with a Krylov method that can accept an indefinite preconditioner. The conventional wisdom has been that long-recurrence methods (like GMRES), or alternatively non-optimal short-recurrence methods (like symmetric QMR and BiCGStab), must be used if the preconditioner is indefinite. We explore the performance of these methods when used with an incomplete factorization, and also explore a less known Krylov method called PCGODIR that is both optimal and uses a short recurrence, and can use an indefinite preconditioner. Furthermore, we propose another optimal short-recurrence method called IPMINRES that can use an indefinite preconditioner, and a variant of PCGODIR, which we call IPCG, that is more numerically stable and usually requires fewer iterations.
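A minimal illustration of the setting, assuming SciPy: an SPD 2D Poisson matrix gets a drop-tolerance incomplete LU factorization, which carries no positive-definiteness guarantee, and that factorization preconditions GMRES, which accepts an indefinite preconditioner. The abstract's PCGODIR, IPMINRES, and IPCG are not available in SciPy, so the "conventional wisdom" choice, GMRES, stands in here.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres, spilu

# SPD model problem: 2D Poisson on a 30x30 grid.
n = 30
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsc()

# Incomplete LU with a drop tolerance; nothing forces the incomplete
# factors to define a positive definite operator, which is why plain
# preconditioned CG or MINRES could not safely use them.
ilu = spilu(A, drop_tol=1e-3)
M = LinearOperator(A.shape, ilu.solve)

b = np.ones(A.shape[0])
x, info = gmres(A, b, M=M)
print(info)        # 0 on convergence
```

The trade-off the abstract studies is visible in this pairing: GMRES tolerates any nonsingular preconditioner but pays with long recurrences and growing storage, which is what motivates the optimal short-recurrence alternatives the paper proposes.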
NEW KRYLOV-SUBSPACE SOLVERS FOR HERMITIAN POSITIVE DEFINITE MATRICES WITH INDEFINITE PRECONDITIONERS
Solving Rank-Deficient Linear Least-Squares Problems using Sparse QR
Abstract
We address the problem of solving linear least-squares problems min ‖Ax − b‖ when A is a sparse m-by-n rank-deficient or highly ill-conditioned matrix. When A is rank deficient, there is an entire subspace of minimizers. When A is full rank but highly ill-conditioned, there is a single minimizer, but there are many x's that give almost the same residual norm. Of these minimizers or almost-minimizers, the user usually prefers a solution with a small norm. When A has full rank, the problem can be solved efficiently using a direct solver based on the QR factorization. When A is rank-deficient or highly ill-conditioned, the factorization A = QR is not useful because the computed R is ill-conditioned; this usually leads to a solution with a huge norm. The singular-value decomposition (SVD) and rank-revealing QR factorizations can produce minimal-norm solutions, but they are difficult to compute in the sparse case. Currently there are no sparse SVD algorithms, and sparse rank-revealing QR factorizations can lead to excessive fill; only a few implementations are available.
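The dense analogue of the problem is easy to demonstrate with NumPy (a sketch of the failure mode, not of the sparse QR method the abstract is about): on an exactly rank-deficient A, the SVD-based `lstsq` returns the minimum-norm minimizer, and adding any null-space vector leaves the residual unchanged while inflating the norm.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, r = 100, 20, 15
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank 15
b = rng.standard_normal(m)

# SVD-based solve: tiny singular values are truncated, giving the
# minimum-norm solution out of the whole subspace of minimizers.
x_min, _, rank, _ = np.linalg.lstsq(A, b, rcond=None)

# Any null-space direction keeps the residual but inflates the norm --
# this is the "huge norm" failure mode of a plain QR-based solve.
null = np.linalg.svd(A)[2][-1]          # right singular vector, sigma ~ 0
x_big = x_min + 100.0 * null
print(rank)                                            # 15
print(np.linalg.norm(A @ x_big - b) - np.linalg.norm(A @ x_min - b))
print(np.linalg.norm(x_big) > np.linalg.norm(x_min))   # True
```

Because x_min lies in the row space of A and the null-space direction is orthogonal to it, ‖x_big‖² = ‖x_min‖² + 100², which is exactly why users prefer the minimum-norm solution among the many near-equal-residual candidates.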