Results 11–20 of 48
A.: Parallel Algorithms for the Singular Value Decomposition. In: Handbook on Parallel Computing and Statistics. Volume 184 of Statistics: A Series of Textbooks and Monographs
, 2006
Analysis of the finite precision BiConjugate Gradient algorithm for nonsymmetric linear systems
 Math. Comp.
, 1995
Abstract

Cited by 10 (4 self)
In this paper we analyze the biconjugate gradient algorithm in finite precision arithmetic and suggest reasons for its often-observed robustness. By using a tridiagonal structure, which is preserved by the finite precision biconjugate gradient iteration, we are able to bound its residual norm by the minimum polynomial of a perturbed matrix (i.e., the residual norm of exact GMRES applied to a perturbed matrix) multiplied by an amplification factor. This shows that the occurrence of near-breakdowns or loss of biorthogonality does not necessarily deter convergence of the residuals, provided that the amplification factor remains bounded. Numerical examples are given to gain insight into these bounds.
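For orientation, the two-sided recursion the abstract analyzes can be sketched in plain NumPy. This is a minimal unpreconditioned sketch under generic assumptions; the function and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def bicg(A, b, tol=1e-10, max_iter=200):
    """Minimal unpreconditioned BiCG sketch for a nonsymmetric system Ax = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    rt = r.copy()                    # shadow residual driving the A^T recursion
    p, pt = r.copy(), rt.copy()
    rho = rt @ r
    for _ in range(max_iter):
        q, qt = A @ p, A.T @ pt
        alpha = rho / (pt @ q)       # breakdown if pt @ q vanishes
        x += alpha * p
        r -= alpha * q
        rt -= alpha * qt
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rho_new = rt @ r             # near-breakdown if this bilinear form vanishes
        beta = rho_new / rho
        rho = rho_new
        p = r + beta * p
        pt = rt + beta * pt
    return x
```

The two quantities flagged in comments are exactly where near-breakdowns can occur; the abstract's point is that residual convergence is governed by a bounded amplification factor rather than by exact biorthogonality.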
Large Scale Variational Inference and Experimental Design for Sparse Generalized Linear Models
, 2008
Abstract

Cited by 9 (5 self)
Sparsity is a fundamental concept of modern statistics, and often the only general principle available at the moment to address novel learning applications with many more variables than observations. While much progress has been made recently in the theoretical understanding and algorithmics of sparse point estimation, higher-order problems such as covariance estimation or optimal data acquisition are seldom addressed for sparsity-favouring models, and there are virtually no algorithms for large-scale applications of these. We provide novel approximate Bayesian inference algorithms for sparse generalized linear models that can be used with hundreds of thousands of variables and run orders of magnitude faster than previous algorithms in domains where either applies. By analyzing our methods and establishing some novel convexity results, we settle a long-standing open question about variational Bayesian inference for continuous-variable models: the Gaussian lower bound relaxation, which has been used previously for a range of models, is proved to be a convex optimization problem if and only if the posterior mode is found by convex programming. Our algorithms reduce to the same computational primitives as commonly used sparse estimation methods, but also require Gaussian marginal variance estimation. We show how the Lanczos algorithm from numerical mathematics can be employed to compute the latter. We are interested here in Bayesian experimental design (which is mainly driven by efficient approximate inference), a powerful framework for optimizing measurement architectures of complex signals, such as natural images.
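The Lanczos step mentioned at the end of the abstract is the standard symmetric tridiagonalization; a minimal sketch without reorthogonalization, assuming a dense symmetric `A` (the paper's use of the tridiagonal factors for variance estimation is omitted here):

```python
import numpy as np

def lanczos(A, v, k):
    """k steps of symmetric Lanczos: A V ~ V T with T tridiagonal.
    No reorthogonalization; a sketch, not a robust implementation."""
    n = len(v)
    V = np.zeros((n, k))
    alpha = np.zeros(k)          # diagonal of T
    beta = np.zeros(k - 1)       # off-diagonal of T
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    return V, alpha, beta
```

With k much smaller than the number of variables, each step costs one matrix-vector product, which is what makes the approach feasible at the scales the abstract describes.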
Multiprocessor Sparse SVD Algorithms and Applications
, 1991
Abstract

Cited by 9 (3 self)
This memory is statically allocated, whereas on the Alliant FX/80 it is dynamically allocated as needed. On the Cray-2S/4-128, the vector z would be both retrieved from and written to core memory. However, on the Alliant FX/80, z may be fetched and held in the 512-kilobyte cache. Since memory accesses from the cache (fast local memory) can be almost twice as fast as those from the larger globally shared memory, we achieve an overall higher computational rate for multiplication by A.
MINRES-QLP: A Krylov subspace method for indefinite or singular symmetric systems
 SIAM J. Sci. Comput., to appear
, 2011
Abstract

Cited by 7 (2 self)
CG, SYMMLQ, and MINRES are Krylov subspace methods for solving symmetric systems of linear equations. When these methods are applied to an incompatible system (that is, a singular symmetric least-squares problem), CG could break down and SYMMLQ's solution could explode, while MINRES would give a least-squares solution but not necessarily the minimum-length (pseudoinverse) solution. This understanding motivates us to design a MINRES-like algorithm to compute minimum-length solutions to singular symmetric systems. MINRES uses QR factors of the tridiagonal matrix from the Lanczos process (where R is upper-tridiagonal). MINRES-QLP uses a QLP decomposition (where rotations on the right reduce R to lower-tridiagonal form). On ill-conditioned systems (singular or not), MINRES-QLP can give more accurate solutions than MINRES. We derive preconditioned MINRES-QLP, new stopping rules, and better estimates of the solution and residual norms, the matrix norm, and the condition number.
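The distinction the abstract draws between "a least-squares solution" and "the minimum-length (pseudoinverse) solution" is easy to see in a toy example (the matrices here are illustrative, not from the paper):

```python
import numpy as np

# A singular symmetric system with an incompatible right-hand side.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])        # null space spanned by e3
b = np.array([2.0, 1.0, 1.0])          # e3-component makes Ax = b incompatible

x_min = np.linalg.pinv(A) @ b          # minimum-length least-squares solution
x_alt = x_min + np.array([0.0, 0.0, 5.0])   # add any null-space component

# Both are least-squares solutions: they attain the same residual norm...
r_min = np.linalg.norm(A @ x_min - b)
r_alt = np.linalg.norm(A @ x_alt - b)
# ...but only x_min has minimum length.
```

MINRES may return something like `x_alt`; MINRES-QLP targets `x_min`.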
Conjugate Gradient Algorithms with Reduced Synchronization Overhead on Distributed Memory Multiprocessors
, 1999
Abstract

Cited by 7 (0 self)
The standard formulation of the conjugate gradient algorithm involves two inner product computations. The results of these two inner products are needed to update the search direction and the computed solution. Since these inner products are mutually interdependent, in a distributed memory parallel environment their computation and subsequent distribution requires two separate communication and synchronization phases. In this paper, we present three related mathematically equivalent rearrangements of the standard algorithm that reduce the number of communication phases. We present empirical evidence that two of these rearrangements are numerically stable. This claim is further substantiated by a proof that one of the empirically stable rearrangements arises naturally in the symmetric Lanczos method for linear systems, which is equivalent to the conjugate gradient method.
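The two coupled inner products the abstract refers to are visible in the textbook CG loop; a minimal sketch (illustrative, not one of the paper's rearrangements):

```python
import numpy as np

def cg(A, b, tol=1e-10, max_iter=200):
    """Textbook CG for SPD A. The two inner products marked below are the
    per-iteration synchronization points discussed in the paper."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rho = r @ r                  # inner product 1: drives alpha
    for _ in range(max_iter):
        q = A @ p
        alpha = rho / (p @ q)    # inner product 2: depends on p, hence on rho
        x += alpha * p
        r -= alpha * q
        rho_new = r @ r          # inner product 1 of the next iteration
        if np.sqrt(rho_new) < tol * np.linalg.norm(b):
            break
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x
```

Because `alpha` needs `rho` and the direction update needs `rho_new`, a naive distributed-memory implementation pays two separate global reductions per iteration; the rearrangements studied in the paper merge these communication phases.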
Arnoldi versus Nonsymmetric Lanczos Algorithms for Solving Nonsymmetric Matrix Eigenvalue Problems
 BIT
, 1996
Abstract

Cited by 7 (1 self)
We obtain several results which may be useful in determining the convergence behavior of eigenvalue algorithms based upon Arnoldi and nonsymmetric Lanczos recursions. We derive a relationship between nonsymmetric Lanczos eigenvalue procedures and Arnoldi eigenvalue procedures. We demonstrate that the Arnoldi recursions preserve a property which characterizes normal matrices, and that if we could determine the appropriate starting vectors, we could mimic the nonsymmetric Lanczos eigenvalue convergence on a general diagonalizable matrix by its convergence on related normal matrices. Using a unitary equivalence for each of these Krylov subspace methods, we define sets of test problems where we can easily vary certain spectral properties of the matrices. We use these and other test problems to examine the behavior of an Arnoldi and of a nonsymmetric Lanczos procedure.
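For reference, the Arnoldi recursion compared in the paper builds an orthonormal Krylov basis together with an upper-Hessenberg projection of A; a minimal modified Gram-Schmidt sketch (illustrative names):

```python
import numpy as np

def arnoldi(A, v, k):
    """k Arnoldi steps: returns V with orthonormal columns and the
    (k+1) x k upper-Hessenberg H satisfying A V[:, :k] = V H."""
    n = len(v)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt sweep
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:          # invariant subspace found: stop early
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```

Eigenvalues of the leading square part of H (the Ritz values) are the approximations whose convergence behavior the paper compares against the nonsymmetric Lanczos procedure.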
Accurate Conjugate Gradient Methods for Families of Shifted Systems
, 2003
Abstract

Cited by 5 (0 self)
We present an efficient and accurate variant of the conjugate gradient method for solving families of shifted systems. In particular, we are interested in shifted systems that occur in Tikhonov regularization for inverse problems, since these problems can be sensitive to roundoff errors. The success of our method in achieving accurate approximations is supported by theoretical arguments as well as several numerical experiments, and we relate it to other implementations proposed in the literature.
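The structure such shifted-system solvers exploit is that Krylov subspaces are invariant under shifts, K_k(A, b) = K_k(A + sigma*I, b), so one basis can serve every shifted system. A small numerical check of this invariance (toy matrices; names illustrative):

```python
import numpy as np

def krylov_basis(A, b, k):
    """Columns spanning K_k(A, b) = span{b, Ab, ..., A^{k-1} b},
    normalized column-by-column so the numerical rank test is well scaled."""
    K = np.empty((len(b), k))
    K[:, 0] = b / np.linalg.norm(b)
    for j in range(1, k):
        w = A @ K[:, j - 1]
        K[:, j] = w / np.linalg.norm(w)
    return K

rng = np.random.default_rng(0)
n, k, sigma = 8, 4, 2.5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)              # SPD test matrix
b = rng.standard_normal(n)

K1 = krylov_basis(A, b, k)
K2 = krylov_basis(A + sigma * np.eye(n), b, k)

# Stacking the two bases adds no rank: the subspaces coincide.
rank_joint = np.linalg.matrix_rank(np.hstack([K1, K2]), tol=1e-8)
```

The accuracy issue the abstract targets is a separate matter: although the subspaces coincide exactly, naive recurrences for the shifted iterates can lose accuracy in floating point, which is what the proposed variant addresses.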