Results 11–20 of 33
Multiprocessor Sparse SVD Algorithms and Applications
, 1991
Cited by 8 (3 self)
this memory is statically allocated, whereas on the Alliant FX/80 it is dynamically allocated as needed. On the Cray 2S/4-128, the vector z would be both retrieved from and written to core memory. However, on the Alliant FX/80, z may be fetched and held in the 512-kilobyte cache. Since memory accesses from the cache (fast local memory) can be almost twice as fast as those from the larger globally shared memory, we achieve an overall higher computational rate for multiplication by A
Computation of the Singular Subspace Associated With the Smallest Singular Values of Large Matrices
, 1993
Cited by 4 (2 self)
We compare the block-Lanczos and the Davidson methods for computing a basis of a singular subspace associated with the smallest singular values of large matrices. We introduce a simple modification of the preconditioning step of Davidson's method which appears to be efficient on a range of large sparse matrices.
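As a hedged illustration of what these methods compute (this is not the block-Lanczos or Davidson algorithm itself), the sketch below recovers the smallest singular pair of a small dense matrix by plain inverse iteration on AᵀA, the simplest iteration with the same fixed point; the matrix and its spectrum are invented for the example.

```python
import numpy as np

# Build a test matrix with known singular values (5, 4, 3, 1.5, 1) so the
# smallest singular value is known to be 1.0. All values are illustrative.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((8, 5)))
V, _ = np.linalg.qr(rng.standard_normal((5, 5)))
A = U @ np.diag([5.0, 4.0, 3.0, 1.5, 1.0]) @ V.T

M = A.T @ A
v = rng.standard_normal(5)
for _ in range(100):
    v = np.linalg.solve(M, v)      # inverse iteration step toward the
    v /= np.linalg.norm(v)         # eigenvector of the smallest eigenvalue

sigma_min = np.linalg.norm(A @ v)  # estimate of the smallest singular value
```

A practical large-sparse code would replace the dense solve by a preconditioned projection method, which is exactly where the modified Davidson preconditioning step of the abstract enters.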
Approximating Dominant Singular Triplets of Large Sparse Matrices via Modified Moments
 Numer. Algorithms
, 1996
Cited by 4 (2 self)
this paper reflect the use of 2-cyclic iteration matrices as defined in Equation (2). The three main steps that constitute the CSI-MSVD algorithm are: 1. calculation of the CSI iterate using Equations (26) and (27); 2. calculation of the new moments for the current iterate; and 3. updating the bidiagonal matrix and approximating the eigenvalues of the 2-cyclic iteration matrix through the QR iteration. Figure 2 shows the dependencies involved in the steps of the above outlined procedure. The pipelined nature of the computation indicates that Steps 1, 2, and 3 described could be carried out concurrently. For example, the computation of the anti-diagonal elements φ13, φ22, φ15, φ24, φ33 (shown in the box labeled PHI in Figure 2) could be overlapped with the computation of the iterates
Nonequivalence deflation for the solution of matrix latent value problems
, 1995
Cited by 4 (3 self)
The following nonlinear latent value problem is studied: F(λ)x = 0, where F(λ) is an n × n analytic nondefective matrix function of the scalar λ. The latent pair (λ, x) has been previously found by applying Newton’s method to a certain equation. The deflation technique is required for finding another latent pair starting from a computed latent pair. Several deflation strategies are examined, and the nonequivalence deflation technique is developed. It is demonstrated, by analysis and numerical experience, to be a reliable and efficient strategy for finding a few latent roots in a given region.
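To make the setting concrete, here is a minimal sketch (a hypothetical toy, not the paper's method or its deflation scheme): Newton's method applied to f(λ) = det F(λ) for the linear pencil F(λ) = A − λI, whose latent pairs are just eigenpairs of A.

```python
import numpy as np

# Toy latent value problem: F(lam) = A - lam*I, so the latent roots are
# simply the eigenvalues 1, 3, 7. All values here are illustrative.
A = np.diag([1.0, 3.0, 7.0])

def F(lam):
    return A - lam * np.eye(3)

def f(lam):
    return np.linalg.det(F(lam))

lam, h = 2.5, 1e-6
for _ in range(50):
    lam -= f(lam) / ((f(lam + h) - f(lam)) / h)  # Newton step, forward-difference slope

x = np.linalg.svd(F(lam))[2][-1]                 # null vector of F(lam): the latent vector
```

Starting from λ = 2.5 this converges to the latent root 3; restarting nearby would simply find the same root again, which is the failure mode that a deflation strategy such as the paper's nonequivalence deflation is designed to avoid.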
Block Krylov-Schur method for large symmetric eigenvalue problems, tech. rep.
, 2004
Cited by 3 (1 self)
Abstract. Stewart’s recent Krylov-Schur algorithm offers two advantages over Sorensen’s implicitly restarted Arnoldi (IRA) algorithm. The first is ease of deflation of converged Ritz vectors; the second is the avoidance of the potential forward instability of the QR algorithm. In this paper we develop a block version of the Krylov-Schur algorithm for symmetric eigenproblems. Details of this block algorithm are discussed, including how to handle the rank-deficient cases and how to use different block sizes. Numerical results on the efficiency of the block Krylov-Schur method are reported.
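For readers who want the starting point in code, here is a hedged sketch of the plain symmetric Lanczos process that Krylov-Schur-type methods build on (single vector, full reorthogonalization, no restarting, deflation, or blocking; the test matrix is invented for the example).

```python
import numpy as np

# Symmetric test matrix with a known, well-separated top eigenvalue of 60.
n, m = 50, 20
d = np.arange(1.0, n + 1.0)
d[-1] = 60.0
A = np.diag(d)

rng = np.random.default_rng(1)
Q = np.zeros((n, m + 1))
alpha, beta = np.zeros(m), np.zeros(m)
q0 = rng.standard_normal(n)
Q[:, 0] = q0 / np.linalg.norm(q0)
for j in range(m):
    w = A @ Q[:, j] - (beta[j - 1] * Q[:, j - 1] if j > 0 else 0.0)
    alpha[j] = Q[:, j] @ w
    w -= alpha[j] * Q[:, j]
    w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
    beta[j] = np.linalg.norm(w)
    Q[:, j + 1] = w / beta[j]

# Ritz values of the small tridiagonal T approximate extremal eigenvalues of A.
T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
ritz = np.linalg.eigvalsh(T)
```

The block version replaces the single vector q_j by a block of vectors, which is where the rank-deficiency handling discussed in the abstract becomes necessary.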
A WEIGHTED-GCV METHOD FOR LANCZOS-HYBRID REGULARIZATION
, 2008
Cited by 3 (1 self)
Lanczos-hybrid regularization methods have been proposed as effective approaches for solving large-scale ill-posed inverse problems. Lanczos methods restrict the solution to lie in a Krylov subspace, but they are hindered by semi-convergence behavior, in that the quality of the solution first increases and then decreases. Hybrid methods apply a standard regularization technique, such as Tikhonov regularization, to the projected problem at each iteration. Thus, regularization in hybrid methods is achieved both by Krylov filtering and by appropriate choice of a regularization parameter at each iteration. In this paper we describe a weighted generalized cross validation (WGCV) method for choosing the parameter. Using this method we demonstrate that the semi-convergence behavior of the Lanczos method can be overcome, making the solution less sensitive to the number of iterations.
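The inner step of a hybrid method can be sketched in a few lines. The following is an illustrative toy (invented sizes, a fixed regularization parameter rather than a WGCV-chosen one): Tikhonov regularization applied to a small projected least-squares problem.

```python
import numpy as np

# Stand-ins for the projected problem: B plays the role of the small
# projected matrix, beta the projected right-hand side. Both are invented.
rng = np.random.default_rng(2)
B = rng.standard_normal((10, 6))
beta = rng.standard_normal(10)
lam = 0.1                                     # fixed parameter for the sketch

# min ||B y - beta||^2 + lam^2 ||y||^2 solved as a stacked least-squares problem
aug_M = np.vstack([B, lam * np.eye(6)])
aug_b = np.concatenate([beta, np.zeros(6)])
y = np.linalg.lstsq(aug_M, aug_b, rcond=None)[0]

# The same solution from the normal equations (B^T B + lam^2 I) y = B^T beta.
y_ne = np.linalg.solve(B.T @ B + lam**2 * np.eye(6), B.T @ beta)
```

In an actual Lanczos-hybrid solver, B would be the bidiagonal matrix from Lanczos bidiagonalization and lam would be re-chosen at every iteration by a rule such as WGCV; here both are stand-ins.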
Testing the nearest Kronecker product preconditioner on Markov chains and stochastic automata networks
 Informs Journal on Computing
Cited by 1 (0 self)
doi 10.1287/ijoc.1030.0041 © 2004 INFORMS. This paper is the experimental follow-up to Langville and Stewart (2002), where the theoretical background for the nearest Kronecker product (NKP) preconditioner was developed. Here we test the NKP preconditioner on both Markov chains (MCs) and stochastic automata networks (SANs). We conclude that the NKP preconditioner is not appropriate for general MCs, but is very effective for an MC stored as a SAN.
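A hedged sketch of the construction behind an NKP preconditioner, following the well-known rearrangement idea of Van Loan and Pitsianis (the sizes and the helper name `nkp` are invented for the example): the Kronecker factors minimizing ‖A − B ⊗ C‖_F come from the dominant singular triplet of a rearranged matrix R(A).

```python
import numpy as np

def nkp(A, bshape, cshape):
    # Nearest Kronecker product: each (m2 x n2) block of A becomes one row
    # of R(A); the rank-1 SVD of R(A) yields the optimal factors B and C.
    m1, n1 = bshape
    m2, n2 = cshape
    R = np.array([A[i*m2:(i+1)*m2, j*n2:(j+1)*n2].ravel(order='F')
                  for j in range(n1) for i in range(m1)])
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    B = np.sqrt(s[0]) * U[:, 0].reshape((m1, n1), order='F')
    C = np.sqrt(s[0]) * Vt[0].reshape((m2, n2), order='F')
    return B, C

rng = np.random.default_rng(3)
B0 = rng.standard_normal((3, 3))
C0 = rng.standard_normal((2, 2))
A = np.kron(B0, C0)            # exact Kronecker product, so R(A) has rank 1
B, C = nkp(A, (3, 3), (2, 2))
```

Here A is an exact Kronecker product, so the factors are recovered exactly; for a general MC or SAN generator the same construction gives the nearest Kronecker product in the Frobenius norm, which is then used as the preconditioner.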
Estimating the Largest Singular Values/Vectors of Large Sparse Matrices via Modified Moments
, 1995
Cited by 1 (1 self)
This dissertation considers algorithms for determining a few of the largest singular values and corresponding vectors of large sparse matrices by solving equivalent eigenvalue problems. The procedure is based on a method by Golub and Kent for estimating eigenvalues of equivalent eigensystems using modified moments. The asynchronicity in the computations of moments and eigenvalues makes this method attractive for parallel implementations on a network of workstations. However, one potential drawback to this method is that there is no obvious relationship between the modified moments and the eigenvectors. The lack of eigenvector approximations makes deflation schemes difficult, and no robust implementation of the Golub/Kent scheme is currently used in practical applications. Methods to approximate both eigenvalues and eigenvectors using the theory of modified moments in conjunction with the Chebyshev semi-iterative method are described in this dissertation. Deflation issues and implicit ...
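As a hedged, much-simplified illustration of the moment idea (this is ordinary power iteration, not the Golub/Kent modified-moment scheme; the test matrix is constructed for the example): ratios of the moments vᵀ(AᵀA)ᵏv converge to the square of the largest singular value.

```python
import numpy as np

# Test matrix with known singular values 1, ..., 5, so sigma_max = 5.
rng = np.random.default_rng(5)
U, _ = np.linalg.qr(rng.standard_normal((40, 25)))
V, _ = np.linalg.qr(rng.standard_normal((25, 25)))
s = np.linspace(1.0, 5.0, 25)
A = U @ np.diag(s) @ V.T

M = A.T @ A
w = rng.standard_normal(25)
for _ in range(300):
    w = M @ w                     # one moment step: w -> (A^T A) w
    w /= np.linalg.norm(w)        # normalize to avoid overflow

sigma_est = np.sqrt(w @ M @ w)    # Rayleigh-quotient estimate of sigma_max
```

The modified-moment machinery of the dissertation refines this crude estimate so that many eigenvalues can be extracted at once and the moment and eigenvalue computations can proceed asynchronously.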
Commentary on Selected Papers by Gene Golub on Matrix Factorizations and Applications (ISSN 1749-9097)
, 2006
"... One of the fundamental tenets of numerical linear algebra is to exploit matrix factorizations. Doing so has numerous benefits, ranging from allowing clearer analysis and deeper understanding to simplifying the efficient implementation of algorithms. Textbooks in numerical analysis and matrix analysi ..."
One of the fundamental tenets of numerical linear algebra is to exploit matrix factorizations. Doing so has numerous benefits, ranging from allowing clearer analysis and deeper understanding to simplifying the efficient implementation of algorithms. Textbooks in numerical analysis and matrix analysis nowadays maximize the use of matrix factorizations, but this was not so in the first half of the 20th century. Golub has done as much as anyone to promulgate the benefits of matrix factorization, particularly the QR factorization and the singular value decomposition, and especially through his book Matrix Computations with Van Loan [28]. The five papers in this section illustrate several different facets of the matrix factorization paradigm.
On direct methods for solving Poisson's equations, by Buzbee, Golub, and Nielson [9]
Cyclic reduction is a recurring topic in numerical analysis. In the context of solving a tridiagonal linear system of order 2^n − 1, the idea is to eliminate the odd-numbered unknowns, thus halving the size of the system, and to continue this procedure recursively until a single equation remains. One unknown can now be solved for and the rest are
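The cyclic reduction idea described above can be sketched directly (a minimal illustration for a general tridiagonal system of order 2^k − 1, not the Poisson-specific block algorithm of Buzbee, Golub, and Nielson):

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    # Solve a tridiagonal system (sub-diagonal a, diagonal b, super-diagonal c,
    # right-hand side d) of order 2**k - 1, with the convention a[0] = c[-1] = 0.
    n = len(b)
    if n == 1:
        return np.array([d[0] / b[0]])
    i = np.arange(1, n, 2)                     # unknowns kept after elimination
    alpha = a[i] / b[i - 1]
    gamma = c[i] / b[i + 1]
    a2 = -alpha * a[i - 1]                     # reduced system: half the size,
    b2 = b[i] - alpha * c[i - 1] - gamma * a[i + 1]   # still tridiagonal
    c2 = -gamma * c[i + 1]
    d2 = d[i] - alpha * d[i - 1] - gamma * d[i + 1]
    xp = np.zeros(n + 2)                       # padded so that x_{-1} = x_n = 0
    xp[i + 1] = cyclic_reduction(a2, b2, c2, d2)
    j = np.arange(0, n, 2)                     # back-substitute eliminated unknowns
    xp[j + 1] = (d[j] - a[j] * xp[j] - c[j] * xp[j + 2]) / b[j]
    return xp[1:-1]

rng = np.random.default_rng(4)
n = 15                                         # 2**4 - 1
b = 4.0 + rng.random(n)                        # diagonally dominant for safety
a = rng.random(n); a[0] = 0.0
c = rng.random(n); c[-1] = 0.0
d = rng.random(n)
x = cyclic_reduction(a, b, c, d)
```

Each level combines three consecutive equations to eliminate the middle unknown's neighbors, so the reduced system is again tridiagonal and the recursion bottoms out at a single equation, exactly as described in the commentary.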
SEPARABLE NONLINEAR INVERSE PROBLEMS
"... Abstract. This paper considers an efficient iterative approach to solve separable nonlinear least squares problems that arise in large scale inverse problems. A variable projection GaussNewton method is used to solve the nonlinear least squares problem, and Tikhonov regularization is incorporated u ..."
Abstract. This paper considers an efficient iterative approach to solve separable nonlinear least squares problems that arise in large-scale inverse problems. A variable projection Gauss-Newton method is used to solve the nonlinear least squares problem, and Tikhonov regularization is incorporated using an iterative Lanczos hybrid scheme. Regularization parameters are chosen automatically using a weighted generalized cross validation method, thus providing a nonlinear solver that requires very little input from the user. An application from image deblurring illustrates the effectiveness of the resulting numerical scheme.
Key words. Gauss-Newton method, ill-posed inverse problems, iterative methods, Lanczos bidiagonalization, hybrid method, Tikhonov regularization
AMS Subject Classifications: 65F20, 65F30
1. Introduction. Ill-posed inverse problems arise in many important applications, including astrophysics, astronomy, medical imaging, geophysics, parameter identification, and inverse scattering; see, for example, [8, 16, 17, 41] and the references therein. In this paper we consider large-scale inverse problems of the form b = A(y_true) x_true + ε, (1.1)
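The variable projection idea can be illustrated on a toy separable model (an invented one-parameter exponential fit, not the paper's deblurring problem): the linear parameters x are eliminated by a least-squares solve, leaving a reduced problem in the nonlinear parameters y alone.

```python
import numpy as np

# Toy separable model b ≈ A(y) x: one nonlinear parameter y, two linear
# parameters x. All names and values here are illustrative.
t = np.linspace(0.0, 1.0, 20)

def A(y):
    return np.column_stack([np.exp(-y * t), np.ones_like(t)])

y_true = 2.0
x_true = np.array([3.0, 0.5])
b = A(y_true) @ x_true                         # noise-free data for the sketch

def reduced_residual(y):
    x = np.linalg.lstsq(A(y), b, rcond=None)[0]    # eliminate the linear part
    return np.linalg.norm(A(y) @ x - b)

# Minimize the reduced functional over y alone (a grid search stands in for
# the Gauss-Newton iteration used in the paper).
ys = np.linspace(0.5, 4.0, 3501)
y_hat = ys[np.argmin([reduced_residual(y) for y in ys])]
```

Regularization (the Tikhonov/Lanczos-hybrid machinery of the abstract) would enter inside the inner linear solve; here plain least squares suffices because the toy problem is well-posed.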