Results 11-20 of 25
Approximating Dominant Singular Triplets of Large Sparse Matrices via Modified Moments
Numer. Algorithms, 1996
Abstract

Cited by 4 (2 self)
this paper reflect the use of 2-cyclic iteration matrices as defined in Equation (2). The three main steps that constitute the CSI-MSVD algorithm are: 1. calculation of the CSI iterate using Equations (26) and (27), 2. calculation of the new moments for the current iterate, and 3. updating the bidiagonal matrix and approximating the eigenvalues of the two-cyclic iteration matrix through the QR iteration. Figure 2 shows the dependencies involved in the steps of the above outlined procedure. The pipelined nature of the computation indicates that Steps 1, 2, and 3 described could be carried out concurrently. For example, the computation of the antidiagonal elements φ13, φ22, φ15, φ24, φ33 (shown in the box labeled PHI in Figure 2) could be overlapped with the computation of the iterates ...
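The role of the 2-cyclic matrix can be made concrete with a short sketch (our own NumPy illustration, not the CSI-MSVD code): the symmetric 2-cyclic matrix assembled from a matrix A has eigenvalues equal to plus and minus the singular values of A, which is why the SVD can be attacked through an equivalent symmetric eigenproblem.

```python
import numpy as np

# Illustrative sketch only (not the CSI-MSVD implementation): the symmetric
# 2-cyclic matrix C = [[0, A], [A^T, 0]] has eigenvalues +/- sigma_i(A),
# plus |m - n| zeros, so singular triplets of A can be read off an
# equivalent symmetric eigenproblem.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
m, n = A.shape
C = np.block([[np.zeros((m, m)), A], [A.T, np.zeros((n, n))]])

sigma = np.linalg.svd(A, compute_uv=False)   # singular values of A
eig = np.sort(np.linalg.eigvalsh(C))         # eigenvalues of the 2-cyclic matrix
expected = np.sort(np.concatenate([sigma, -sigma, np.zeros(m - n)]))
```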
Computation of the Singular Subspace Associated With the Smallest Singular Values of Large Matrices, 1993
Abstract

Cited by 4 (2 self)
We compare the block-Lanczos and the Davidson methods for computing a basis of a singular subspace associated with the smallest singular values of large matrices. We introduce a simple modification of the preconditioning step of Davidson's method which appears to be efficient on a range of large sparse matrices.
Nonequivalence deflation for the solution of matrix latent value problems, 1995
Abstract

Cited by 4 (3 self)
The following nonlinear latent value problem is studied: F(λ)x = 0, where F(λ) is an n × n analytic nondefective matrix function in the scalar λ. The latent pair (λ, x) has been previously found by applying Newton's method to a certain equation. The deflation technique is required for finding another latent pair starting from a computed latent pair. Several deflation strategies are examined, and the nonequivalence deflation technique is developed. It is demonstrated, by analysis and numerical experience, to be a reliable and efficient strategy for finding a few latent roots in a given region.
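As a hedged illustration of the Newton step referred to above (the function names and the linear test case F(λ) = A − λI are ours, not the paper's), Newton's method can be applied to the bordered system F(λ)x = 0, cᵀx = 1, solving for the latent pair (λ, x) simultaneously:

```python
import numpy as np

# Illustrative sketch (ours, not the paper's algorithm): Newton's method on
# the bordered system [F(lam) x; c^T x - 1] = 0.  The linear special case
# F(lam) = A - lam*I has latent roots equal to the eigenvalues of A.
def newton_latent(F, dF, lam, x, c, iters=50, tol=1e-12):
    n = len(x)
    for _ in range(iters):
        r = np.concatenate([F(lam) @ x, [c @ x - 1.0]])
        if np.linalg.norm(r) < tol:
            break
        # Jacobian of the bordered residual with respect to (x, lam)
        J = np.zeros((n + 1, n + 1))
        J[:n, :n] = F(lam)
        J[:n, n] = dF(lam) @ x
        J[n, :n] = c
        d = np.linalg.solve(J, r)
        x, lam = x - d[:n], lam - d[n]
    return lam, x

A = np.diag([1.0, 3.0, 7.0])
F = lambda lam: A - lam * np.eye(3)
dF = lambda lam: -np.eye(3)          # derivative of F with respect to lam
c = np.ones(3)                        # normalization vector
lam, x = newton_latent(F, dF, 2.5, np.array([0.2, 0.9, 0.1]), c)
```

Starting near the middle eigenvalue, the iteration converges quadratically to the latent pair λ = 3, x = (0, 1, 0) (normalized so cᵀx = 1).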
SVDPACKC (Version 1.0) User's Guide
Abstract

Cited by 2 (0 self)
SVDPACKC comprises four numerical (iterative) methods for computing the singular value decomposition (SVD) of large sparse matrices using ANSI C. This software package implements Lanczos- and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left and right singular vectors) for large sparse matrices. The package has been ported to a variety of machines ranging from supercomputers to workstations: CRAY Y-MP, IBM RS/6000-550, DEC 5000/100, HP 9000/750, SPARCstation 2, and Macintosh II/fx. This document (i) explains each algorithm in some detail, (ii) explains the input parameters for each program, (iii) explains how to compile/execute each program, and (iv) illustrates the performance of each method when we compute lower rank approximations to sparse term-document matrices from information retrieval applications. A user-friendly software interface to the package for UNIX-based systems and the Macintosh II/fx is als...
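The same task, a few of the largest singular triplets of a large sparse matrix, can be sketched today with SciPy's `svds` (an ARPACK/Lanczos-type routine; this is an illustration of the computation, not SVDPACKC itself, and the random matrix stands in for a term-document matrix):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Illustration with SciPy's svds (not SVDPACKC): compute the 6 largest
# singular triplets of a random sparse stand-in for a term-document matrix.
A = sparse_random(500, 300, density=0.01, random_state=1)
U, s, Vt = svds(A, k=6)          # singular values returned in ascending order

# Verify the triplet relations A v_i = s_i u_i.
residual = np.linalg.norm(A @ Vt.T - U * s)
```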
A WEIGHTED-GCV METHOD FOR LANCZOS-HYBRID REGULARIZATION, 2008
Abstract

Cited by 2 (1 self)
Lanczos-hybrid regularization methods have been proposed as effective approaches for solving large-scale ill-posed inverse problems. Lanczos methods restrict the solution to lie in a Krylov subspace, but they are hindered by semiconvergence behavior, in that the quality of the solution first increases and then decreases. Hybrid methods apply a standard regularization technique, such as Tikhonov regularization, to the projected problem at each iteration. Thus, regularization in hybrid methods is achieved both by Krylov filtering and by appropriate choice of a regularization parameter at each iteration. In this paper we describe a weighted generalized cross validation (WGCV) method for choosing the parameter. Using this method we demonstrate that the semiconvergence behavior of the Lanczos method can be overcome, making the solution less sensitive to the number of iterations.
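A compact sketch of the plain (unweighted) GCV parameter choice on a synthetic ill-posed problem may help fix ideas; the paper's weighted variant inserts a weight ω into the trace term of the denominator. The test problem and all names here are our own illustration, not the paper's code:

```python
import numpy as np

# Sketch of standard GCV for the Tikhonov parameter (the paper's W-GCV
# weights the trace term in the denominator).  Synthetic ill-posed problem:
rng = np.random.default_rng(2)
m, n = 40, 20
U0, _, Vt0 = np.linalg.svd(rng.standard_normal((m, n)), full_matrices=False)
s_decay = 0.5 ** np.arange(n)                 # rapidly decaying spectrum
A = (U0 * s_decay) @ Vt0
x_true = np.ones(n)
b = A @ x_true + 1e-3 * rng.standard_normal(m)

U, sv, Vt = np.linalg.svd(A, full_matrices=False)
beta = U.T @ b

def gcv(lam):
    # Tikhonov filter factors; components of b outside range(A) only add a
    # constant to the numerator, so they do not move the minimizer.
    f = sv**2 / (sv**2 + lam**2)
    return np.sum(((1 - f) * beta) ** 2) / (m - np.sum(f)) ** 2

lams = np.logspace(-8, 1, 200)
lam_star = lams[np.argmin([gcv(l) for l in lams])]

def solve(lam):
    f = sv / (sv**2 + lam**2)                 # regularized inverse filter
    return Vt.T @ (f * beta)

err_reg = np.linalg.norm(solve(lam_star) - x_true)
err_naive = np.linalg.norm(solve(0.0) - x_true)   # unregularized solution
```

On this example the GCV-chosen parameter gives a far smaller solution error than the unregularized (λ = 0) solve, which amplifies the noise by the reciprocal of the smallest singular value.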
Block Krylov-Schur method for large symmetric eigenvalue problems, tech. rep., 2004
Abstract

Cited by 2 (1 self)
Abstract. Stewart's recent Krylov-Schur algorithm offers two advantages over Sorensen's implicitly restarted Arnoldi (IRA) algorithm. The first is ease of deflation of converged Ritz vectors; the second is the avoidance of the potential forward instability of the QR algorithm. In this paper we develop a block version of the Krylov-Schur algorithm for symmetric eigenproblems. Details of this block algorithm are discussed, including how to handle the rank-deficient cases and how to use different block sizes. Numerical results on the efficiency of the block Krylov-Schur method are reported.
Estimating the Largest Singular Values/Vectors of Large Sparse Matrices via Modified Moments, 1996
Abstract

Cited by 1 (1 self)
This dissertation considers algorithms for determining a few of the largest singular values and corresponding vectors of large sparse matrices by solving equivalent eigenvalue problems. The procedure is based on a method by Golub and Kent for estimating eigenvalues of equivalent eigensystems using modified moments. The asynchronicity in the computations of moments and eigenvalues makes this method attractive for parallel implementations on a network of workstations. However, one potential drawback to this method is that there is no obvious relationship between the modified moments and the eigenvectors. The lack of eigenvector approximations makes deflation schemes difficult, and no robust implementation of the Golub/Kent scheme is currently used in practical applications. Methods to approximate both eigenvalues and eigenvectors using the theory of modified moments in conjunction with the Chebyshev semi-iterative method are described in this dissertation. Deflation issues and implicit ...
Testing the nearest Kronecker product preconditioner on Markov chains and stochastic automata networks
 INFORMS Journal on Computing, 2004
Abstract

Cited by 1 (0 self)
doi 10.1287/ijoc.1030.0041 © 2004 INFORMS. This paper is the experimental follow-up to Langville and Stewart (2002), where the theoretical background for the nearest Kronecker product (NKP) preconditioner was developed. Here we test the NKP preconditioner on both Markov chains (MCs) and stochastic automata networks (SANs). We conclude that the NKP preconditioner is not appropriate for general MCs, but is very effective for an MC stored as a SAN.
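The rearrangement idea behind nearest-Kronecker-product approximation (in the style of Van Loan and Pitsianis; the function and test matrices here are our own sketch, not the paper's code) is: vectorize the blocks of A into the rows of a matrix R, then take its dominant singular pair as the Kronecker factors. For a matrix that is exactly a Kronecker product, the factors are recovered up to scaling:

```python
import numpy as np

# Hedged sketch (ours) of nearest-Kronecker-product approximation:
# the (m2 x n2) blocks of A become rows of R, and the dominant singular
# pair of R yields the factors of the best Frobenius-norm kron(B, C).
def nkp(A, m1, n1, m2, n2):
    R = np.zeros((m1 * n1, m2 * n2))
    for i in range(m1):
        for j in range(n1):
            block = A[i * m2:(i + 1) * m2, j * n2:(j + 1) * n2]
            R[i * n1 + j] = block.reshape(-1)   # row-major vec, used consistently
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    B = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1)
    C = np.sqrt(s[0]) * Vt[0].reshape(m2, n2)
    return B, C                                  # A ~= np.kron(B, C)

B0 = np.array([[1.0, 2.0], [3.0, 4.0]])
C0 = np.array([[0.0, 1.0], [1.0, 0.5]])
A = np.kron(B0, C0)                              # exactly a Kronecker product
B, C = nkp(A, 2, 2, 2, 2)
```

Because a joint sign flip of the singular vectors cancels in the product, kron(B, C) reproduces A exactly here even though B and C are only determined up to a common scaling.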
Fast Calculation of Singular Values for MIMO Wireless Systems, 2003
Abstract
The standard (pointwise) linear channel model for MIMO wireless systems provides a simplistic mapping from antenna elements to the continuous (operator) viewpoint of wireless channels. Low-rank, high-dimension sampling matrices generated by Ray Tracing may be used to estimate (with error) the "true" operator channel. In order to achieve reasonable estimation error bounds, intractably large dimension matrices must be used for Ray Tracing.
SEPARABLE NONLINEAR INVERSE PROBLEMS
Abstract
Abstract. This paper considers an efficient iterative approach to solve separable nonlinear least squares problems that arise in large scale inverse problems. A variable projection Gauss-Newton method is used to solve the nonlinear least squares problem, and Tikhonov regularization is incorporated using an iterative Lanczos hybrid scheme. Regularization parameters are chosen automatically using a weighted generalized cross validation method, thus providing a nonlinear solver that requires very little input from the user. An application from image deblurring illustrates the effectiveness of the resulting numerical scheme.
Key words. Gauss-Newton method, ill-posed inverse problems, iterative methods, Lanczos bidiagonalization, hybrid method, Tikhonov regularization
AMS Subject Classifications: 65F20, 65F30
1. Introduction. Ill-posed inverse problems arise in many important applications, including astrophysics, astronomy, medical imaging, geophysics, parameter identification, and inverse scattering; see, for example, [8, 16, 17, 41] and the references therein. In this paper we consider large scale inverse problems of the form b = A(y_true) x_true + ε. (1.1)
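The variable projection idea for b ≈ A(y)x can be sketched in a few lines (an illustrative exponential-fitting example with our own names, using a generic least-squares solver rather than the paper's Lanczos-hybrid machinery): the linear coefficients x are eliminated by a projected least squares solve, and only the nonlinear parameters y are optimized.

```python
import numpy as np
from scipy.optimize import least_squares

# Variable-projection sketch (illustration only): b ~= A(y) x with A(y)
# holding decaying exponentials.  For each y, the linear x is eliminated
# via least squares; the outer solver sees only the reduced residual in y.
t = np.linspace(0.0, 1.0, 50)

def A(y):
    return np.exp(-np.outer(t, y))            # columns exp(-y_j * t)

y_true = np.array([1.0, 5.0])
x_true = np.array([2.0, -1.0])
b = A(y_true) @ x_true                        # noise-free data for the demo

def reduced_residual(y):
    Ay = A(y)
    x, *_ = np.linalg.lstsq(Ay, b, rcond=None)   # projected linear solve
    return Ay @ x - b

res = least_squares(reduced_residual, x0=np.array([0.5, 4.0]))
y_est = np.sort(res.x)
```

Starting from a rough guess, the reduced problem converges to the true decay rates, with the linear amplitudes recovered for free by the inner least squares solve.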