Results 1–10 of 20
Choosing regularization parameters in iterative methods for ill-posed problems
SIAM J. Matrix Anal. Appl., 2001
Cited by 43 (6 self)
Numerical solution of ill-posed problems is often accomplished by discretization (projection onto a finite-dimensional subspace) followed by regularization. If the discrete problem has high dimension, though, typically we compute an approximate solution by projecting the discrete problem onto an even smaller-dimensional space, via iterative methods based on Krylov subspaces. In this work we present a common framework for efficient algorithms that regularize after this second projection rather than before it. We show that determining regularization parameters based on the final projected problem rather than on the original discretization has firmer justification and often involves less computational expense. We prove some results on the approximate equivalence of this approach to other forms of regularization, and we present numerical examples.
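The "regularize after the second projection" idea above can be sketched in a few lines: run Golub-Kahan (Lanczos) bidiagonalization to build the Krylov subspace, then apply Tikhonov regularization to the small projected problem. This is an illustrative dense-NumPy sketch, not the paper's algorithm; the function names `golub_kahan` and `hybrid_tikhonov` are my own, and breakdown handling and restarting are omitted.

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization of A started from b.
    Returns U (m x (k+1)), lower-bidiagonal B ((k+1) x k), V (n x k),
    and beta1 = ||b||, so that A @ V = U @ B.  Full reorthogonalization
    is included for numerical safety."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    beta1 = np.linalg.norm(b)
    U[:, 0] = b / beta1
    for j in range(k):
        v = A.T @ U[:, j]
        v -= V[:, :j] @ (V[:, :j].T @ v)          # reorthogonalize against V
        B[j, j] = np.linalg.norm(v); V[:, j] = v / B[j, j]
        u = A @ V[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)  # reorthogonalize against U
        B[j + 1, j] = np.linalg.norm(u); U[:, j + 1] = u / B[j + 1, j]
    return U, B, V, beta1

def hybrid_tikhonov(A, b, k, lam):
    """Regularize *after* projection: solve the small Tikhonov problem
    min ||B y - beta1*e1||^2 + lam^2 ||y||^2, then map back as x = V y."""
    U, B, V, beta1 = golub_kahan(A, b, k)
    rhs = np.zeros(k + 1); rhs[0] = beta1
    y = np.linalg.solve(B.T @ B + lam**2 * np.eye(k), B.T @ rhs)
    return V @ y
```

When k equals the full column dimension, the Krylov subspace (generically) fills the whole space and the hybrid solution coincides with ordinary Tikhonov regularization of the discretized problem; the computational payoff comes from stopping at k much smaller than n.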
A trust-region approach to the regularization of large-scale discrete forms of ill-posed problems
SIAM J. Sci. Comput., 2000
Cited by 26 (6 self)
We consider large-scale least squares problems where the coefficient matrix comes from the discretization of an operator in an ill-posed problem, and the right-hand side contains noise. Special techniques known as regularization methods are needed to treat these problems in order to control the effect of the noise on the solution. We pose the regularization problem as a quadratically constrained least squares problem. This formulation is equivalent to Tikhonov regularization, and we note that it is also a special case of the trust-region subproblem from optimization. We analyze the trust-region subproblem in the regularization case, and we consider the nontrivial extensions of a recently developed method for general large-scale subproblems that will allow us to handle this case. The method relies on matrix-vector products only, has low and fixed storage requirements, and can handle the singularities arising in ill-posed problems. We present numerical results on test problems, on an ...
Augmented implicitly restarted Lanczos bidiagonalization methods
SIAM J. Sci. Comput.
Cited by 22 (9 self)
New restarted Lanczos bidiagonalization methods for the computation of a few of the largest or smallest singular values of a large matrix are presented. Restarting is carried out by augmentation of Krylov subspaces that arise naturally in the standard Lanczos bidiagonalization method. The augmenting vectors are associated with certain Ritz or harmonic Ritz vectors. Computed examples show the new methods to be competitive with available schemes. Key words: singular value computation, partial singular value decomposition, iterative method, large-scale computation.
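The "standard Lanczos bidiagonalization method" that this paper restarts can be sketched as follows: k bidiagonalization steps produce a small bidiagonal matrix B whose singular values (Ritz values) approximate those of A. This is only the unrestarted baseline, with my own function name; the paper's contribution, augmentation by Ritz or harmonic Ritz vectors, is omitted.

```python
import numpy as np

def lanczos_singular_values(A, k, seed=0):
    """Approximate singular values of A from k Lanczos bidiagonalization
    steps: the singular values of the small (k+1) x k bidiagonal matrix B
    are the Ritz values.  Full reorthogonalization, no restarting."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    p = rng.standard_normal(m)
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    U[:, 0] = p / np.linalg.norm(p)
    for j in range(k):
        v = A.T @ U[:, j]
        v -= V[:, :j] @ (V[:, :j].T @ v)          # reorthogonalize against V
        B[j, j] = np.linalg.norm(v); V[:, j] = v / B[j, j]
        u = A @ V[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)  # reorthogonalize against U
        B[j + 1, j] = np.linalg.norm(u); U[:, j + 1] = u / B[j + 1, j]
    return np.linalg.svd(B, compute_uv=False)[:k]
```

In practice k stays small and the extreme Ritz values converge first; restarting keeps the cost of reorthogonalization bounded, which is exactly the problem the augmented restarts address.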
A Large-Scale Trust-Region Approach to the Regularization of Discrete Ill-Posed Problems
Rice University, 1998
Cited by 17 (6 self)
We consider the problem of computing the solution of large-scale discrete ill-posed problems when there is noise in the data. These problems arise in important areas such as seismic inversion, medical imaging, and signal processing. We pose the problem as a quadratically constrained least squares problem and develop a method for its solution. Our method does not require factorization of the coefficient matrix, has very low storage requirements, and handles the high degree of singularity arising in discrete ill-posed problems. We present numerical results on test problems and an application of the method to a practical problem with real data.
Core problems in linear algebraic systems
SIAM J. Matrix Anal. Appl., 2006
Cited by 16 (1 self)
For any linear system Ax ≈ b we define a set of core problems and show that the orthogonal upper bidiagonalization of [b, A] gives such a core problem. In particular we show that these core problems have desirable properties such as minimal dimensions. When a total least squares problem is solved by first finding a core problem, we show the resulting theory is consistent with earlier generalizations, but much simpler and clearer. The approach is important for other related solutions and leads, for example, to an elegant solution to the data least squares problem. The ideas could be useful for solving ill-posed problems.
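The bidiagonalization-based core extraction can be sketched directly: bidiagonalize A starting from b (equivalently, upper-bidiagonalize [b, A]) and stop at the first negligible coefficient; the leading bidiagonal block, together with beta1 = ||b||, is the minimal-dimension core problem. An illustrative sketch under the assumption of exact breakdown detection via a tolerance; the function name is my own, and the paper's deflation analysis is omitted.

```python
import numpy as np

def core_problem(A, b, tol=1e-10):
    """Golub-Kahan bidiagonalization of A started from b, stopped at the
    first negligible alpha or beta.  Returns beta1 = ||b|| and the leading
    bidiagonal block B; the core problem is  min || beta1*e1 - B y ||."""
    n = A.shape[1]
    beta1 = np.linalg.norm(b)
    u = b / beta1
    U, V, alphas, betas = [u], [], [], []
    for j in range(n):
        v = A.T @ U[j]
        for w in V: v -= (w @ v) * w        # reorthogonalize
        a = np.linalg.norm(v)
        if a < tol: break                   # breakdown: core problem found
        alphas.append(a); V.append(v / a)
        u = A @ V[j]
        for w in U: u -= (w @ u) * w
        bt = np.linalg.norm(u)
        if bt < tol: break                  # breakdown: core problem found
        betas.append(bt); U.append(u / bt)
    k = len(alphas)
    B = np.zeros((len(betas) + 1, k))
    for i, a in enumerate(alphas): B[i, i] = a
    for i, bt in enumerate(betas): B[i + 1, i] = bt
    return beta1, B
```

For a diagonal A whose right-hand side touches only three left singular directions, the bidiagonalization breaks down after three steps and the core problem is 3 x 3, even though A itself is larger: the dimensions that b never excites are discarded.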
Restarted block Lanczos bidiagonalization methods
Numer. Algorithms
Cited by 13 (6 self)
The problem of computing a few of the largest or smallest singular values and associated singular vectors of a large matrix arises in many applications. This paper describes restarted block Lanczos bidiagonalization methods based on augmentation of Ritz vectors or harmonic Ritz vectors by block Krylov subspaces. Key words: partial singular value decomposition, restarted iterative method, implicit shifts, augmentation. AMS subject classifications: 65F15, 15A18.
A Weighted-GCV Method for Lanczos-Hybrid Regularization
2008
Cited by 6 (1 self)
Lanczos-hybrid regularization methods have been proposed as effective approaches for solving large-scale ill-posed inverse problems. Lanczos methods restrict the solution to lie in a Krylov subspace, but they are hindered by semiconvergence behavior, in that the quality of the solution first increases and then decreases. Hybrid methods apply a standard regularization technique, such as Tikhonov regularization, to the projected problem at each iteration. Thus, regularization in hybrid methods is achieved both by Krylov filtering and by appropriate choice of a regularization parameter at each iteration. In this paper we describe a weighted generalized cross validation (WGCV) method for choosing the parameter. Using this method we demonstrate that the semiconvergence behavior of the Lanczos method can be overcome, making the solution less sensitive to the number of iterations.
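The weighted-GCV criterion can be sketched via the SVD filter-factor form of Tikhonov regularization: minimize G_w(lam) = m * ||A x_lam - b||^2 / (m - omega * sum_i f_i)^2, where f_i = s_i^2 / (s_i^2 + lam^2) and omega = 1 recovers standard GCV. A coarse grid-search sketch applied to A directly, with a hypothetical function name; the paper instead evaluates this on the small projected problem at every Lanczos iteration and uses a proper 1-D minimizer.

```python
import numpy as np

def wgcv_parameter(A, b, omega=1.0, lams=None):
    """Grid-minimize the weighted-GCV function for the Tikhonov parameter,
    using the SVD of A to evaluate residual and trace terms cheaply."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    m = A.shape[0]
    b_out2 = b @ b - beta @ beta              # ||component of b outside range(A)||^2
    lams = np.logspace(-8, 2, 300) if lams is None else lams
    best_lam, best_g = None, np.inf
    for lam in lams:
        f = s**2 / (s**2 + lam**2)            # Tikhonov filter factors
        resid2 = np.sum(((1.0 - f) * beta)**2) + b_out2
        g = m * resid2 / (m - omega * np.sum(f))**2
        if g < best_g:
            best_g, best_lam = g, lam
    return best_lam
```

On noiseless, consistent data the criterion correctly drives the parameter toward zero (no regularization needed); the weight omega < 1 is what lets the method avoid the undersmoothing that plain GCV exhibits on the projected problems.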
Regularization Tools for Training Feed-Forward Neural Networks Part II: Large-Scale Problems
1996
Cited by 4 (3 self)
... this paper, we propose optimization methods explicitly applied to the nonlinear regularized problem for large-scale problems. To be specific, we formulate ...
Parallel resolvent Monte Carlo algorithms for linear algebra problems
2005
Cited by 4 (1 self)
In this paper we consider Monte Carlo (MC) algorithms based on the use of the resolvent matrix for solving linear algebraic problems. Estimates for the speedup and efficiency of the algorithms are presented. Some numerical examples performed on a cluster of workstations using MPI are given.
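The resolvent Monte Carlo family builds on the von Neumann-Ulam scheme: estimate x = (I - A)^{-1} b = sum_k A^k b by random walks whose importance weights track the Neumann series (the resolvent (I - qA)^{-1} of the paper reduces to this with A replaced by qA). A serial sketch with a hypothetical function name, assuming the spectral radius of |A| is below 1 and A has no zero rows; the paper's point is that independent walks parallelize trivially over MPI ranks.

```python
import numpy as np

def mc_solve(A, b, walks_per_entry=200, walk_len=60, seed=0):
    """Monte Carlo estimate of x = (I - A)^{-1} b via random walks.
    Transition probabilities are proportional to |A[i, j]|; each step
    multiplies the importance weight by A[i, j] / P[i, j]."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    P = np.abs(A) / np.abs(A).sum(axis=1, keepdims=True)  # transition matrix
    x = np.zeros(n)
    for i0 in range(n):
        acc = 0.0
        for _ in range(walks_per_entry):
            i, w, est = i0, 1.0, b[i0]       # k = 0 term of the series
            for _ in range(walk_len):
                j = rng.choice(n, p=P[i])
                w *= A[i, j] / P[i, j]       # importance weight update
                est += w * b[j]              # adds the (A^k b)[i0] estimate
                i = j
            acc += est
        x[i0] = acc / walks_per_entry
    return x
```

Each walk is an unbiased estimate of one component of x (up to the walk-length truncation), which is why the speedup estimates in the abstract are essentially those of embarrassingly parallel sampling.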
A Robust and Efficient Parallel SVD Solver Based on Restarted Lanczos Bidiagonalization
Cited by 2 (0 self)
Lanczos bidiagonalization is a competitive method for computing a partial singular value decomposition of a large sparse matrix, that is, when only a subset of the singular values and corresponding singular vectors are required. However, a straightforward implementation of the algorithm has the problem of loss of orthogonality between computed Lanczos vectors, and some reorthogonalization technique must be applied. Also, an effective restarting strategy must be used to prevent excessive growth of the cost of reorthogonalization per iteration. On the other hand, if the method is to be implemented on a distributed-memory parallel computer, then additional precautions are required so that parallel efficiency is maintained as the number of processors increases. In this paper, we present a Lanczos bidiagonalization procedure implemented in SLEPc, a software library for the solution of large, sparse eigenvalue problems on parallel computers. The solver is numerically robust and scales well up to hundreds of processors. Key words: partial singular value decomposition, Lanczos bidiagonalization, thick restart, parallel computing. AMS subject classifications: 65F15, 15A18, 65F50.