Results 1–10 of 19
Choosing regularization parameters in iterative methods for ill-posed problems
 SIAM J. MATRIX ANAL. APPL
, 2001
Cited by 36 (6 self)
Numerical solution of ill-posed problems is often accomplished by discretization (projection onto a finite-dimensional subspace) followed by regularization. If the discrete problem has high dimension, though, typically we compute an approximate solution by projecting the discrete problem onto an even smaller-dimensional space, via iterative methods based on Krylov subspaces. In this work we present a common framework for efficient algorithms that regularize after this second projection rather than before it. We show that determining regularization parameters based on the final projected problem rather than on the original discretization has firmer justification and often involves less computational expense. We prove some results on the approximate equivalence of this approach to other forms of regularization, and we present numerical examples.
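The "regularize after the second projection" idea above can be sketched in a few lines: run k steps of Golub-Kahan (Lanczos) bidiagonalization, then apply Tikhonov regularization to the small (k+1)-by-k projected problem. This is an illustrative sketch under my own choices (function names, a dense normal-equations solve on the tiny projected system), not the authors' implementation:

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan (Lanczos) bidiagonalization of A, started from b.
    Returns U (m x (k+1)), lower-bidiagonal B ((k+1) x k), and V (n x k)."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        B[j, j] = np.linalg.norm(v)
        V[:, j] = v / B[j, j]
        u = A @ V[:, j] - B[j, j] * U[:, j]
        B[j + 1, j] = np.linalg.norm(u)
        U[:, j + 1] = u / B[j + 1, j]
    return U, B, V

def hybrid_tikhonov(A, b, k, lam):
    """Regularize after projection: Tikhonov on the (k+1) x k projected problem."""
    U, B, V = golub_kahan(A, b, k)
    c = np.zeros(k + 1); c[0] = np.linalg.norm(b)   # U^T b = ||b|| e_1
    # The projected problem is tiny, so a dense solve is cheap for any lam.
    y = np.linalg.solve(B.T @ B + lam**2 * np.eye(k), B.T @ c)
    return V @ y
```

Because the projected problem has only k columns, the regularization parameter can be re-chosen at every iteration at negligible cost, which is the computational point the abstract makes.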
A trust-region approach to the regularization of large-scale discrete forms of ill-posed problems
 SISC
, 2000
Cited by 23 (6 self)
We consider large-scale least squares problems where the coefficient matrix comes from the discretization of an operator in an ill-posed problem, and the right-hand side contains noise. Special techniques known as regularization methods are needed to treat these problems in order to control the effect of the noise on the solution. We pose the regularization problem as a quadratically constrained least squares problem. This formulation is equivalent to Tikhonov regularization, and we note that it is also a special case of the trust-region subproblem from optimization. We analyze the trust-region subproblem in the regularization case, and we consider the nontrivial extensions of a recently developed method for general large-scale subproblems that will allow us to handle this case. The method relies on matrix-vector products only, has low and fixed storage requirements, and can handle the singularities arising in ill-posed problems. We present numerical results on test problems, on an ...
Augmented implicitly restarted Lanczos bidiagonalization methods
 SIAM J. Sci. Comput
Cited by 18 (9 self)
Abstract. New restarted Lanczos bidiagonalization methods for the computation of a few of the largest or smallest singular values of a large matrix are presented. Restarting is carried out by augmentation of Krylov subspaces that arise naturally in the standard Lanczos bidiagonalization method. The augmenting vectors are associated with certain Ritz or harmonic Ritz vectors. Computed examples show the new methods to be competitive with available schemes. Key words. singular value computation, partial singular value decomposition, iterative method, large-scale computation
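The basic mechanism behind restarted bidiagonalization methods can be sketched as follows. This deliberately crude variant restarts from just the leading left Ritz vector, rather than augmenting the Krylov space with several Ritz or harmonic Ritz vectors as the paper does, but it shows the cycle structure (bidiagonalize, extract Ritz approximations from the small matrix B, restart); all names and parameters are illustrative:

```python
import numpy as np

def gk_steps(A, b, k):
    """k Golub-Kahan steps from left vector b, with full reorthogonalization."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        v -= V[:, :j] @ (V[:, :j].T @ v)            # reorthogonalize against V
        B[j, j] = np.linalg.norm(v); V[:, j] = v / B[j, j]
        u = A @ V[:, j] - B[j, j] * U[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)    # reorthogonalize against U
        B[j + 1, j] = np.linalg.norm(u); U[:, j + 1] = u / B[j + 1, j]
    return U, B, V

def restarted_largest_sv(A, k=5, cycles=15, seed=0):
    """Largest singular value via restarted bidiagonalization; the restart keeps
    only the leading left Ritz vector (far cruder than the paper's augmentation)."""
    rng = np.random.default_rng(seed)
    b = rng.standard_normal(A.shape[0])
    s = None
    for _ in range(cycles):
        U, B, V = gk_steps(A, b, k)
        Ub, s, Vt = np.linalg.svd(B)    # Ritz values = singular values of B
        b = U @ Ub[:, 0]                # restart from the best left Ritz vector
    return s[0]
```

The singular values of the small bidiagonal matrix B are the Ritz approximations; augmented restarting accelerates exactly this cycle by retaining more than one direction between restarts.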
Core problems in linear algebraic systems
 SIAM. J. MATRIX ANAL. APPL
, 2006
Cited by 15 (1 self)
For any linear system Ax ≈ b we define a set of core problems and show that the orthogonal upper bidiagonalization of [b, A] gives such a core problem. In particular we show that these core problems have desirable properties such as minimal dimensions. When a total least squares problem is solved by first finding a core problem, we show the resulting theory is consistent with earlier generalizations, but much simpler and clearer. The approach is important for other related solutions and leads, for example, to an elegant solution to the data least squares problem. The ideas could be useful for solving ill-posed problems.
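The bidiagonalization view of the core problem can be illustrated directly: run the Golub-Kahan recurrence on [b, A] and stop at exact breakdown; the bidiagonal matrix produced is a core problem of minimal dimension, and the original least squares problem can be solved through it. A sketch, with `tol` standing in for exact zero in floating point and all function names my own:

```python
import numpy as np

def core_problem(A, b, tol=1e-10):
    """Golub-Kahan bidiagonalization of [b, A], stopping at (numerical) breakdown.
    Returns the core bidiagonal matrix B, the right basis V, and beta1 = ||b||."""
    n = A.shape[1]
    beta1 = np.linalg.norm(b)
    U = [b / beta1]; V = []; alphas = []; betas = []
    for j in range(n):
        v = A.T @ U[j] - (betas[-1] * V[-1] if V else 0.0)
        alpha = np.linalg.norm(v)
        if alpha < tol:                  # breakdown: core problem is complete
            break
        V.append(v / alpha); alphas.append(alpha)
        u = A @ V[-1] - alpha * U[j]
        beta = np.linalg.norm(u)
        if beta < tol:                   # breakdown: square core matrix
            break
        U.append(u / beta); betas.append(beta)
    k = len(alphas)
    B = np.zeros((len(U), k))
    for i, a in enumerate(alphas):
        B[i, i] = a
    for i, bt in enumerate(betas):
        B[i + 1, i] = bt
    return B, np.column_stack(V), beta1

def core_solve(A, b):
    """Solve min ||Ax - b|| through the (smaller) core problem."""
    B, V, beta1 = core_problem(A, b)
    rhs = np.zeros(B.shape[0]); rhs[0] = beta1
    y = np.linalg.lstsq(B, rhs, rcond=None)[0]
    return V @ y
```

For A = diag(3, 2, 1) and b = e1, the recurrence breaks down immediately and the core problem is 1-by-1, while a generic b yields a core of full dimension: the core dimension reflects how much of A the right-hand side actually touches.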
A Large-Scale Trust-Region Approach to the Regularization of Discrete Ill-Posed Problems
 RICE UNIVERSITY
, 1998
Cited by 14 (6 self)
We consider the problem of computing the solution of large-scale discrete ill-posed problems when there is noise in the data. These problems arise in important areas such as seismic inversion, medical imaging, and signal processing. We pose the problem as a quadratically constrained least squares problem and develop a method for the solution of such problems. Our method does not require factorization of the coefficient matrix, has very low storage requirements, and handles the high degree of singularity arising in discrete ill-posed problems. We present numerical results on test problems and an application of the method to a practical problem with real data.
Restarted block Lanczos bidiagonalization methods
 Numer. Algorithms
Cited by 10 (6 self)
Abstract. The problem of computing a few of the largest or smallest singular values and associated singular vectors of a large matrix arises in many applications. This paper describes restarted block Lanczos bidiagonalization methods based on augmentation of Ritz vectors or harmonic Ritz vectors by block Krylov subspaces. Key words. partial singular value decomposition, restarted iterative method, implicit shifts, augmentation. AMS subject classifications. 65F15, 15A18
Regularization Tools for Training Feed-Forward Neural Networks Part II: Large-scale problems
, 1996
Cited by 4 (3 self)
... this paper, we propose optimization methods explicitly applied to the nonlinear regularized problem for large-scale problems. To be specific, we formulate ...
Parallel resolvent Monte Carlo algorithms for linear algebra problems
, 2005
Cited by 3 (1 self)
In this paper we consider Monte Carlo (MC) algorithms based on the use of the resolvent matrix for solving linear algebraic problems. Estimates for the speedup and efficiency of the algorithms are presented. Some numerical examples performed on a cluster of workstations using MPI are given.
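A toy sequential sketch of the underlying resolvent estimator: for ||qA|| < 1, x = (I - qA)^{-1} b equals the Neumann series sum_k (qA)^k b, which can be sampled by random walks over the index set. The parallel and MPI aspects of the paper are not shown, and the uniform transition probabilities below are a simplification (all names and parameters are mine):

```python
import numpy as np

def mc_resolvent_solve(A, b, q=1.0, walks=4000, length=12, seed=0):
    """Monte Carlo estimate of x = (I - qA)^{-1} b via the Neumann series
    sum_k (qA)^k b, sampled with uniform random walks over the index set.
    Requires ||qA|| < 1 for the series to converge."""
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        total = 0.0
        for _ in range(walks):
            state, w, s = i, 1.0, b[i]      # k = 0 term of the series
            for _ in range(length):
                nxt = int(rng.integers(n))
                w *= q * A[state, nxt] * n  # weight correcting uniform sampling
                state = nxt
                s += w * b[state]           # contribution of the k-th term
            total += s
        x[i] = total / walks
    return x
```

Each component x_i has its own independent estimator, which is what makes such algorithms embarrassingly parallel across processors, the property the speedup estimates in the paper quantify.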
A WEIGHTED-GCV METHOD FOR LANCZOS-HYBRID REGULARIZATION
, 2008
Cited by 3 (1 self)
Lanczos-hybrid regularization methods have been proposed as effective approaches for solving large-scale ill-posed inverse problems. Lanczos methods restrict the solution to lie in a Krylov subspace, but they are hindered by semiconvergence behavior, in that the quality of the solution first increases and then decreases. Hybrid methods apply a standard regularization technique, such as Tikhonov regularization, to the projected problem at each iteration. Thus, regularization in hybrid methods is achieved both by Krylov filtering and by appropriate choice of a regularization parameter at each iteration. In this paper we describe a weighted generalized cross validation (WGCV) method for choosing the parameter. Using this method we demonstrate that the semiconvergence behavior of the Lanczos method can be overcome, making the solution less sensitive to the number of iterations.
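The weighted-GCV selection on the projected problem can be sketched via the SVD of the small matrix B from the Lanczos bidiagonalization: omega is the weight on the trace term, with omega = 1 recovering standard GCV. The grid search and function names below are my own simplifications of the idea, not the paper's algorithm:

```python
import numpy as np

def wgcv_choose(B, c, omega, lams):
    """Weighted GCV for Tikhonov on the small projected problem min ||B y - c||.
    omega = 1 gives standard GCV; omega < 1 down-weights the trace term."""
    p, k = B.shape
    Ub, s, Vt = np.linalg.svd(B, full_matrices=True)
    chat = Ub.T @ c                     # rotate the data into the SVD basis
    best_g, best_lam = np.inf, None
    for lam in lams:
        f = s**2 / (s**2 + lam**2)      # Tikhonov filter factors
        # Residual norm of the regularized solution, plus the unreachable part.
        resid = np.sum(((1 - f) * chat[:k])**2) + np.sum(chat[k:]**2)
        denom = (p - omega * np.sum(f))**2
        g = p * resid / denom
        if g < best_g:
            best_g, best_lam = g, lam
    return best_lam
```

With exact (noise-free) data the criterion correctly drives lambda toward zero, while a noisy component that the model cannot fit pushes the minimizer to a strictly positive lambda; since B is only (k+1)-by-k, evaluating this at every Lanczos iteration is cheap.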
Regularization Tools for Training Large-Scale Neural Networks
, 1996
Cited by 1 (0 self)
We present regularization tools for training small- and medium-scale as well as large-scale artificial feed-forward neural networks. The determination of the weights leads to very ill-conditioned nonlinear least squares problems, and regularization is often suggested to get control over the network complexity, small variance error, and nice optimization problems. The algorithms proposed solve explicitly a sequence of Tikhonov-regularized nonlinear least squares problems. For small- and medium-size problems the Gauss-Newton method is applied to the regularized problem, which is much better conditioned than the original problem and exhibits far better convergence properties than a Levenberg-Marquardt method. Numerical results presented also confirm that the proposed implementations are more reliable and efficient than the Levenberg-Marquardt method. For large-scale problems, methods using new special-purpose automatic differentiation combined with conjugate gradient methods are proposed. The alg ...
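A Gauss-Newton step on a Tikhonov-regularized nonlinear least squares objective, as described above, can be sketched on a one-neuron toy model f(x; w) = w0 * tanh(w1 * x). The model, names, and parameters are illustrative stand-ins, not the paper's networks or its automatic-differentiation machinery:

```python
import numpy as np

def gn_tikhonov(x, y, w, lam, iters=100):
    """Gauss-Newton on the regularized objective
    0.5 * ||r(w)||^2 + 0.5 * lam * ||w||^2,  r_i(w) = w0 * tanh(w1 * x_i) - y_i.
    The lam * I term is exactly what improves the conditioning of each step."""
    w = np.asarray(w, dtype=float)
    for _ in range(iters):
        t = np.tanh(w[1] * x)
        r = w[0] * t - y                                   # residuals
        J = np.column_stack([t, w[0] * x * (1 - t**2)])    # Jacobian dr/dw
        g = J.T @ r + lam * w                              # regularized gradient
        H = J.T @ J + lam * np.eye(2)                      # regularized GN Hessian
        w = w - np.linalg.solve(H, g)
    return w
```

The contrast the abstract draws with Levenberg-Marquardt is that here lam is a fixed regularization parameter from the Tikhonov subproblem, not a damping factor adjusted per step.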