Results 1–10 of 44
The geometry of algorithms with orthogonality constraints
SIAM J. MATRIX ANAL. APPL., 1998
Abstract

Cited by 383 (1 self)
In this paper we develop new Newton and conjugate gradient algorithms on the Grassmann and Stiefel manifolds. These manifolds represent the constraints that arise in such areas as the symmetric eigenvalue problem, nonlinear eigenvalue problems, electronic structures computations, and signal processing. In addition to the new algorithms, we show how the geometrical framework gives penetrating new insights allowing us to create, understand, and compare algorithms. The theory proposed here provides a taxonomy for numerical linear algebra algorithms that provides a top-level mathematical view of previously unrelated algorithms. It is our hope that developers of new algorithms and perturbation theories will benefit from the theory, methods, and examples in this paper.
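A minimal numerical sketch of the kind of manifold algorithm the abstract describes: a projected-gradient step on the Stiefel manifold with a QR-based retraction as a cheap stand-in for the paper's geodesic formulas. The objective, step size, and matrix sizes are illustrative, not from the paper.

```python
import numpy as np

def project_tangent(Y, G):
    """Project an ambient gradient G onto the tangent space of the
    Stiefel manifold at Y (embedded metric)."""
    sym = (Y.T @ G + G.T @ Y) / 2
    return G - Y @ sym

def qr_retraction(Y, Z):
    """Map Y + Z back onto the manifold via QR: a cheap stand-in for
    the geodesic formulas developed in the paper."""
    Q, R = np.linalg.qr(Y + Z)
    return Q * np.sign(np.diag(R))   # fix column signs for uniqueness

# Toy descent step for the Rayleigh-quotient objective trace(Y^T A Y).
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)); A = A + A.T
Y = np.linalg.qr(rng.standard_normal((6, 2)))[0]    # a point on Stiefel(6, 2)
G = 2 * A @ Y                                       # Euclidean gradient
Y_new = qr_retraction(Y, -0.1 * project_tangent(Y, G))
```

The retraction keeps the iterate exactly orthonormal, which is the whole point of working on the manifold rather than penalizing the constraint.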
Robust Solutions to Least-Squares Problems with Uncertain Data
1997
Abstract

Cited by 149 (13 self)
We consider least-squares problems where the coefficient matrices A, b are unknown but bounded. We minimize the worst-case residual error using (convex) second-order cone programming, yielding an algorithm with complexity similar to one singular value decomposition of A. The method can be interpreted as a Tikhonov regularization procedure, with the advantage that it provides an exact bound on the robustness of the solution, and a rigorous way to compute the regularization parameter. When the perturbation has a known (e.g., Toeplitz) structure, the same problem can be solved in polynomial time using semidefinite programming (SDP). We also consider the case when A, b are rational functions of an unknown-but-bounded perturbation vector. We show how to minimize (via SDP) upper bounds on the optimal worst-case residual. We provide numerical examples, including one from robust identification and one from robust interpolation. Key words: least-squares, uncertainty, robustness, second-order cone...
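The worst-case residual the abstract minimizes has a closed form for unstructured spectral-norm-bounded perturbations, and the robust minimizer lies on the Tikhonov path. A hedged sketch (grid search over the regularization parameter, standing in for the paper's second-order cone program; sizes and rho are illustrative):

```python
import numpy as np

def worst_case_residual(A, b, x, rho):
    """max over ||dA||_2 <= rho of ||(A + dA)x - b||, which equals
    ||Ax - b|| + rho * ||x|| (the bound is attained by a rank-one dA)."""
    return np.linalg.norm(A @ x - b) + rho * np.linalg.norm(x)

def robust_ls(A, b, rho, mus=np.logspace(-6, 2, 200)):
    """Grid-search stand-in for the paper's second-order cone program:
    the robust minimizer lies on the Tikhonov path
    x(mu) = (A^T A + mu I)^{-1} A^T b, so we scan mu and keep the best."""
    n = A.shape[1]
    xs = (np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b) for mu in mus)
    return min(xs, key=lambda x: worst_case_residual(A, b, x, rho))

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 3)); b = rng.standard_normal(8)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
x_rob = robust_ls(A, b, rho=0.5)
```

As the abstract notes, robustness acts like regularization: the robust solution is never longer than the plain least-squares solution.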
The L-Curve and its Use in the Numerical Treatment of Inverse Problems
in Computational Inverse Problems in Electrocardiology, ed. P. Johnston, Advances in Computational Bioengineering, 2000
Abstract

Cited by 29 (2 self)
The L-curve is a log-log plot of the norm of a regularized solution versus the norm of the corresponding residual. It is a convenient graphical tool for displaying the tradeoff between the size of a regularized solution and its fit to the given data, as the regularization parameter varies. The L-curve thus gives insight into the regularizing properties of the underlying regularization method, and it is an aid in choosing an appropriate regularization parameter for the given data. In this chapter we summarize the main properties of the L-curve, and demonstrate by examples its usefulness and its limitations both as an analysis tool and as a method for choosing the regularization parameter.

1 Introduction

Practically all regularization methods for computing stable solutions to inverse problems involve a tradeoff between the "size" of the regularized solution and the quality of the fit that it provides to the given data. What distinguishes the various regularization methods is how...
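For a small dense problem, the points of the Tikhonov L-curve can be generated directly from the SVD. A minimal sketch (the filter-factor convention pairs lambda with the penalty lambda^2 ||x||^2; sizes are illustrative):

```python
import numpy as np

def l_curve_points(A, b, lambdas):
    """Solution-norm / residual-norm pairs for Tikhonov regularization,
    computed from a (small, dense) SVD of A; plotting them on log-log
    axes gives the L-curve."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    res_norms, sol_norms = [], []
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)          # Tikhonov filter factors
        x = Vt.T @ (f * beta / s)
        res_norms.append(np.linalg.norm(A @ x - b))
        sol_norms.append(np.linalg.norm(x))
    return np.array(res_norms), np.array(sol_norms)

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 5)); b = rng.standard_normal(8)
res, sol = l_curve_points(A, b, np.logspace(-3, 1, 20))
```

The tradeoff the abstract describes is visible in the monotonicity: as lambda grows, the solution norm shrinks while the residual norm grows.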
Estimation of the L-Curve via Lanczos Bidiagonalization
BIT, 1997
Abstract

Cited by 25 (13 self)
The L-curve criterion is often applied to determine a suitable value of the regularization parameter when solving ill-conditioned linear systems of equations with a right-hand side contaminated by errors of unknown norm. However, the computation of the L-curve is quite costly for large problems; the determination of a point on the L-curve requires that both the norm of the regularized approximate solution and the norm of the corresponding residual vector be available. Therefore, usually only a few points on the L-curve are computed, and these values, rather than the L-curve, are used to determine a value of the regularization parameter. We propose a new approach to determine a value of the regularization parameter based on computing an L-ribbon that contains the L-curve in its interior. An L-ribbon can be computed fairly inexpensively by partial Lanczos bidiagonalization of the matrix of the given linear system of equations. A suitable value of the regularization parameter is then det...
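The workhorse behind the L-ribbon is partial Golub-Kahan (Lanczos) bidiagonalization. A bare-bones sketch of the recurrence, without the reorthogonalization or breakdown handling a robust implementation needs:

```python
import numpy as np

def lanczos_bidiag(A, b, k):
    """k steps of Golub-Kahan (Lanczos) bidiagonalization:
    A @ V_k = U_{k+1} @ B_k with B_k lower bidiagonal. No
    reorthogonalization and no breakdown handling; a sketch only."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        B[j, j] = np.linalg.norm(v); V[:, j] = v / B[j, j]
        u = A @ V[:, j] - B[j, j] * U[:, j]
        B[j + 1, j] = np.linalg.norm(u); U[:, j + 1] = u / B[j + 1, j]
    return U, B, V

rng = np.random.default_rng(3)
A = rng.standard_normal((10, 6)); b = rng.standard_normal(10)
U, B, V = lanczos_bidiag(A, b, 4)
```

Only matrix-vector products with A and A^T are needed, which is why a few steps suffice to bound L-curve quantities for large problems.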
A trust-region approach to the regularization of large-scale discrete forms of ill-posed problems
SISC, 2000
Abstract

Cited by 21 (4 self)
We consider large-scale least squares problems where the coefficient matrix comes from the discretization of an operator in an ill-posed problem, and the right-hand side contains noise. Special techniques known as regularization methods are needed to treat these problems in order to control the effect of the noise on the solution. We pose the regularization problem as a quadratically constrained least squares problem. This formulation is equivalent to Tikhonov regularization, and we note that it is also a special case of the trust-region subproblem from optimization. We analyze the trust-region subproblem in the regularization case, and we consider the nontrivial extensions of a recently developed method for general large-scale subproblems that will allow us to handle this case. The method relies on matrix-vector products only, has low and fixed storage requirements, and can handle the singularities arising in ill-posed problems. We present numerical results on test problems, on an ...
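The Tikhonov / trust-region equivalence the abstract mentions can be made concrete on a small dense problem: the constrained solution is a Tikhonov solution whose multiplier is found from the secular equation ||x(lam)|| = delta. A naive bisection sketch (the paper's method is matrix-free; this one is not):

```python
import numpy as np

def constrained_ls(A, b, delta, tol=1e-12):
    """Solve min ||Ax - b|| subject to ||x|| <= delta by bisection on the
    Lagrange multiplier lam in x(lam) = (A^T A + lam I)^{-1} A^T b."""
    n = A.shape[1]
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    if np.linalg.norm(x) <= delta:              # constraint inactive
        return x, 0.0
    xnorm = lambda lam: np.linalg.norm(
        np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b))
    lo, hi = 0.0, 1.0
    while xnorm(hi) > delta:                    # bracket the multiplier
        hi *= 2.0
    while hi - lo > tol:                        # ||x(lam)|| decreases in lam
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if xnorm(mid) > delta else (lo, mid)
    lam = (lo + hi) / 2.0
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    return x, lam

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 4)); b = rng.standard_normal(6)
delta = 0.5 * np.linalg.norm(np.linalg.lstsq(A, b, rcond=None)[0])
x, lam = constrained_ls(A, b, delta)
```

When the constraint is active, the returned multiplier lam is exactly the Tikhonov regularization parameter for that noise level.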
A Large-Scale Trust-Region Approach to the Regularization of Discrete Ill-Posed Problems
RICE UNIVERSITY, 1998
Abstract

Cited by 12 (4 self)
We consider the problem of computing the solution of large-scale discrete ill-posed problems when there is noise in the data. These problems arise in important areas such as seismic inversion, medical imaging, and signal processing. We pose the problem as a quadratically constrained least squares problem and develop a method for the solution of such problems. Our method does not require factorization of the coefficient matrix, has very low storage requirements, and handles the high degree of singularity arising in discrete ill-posed problems. We present numerical results on test problems and an application of the method to a practical problem with real data.
L-Curve Curvature Bounds via Lanczos Bidiagonalization
Abstract

Cited by 12 (7 self)
The L-curve is often applied to determine a suitable value of the regularization parameter when solving ill-conditioned linear systems of equations with a right-hand side contaminated by errors of unknown norm. The location of the vertex of the L-curve typically yields a suitable value of the regularization parameter. However, the computation of the L-curve and of its curvature is quite costly for large problems; the determination of a point on the L-curve requires that both the norm of the regularized approximate solution and the norm of the corresponding residual vector be available. Recently, the L-ribbon, which contains the L-curve in its interior, has been shown to be suitable for the determination of the regularization parameter for large-scale problems. In this paper we describe how techniques similar to those employed for the computation of the L-ribbon can be used to compute a "curvature-ribbon," which contains the graph of the curvature of the L-curve. Both the curvature-ribbon and the L-ribbon can be computed fairly inexpensively by partial Lanczos bidiagonalization of the matrix of the given linear system of equations. A suitable value of the regularization parameter is then determined from these ribbons, and we show that an associated approximate solution of the linear system can be computed with little additional work.
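The quantity being bounded is the plane-curve curvature of (log residual norm, log solution norm). A finite-difference sketch of that formula, checked on a synthetic curve (the paper bounds the curvature via Lanczos bidiagonalization rather than forming it like this):

```python
import numpy as np

def l_curve_curvature(res_norms, sol_norms):
    """Signed curvature kappa = (x' y'' - y' x'') / (x'^2 + y'^2)^(3/2)
    of the curve (x, y) = (log residual norm, log solution norm),
    approximated by finite differences; the L-curve vertex is where
    |kappa| peaks."""
    x, y = np.log(res_norms), np.log(sol_norms)
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# Sanity check on a synthetic curve: these norms make (x, y) trace the
# unit circle in log-log coordinates, whose curvature is 1 everywhere.
theta = np.linspace(0.0, np.pi / 2, 201)
kappa = l_curve_curvature(np.exp(np.cos(theta)), np.exp(np.sin(theta)))
```

The curvature formula is parameterization-invariant, so any consistent sampling of the regularization parameter can be fed in directly.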
An interior-point trust-region-based method for large-scale nonnegative regularization
Inverse Problems, 2002
Abstract

Cited by 11 (1 self)
We present a new method for solving large-scale quadratic problems with quadratic and nonnegativity constraints. Such problems arise, for example, in the regularization of ill-posed problems in image restoration where, in addition, some of the matrices involved are very ill-conditioned. The new method uses recently developed techniques for the large-scale trust-region subproblem.
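The feasibility structure the abstract describes (least squares with x >= 0) can be illustrated with a projected-gradient sketch; the paper's interior-point trust-region method is far more sophisticated and also handles the quadratic constraint, which this toy omits:

```python
import numpy as np

def projected_gradient_nnls(A, b, iters=2000):
    """Projected-gradient sketch for min ||Ax - b||^2 subject to x >= 0.
    The step 1/||A||_2^2 guarantees monotone descent from x = 0."""
    L = np.linalg.norm(A, 2) ** 2       # spectral norm squared
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = np.maximum(0.0, x - (A.T @ (A @ x - b)) / L)
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((8, 5)); b = rng.standard_normal(8)
x = projected_gradient_nnls(A, b)
```

The projection onto the nonnegative orthant is just a componentwise max with zero, which is what makes first-order methods on this constraint set so cheap per iteration.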
Kronecker Product and SVD Approximations in Image Restoration
LINEAR ALGEBRA APPL., 1998
Abstract

Cited by 10 (5 self)
Image restoration applications often result in ill-posed least squares problems involving large, structured matrices. One approach used extensively is to restore the image in the frequency domain, thus providing fast algorithms using FFTs. This is equivalent to using a circulant approximation to a given matrix. Iterative methods may also be used effectively by exploiting the structure of the matrix. While iterative schemes are more expensive than FFT-based methods, it has been demonstrated that they are capable of providing better restorations. As an alternative, we propose an approximate singular value decomposition, which can be used in a variety of applications. Used as a direct method, the computed restorations are comparable to iterative methods but are computationally less expensive. In addition, the approximate SVD may be used with the generalized cross-validation method to choose regularization parameters. It is also demonstrated that the approximate SVD can be an ef...
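A core tool for Kronecker-structured restoration problems is the nearest-Kronecker-product approximation. A sketch of the standard Van Loan-style construction (rearrange blocks, take the dominant SVD triple), which may differ in detail from the paper's approach; all sizes are illustrative:

```python
import numpy as np

def nearest_kronecker(M, p, q, r, s):
    """Best Frobenius-norm approximation M ~ kron(B, C) with B (p x q)
    and C (r x s): stack vec of each (r x s) block of M into a row,
    then read B and C off the rank-1 SVD of that rearrangement."""
    R = np.array([M[i * r:(i + 1) * r, j * s:(j + 1) * s].ravel()
                  for i in range(p) for j in range(q)])   # (p*q) x (r*s)
    U, sig, Vt = np.linalg.svd(R, full_matrices=False)
    B = np.sqrt(sig[0]) * U[:, 0].reshape(p, q)
    C = np.sqrt(sig[0]) * Vt[0].reshape(r, s)
    return B, C

# When M is exactly a Kronecker product, the construction recovers it.
rng = np.random.default_rng(6)
B0 = rng.standard_normal((2, 3)); C0 = rng.standard_normal((3, 2))
M = np.kron(B0, C0)
B, C = nearest_kronecker(M, 2, 3, 3, 2)
```

Once M ~ kron(B, C), an approximate SVD of M follows from the small SVDs of B and C, which is what makes the direct method in the abstract cheap.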
Tikhonov regularization with a solution constraint
 SIAM J. Sci. Comput
Abstract

Cited by 10 (4 self)
Many numerical methods for the solution of linear ill-posed problems apply Tikhonov regularization. This paper presents a modification of a numerical method proposed by Golub and von Matt for quadratically constrained least-squares problems and applies it to Tikhonov regularization of large-scale linear discrete ill-posed problems. The method is based on partial Lanczos bidiagonalization and Gauss quadrature. Computed examples illustrating its performance are presented.