Results 1-10 of 41
Fast Nonsymmetric Iterations and Preconditioning for Navier-Stokes Equations
SIAM J. Sci. Comput., 1994
Abstract (Cited by 74, 10 self)
Discretization and linearization of the steady-state Navier-Stokes equations gives rise to a nonsymmetric indefinite linear system of equations. In this paper, we introduce preconditioning techniques for such systems with the property that the eigenvalues of the preconditioned matrices are bounded independently of the mesh size used in the discretization. We confirm and supplement these analytic results with a series of numerical experiments indicating that Krylov subspace iterative methods for nonsymmetric systems display rates of convergence that are independent of the mesh parameter. In addition, we show that preconditioning costs can be kept small by using iterative methods for some intermediate steps performed by the preconditioner. * This work was supported by the U.S. Army Research Office under grant DAAL03-92-G-0016 and the U.S. National Science Foundation under grant ASC-8958544 at the University of Maryland, and the Science and Engineering Research Council of Great Britain V...
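The central mechanism claimed here -- preconditioned eigenvalues bounded independently of the mesh size imply mesh-independent Krylov convergence -- can be illustrated with a toy experiment. The sketch below is numpy-only and is not the paper's Navier-Stokes preconditioner: diagonal test spectra stand in for the preconditioned operator, since CG convergence depends only on the spectrum.

```python
import numpy as np

def cg_iters(diag, tol=1e-8):
    """Conjugate gradient iterations needed to solve diag * x = 1 to a
    relative residual of tol.  CG convergence depends only on the
    spectrum, so diagonal matrices suffice for this illustration."""
    n = len(diag)
    b = np.ones(n)
    x = np.zeros(n)
    r = b.copy()
    p = r.copy()
    rr = r @ r
    for k in range(1, 10 * n):
        Ap = diag * p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) <= tol * np.sqrt(n):   # ||r|| <= tol * ||b||
            return k
        p = r + (rr_new / rr) * p
        rr = rr_new
    return 10 * n

# Spectrum bounded in [1, 2] independently of n: iteration counts stay flat.
iters_bounded = [cg_iters(np.linspace(1.0, 2.0, n)) for n in (50, 200, 800)]
# Spectrum of the 1-D finite-difference Laplacian: the condition number
# grows like n^2, and the iteration counts grow with it.
iters_laplace = [cg_iters(2 - 2 * np.cos(np.pi * np.arange(1, n + 1) / (n + 1)))
                 for n in (50, 200, 800)]
```

The first list stays essentially constant as n grows while the second climbs, which is the behaviour a mesh-independent eigenvalue bound buys.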
Computing An Eigenvector With Inverse Iteration
SIAM Review, 1997
Abstract (Cited by 55, 1 self)
The purpose of this paper is twofold: to analyse the behaviour of inverse iteration for computing a single eigenvector of a complex, square matrix; and to review Jim Wilkinson's contributions to the development of the method. In the process we derive several new results regarding the convergence of inverse iteration in exact arithmetic. In the case of normal matrices we show that residual norms decrease strictly monotonically. For eighty percent of the starting vectors a single iteration is enough. In the case of nonnormal matrices, we show that the iterates converge asymptotically to an invariant subspace. However the residual norms may not converge. The growth in residual norms from one iteration to the next can exceed the departure of the matrix from normality. We present an example where the residual growth is exponential in the departure of the matrix from normality. We also explain the often significant regress of the residuals after the first iteration: it occurs when the no...
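The method under analysis is short enough to sketch. Below is a minimal numpy version of inverse iteration with a fixed shift (names are illustrative; a production code would factor A - shift*I once and reuse the factors), run on a symmetric, hence normal, matrix -- the case in which the paper proves monotone residual decrease.

```python
import numpy as np

def inverse_iteration(A, shift, x0, steps=5):
    """Inverse iteration: repeatedly solve (A - shift*I) y = x and
    normalize.  Returns the final iterate and the residual norms
    ||A x - rho x|| at each step, where rho is the Rayleigh quotient."""
    n = A.shape[0]
    M = A - shift * np.eye(n)
    x = x0 / np.linalg.norm(x0)
    residuals = []
    for _ in range(steps):
        y = np.linalg.solve(M, x)     # the amplification step
        x = y / np.linalg.norm(y)
        rho = x @ (A @ x)             # Rayleigh quotient estimate
        residuals.append(np.linalg.norm(A @ x - rho * x))
    return x, residuals

# Normal (symmetric) example: residual norms decrease monotonically,
# and a single iteration already does most of the work.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
A = (A + A.T) / 2
shift = np.linalg.eigvalsh(A)[0] + 1e-3   # near the smallest eigenvalue
x, res = inverse_iteration(A, shift, rng.standard_normal(6))
```

For nonnormal matrices the same loop can exhibit the residual growth the paper analyses; only the test matrix above would change.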
Matrix nearness problems and applications
Applications of Matrix Theory, 1989
Abstract (Cited by 54, 7 self)
A matrix nearness problem consists of finding, for an arbitrary matrix A, a nearest member of some given class of matrices, where distance is measured in a matrix norm. A survey of nearness problems is given, with particular emphasis on the fundamental properties of symmetry, positive definiteness, orthogonality, normality, rank-deficiency and instability. Theoretical results and computational methods are described. Applications of nearness problems in areas including control theory, numerical analysis and statistics are outlined.
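Two of the nearness problems surveyed (symmetry and positive semidefiniteness) have closed-form Frobenius-norm solutions and make a compact sketch. Both formulas are classical results; the function names below are illustrative.

```python
import numpy as np

def nearest_symmetric(A):
    """The nearest symmetric matrix to A in the Frobenius norm is the
    symmetric part (A + A^T)/2 (a classical result)."""
    return (A + A.T) / 2

def nearest_psd_of_symmetric(B):
    """For a *symmetric* B, the nearest positive semidefinite matrix in
    the Frobenius norm is obtained by zeroing the negative eigenvalues
    in the spectral decomposition B = V diag(w) V^T."""
    w, V = np.linalg.eigh(B)
    return (V * np.clip(w, 0.0, None)) @ V.T

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
S = nearest_symmetric(A)          # symmetric by construction
P = nearest_psd_of_symmetric(S)   # no negative eigenvalues remain
```

Composing the two gives a common practical recipe for repairing an indefinite "covariance-like" matrix.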
Expressions And Bounds For The GMRES Residual
BIT, 1999
Abstract (Cited by 30, 0 self)
Expressions and bounds are derived for the residual norm in GMRES. It is shown that the minimal residual norm is large as long as the Krylov basis is well-conditioned. For scaled Jordan blocks the minimal residual norm is expressed in terms of eigenvalues and departure from normality. For normal matrices the minimal residual norm is expressed in terms of products of relative eigenvalue differences. Key words: linear system, Krylov methods, GMRES, MINRES, Vandermonde matrix, eigenvalues, departure from normality. AMS subject classification: 15A03, 15A06, 15A09, 15A12, 15A18, 15A60, 65F10, 65F15, 65F20, 65F35. 1. Introduction. The generalised minimal residual method (GMRES) [31, 36] (and MINRES for Hermitian matrices [30]) is an iterative method for solving systems of linear equations Ax = b. The approximate solution in iteration i minimises the two-norm of the residual b - Az over the Krylov space span{b, Ab, ..., A^{i-1} b}. The goal of this paper is to express this minimal residual norm...
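The minimal residual norm studied in this abstract can be computed directly from the Arnoldi relation, without forming the GMRES iterates. The sketch below is a dense, unrestarted, illustrative implementation, not a production solver.

```python
import numpy as np

def gmres_residual_norms(A, b, m):
    """Minimal residual norms ||b - A x_j|| of (unrestarted) GMRES with
    x0 = 0: build the Arnoldi relation A Q_j = Q_{j+1} H_j, then solve
    the small Hessenberg least-squares problem min_y ||beta e1 - H_j y||
    at every step."""
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    norms = []
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        e1 = np.zeros(j + 2)
        e1[0] = beta
        Hj = H[:j + 2, :j + 1]
        y, *_ = np.linalg.lstsq(Hj, e1, rcond=None)
        norms.append(np.linalg.norm(e1 - Hj @ y))   # = ||b - A x_j||
        if H[j + 1, j] < 1e-12 * beta:       # lucky breakdown: exact solve
            break
        Q[:, j + 1] = v / H[j + 1, j]
    return norms

rng = np.random.default_rng(3)
A = np.eye(20) + 0.5 * rng.standard_normal((20, 20))  # nonsymmetric test matrix
b = rng.standard_normal(20)
norms = gmres_residual_norms(A, b, 10)
```

By construction the sequence is non-increasing; how fast it actually falls is exactly what the paper's expressions quantify.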
Is Nonnormality a Serious Difficulty?
Abstract (Cited by 12, 1 self)
The departure from normality of a matrix plays an essential role in numerical matrix computations, since it governs spectral instability. But this first consequence of high nonnormality was long regarded by practitioners as a mathematical oddity, since such matrices were not often encountered in practice. It appears now that more and more matrices, which have a possibly unbounded departure from normality, emerge in the modelling of physical problems at the edge of instability. They challenge many robust numerical codes because of a second and recently exposed consequence of nonnormality: the possible deterioration of the backward stability of algorithms. In this paper, we address the following four questions: i) what is a measure of nonnormality? ii) where do highly nonnormal matrices come from? iii) what is the influence of nonnormality on numerical stability in exact arithmetic? iv) what is its influence on the reliability of numerical software? It has long been known that nonnormal ma...
On The Roots Of The Orthogonal Polynomials And Residual Polynomials Associated With A Conjugate Gradient Method
 Journal of Numerical Linear Algebra
Abstract (Cited by 10, 3 self)
In this paper we explore two sets of polynomials, the orthogonal polynomials and the residual polynomials, associated with a preconditioned conjugate gradient iteration for the solution of the linear system Ax = b. In the context of preconditioning by the matrix C, we show that the roots of the orthogonal polynomials, also known as generalized Ritz values, are the eigenvalues of an orthogonal section of the matrix CA, while the roots of the residual polynomials, also known as pseudo-Ritz values (or roots of kernel polynomials), are the reciprocals of the eigenvalues of an orthogonal section of the matrix (CA)^{-1}. When CA is self-adjoint positive definite, this distinction is minimal, but for the indefinite or non-self-adjoint case it becomes important. We use these two sets of roots to form possibly nonconvex regions in the complex plane that describe the spectrum of CA. Key words: orthogonal polynomials, residual polynomials, conjugate gradient method, Ritz values, field of values.
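The "orthogonal section" in this abstract is concrete: project A onto an orthonormal basis of the Krylov space and take eigenvalues. A numpy sketch for the simplest case C = I with A self-adjoint positive definite (where, as the abstract notes, the distinction between the two root sets is minimal):

```python
import numpy as np

def ritz_values(A, b, k):
    """Eigenvalues of the orthogonal section V^T A V, where the columns
    of V form an orthonormal basis of span{b, Ab, ..., A^(k-1) b}.
    These are the generalized Ritz values (here with C = I)."""
    cols = [b]
    for _ in range(k - 1):
        cols.append(A @ cols[-1])
    V, _ = np.linalg.qr(np.column_stack(cols))   # orthonormal Krylov basis
    return np.sort(np.linalg.eigvalsh(V.T @ A @ V))

rng = np.random.default_rng(4)
B = rng.standard_normal((8, 8))
A = B @ B.T + np.eye(8)            # self-adjoint positive definite
b = rng.standard_normal(8)
ritz = ritz_values(A, b, 4)
spectrum = np.linalg.eigvalsh(A)
# For self-adjoint A, the Ritz values lie inside [lambda_min, lambda_max].
```

For indefinite or non-self-adjoint CA the pseudo-Ritz values (from the section of (CA)^{-1}) would be computed separately, which is where the paper's distinction matters.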
The Influence of Large Nonnormality on the Quality of Convergence of Iterative Methods in Linear Algebra
1994
Abstract (Cited by 10, 0 self)
The departure from normality of a matrix plays an essential role in numerical matrix computations. The bad numerical behaviour of highly nonnormal matrices has been known for a long time ([14], [25], [5]). But this first effect of high nonnormality, i.e. the increase in spectral instability, was regarded by practitioners as a mathematical oddity, since such matrices were not often encountered in practice. Even the most recent textbooks for engineers on eigenvalue computations, such as [19], do not warn the reader against such a possible difficulty. However, present-day computers make large-scale problems tractable and allow engineers to elaborate more and more complex and realistic models of physical phenomena. It seems that now more and more matrices that model physical problems at the edge of instability arise ([16], [18], [9]), which have a possibly unbounded departure from normality, and they challenge many robust numerical codes because of a second - and newly ana...
Eigenvalue estimates for nonnormal matrices and the zeros of random orthogonal polynomials on the unit circle
 J. Approx. Theory
Abstract (Cited by 9, 3 self)
We prove that for any n × n matrix A, and z with |z| ≥ ||A||, we have that ||(z − A)^{-1}|| ≤ cot(π/(4n)) · dist(z, spec(A))^{-1}. We apply this result to the study of random orthogonal polynomials on the unit circle.
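The stated resolvent bound is easy to spot-check numerically. The sketch below (numpy only; the matrix and the point z are arbitrary test data, with ||·|| taken as the 2-norm) evaluates both sides for a random matrix and a point satisfying |z| ≥ ||A||.

```python
import numpy as np

# Spot-check: for |z| >= ||A||,
#   ||(z*I - A)^{-1}|| <= cot(pi/(4n)) / dist(z, spec(A)).
rng = np.random.default_rng(5)
n = 5
A = rng.standard_normal((n, n))
z = 1.5 * np.linalg.norm(A, 2)                  # a point with |z| >= ||A||
dist = np.abs(z - np.linalg.eigvals(A)).min()   # distance from z to spec(A)
lhs = np.linalg.norm(np.linalg.inv(z * np.eye(n) - A), 2)
rhs = (1.0 / np.tan(np.pi / (4 * n))) / dist    # cot(pi/(4n)) / dist
```

For a normal matrix the left side equals 1/dist exactly; the cot(π/(4n)) factor is the price of nonnormality, growing only linearly in n.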