Results 1–10 of 29
A Schur method for solving algebraic Riccati equations
 IEEE Trans. Autom. Control
, 1979
Cited by 121 (3 self)
Abstract. An exact line search method has been introduced by Benner and Byers [IEEE Trans. Autom. Control, 43 (1998), pp. 101–107] for solving continuous algebraic Riccati equations. The method is a modification of Newton’s method. A convergence theory is established in that paper for the Newton-like method under the strong hypothesis of controllability, while the original Newton’s method needs only the weaker hypothesis of stabilizability for its convergence theory. It is conjectured there that the controllability condition can be weakened to the stabilizability condition. In this note we prove that conjecture.
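As context for the line-search modification, the underlying Newton iteration for the CARE AᵀX + XA − XBBᵀX + Q = 0 can be sketched in a few lines. This is a minimal illustration on an invented 2×2 system (not from the paper); X0 = 0 is a stabilizing initial guess here only because A itself is stable:

```python
import numpy as np
from scipy.linalg import solve_lyapunov

# Invented small example: Newton's method for the CARE
#   A^T X + X A - X B B^T X + Q = 0,
# starting from a stabilizing guess (X0 = 0 works because A is stable).
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
Q = np.eye(2)
G = B @ B.T

X = np.zeros((2, 2))
for _ in range(20):
    Ak = A - G @ X                                 # closed-loop matrix A - B B^T X_k
    # Newton step: solve the Lyapunov equation
    #   Ak^T X_new + X_new Ak = -(Q + X_k G X_k)
    X_new = solve_lyapunov(Ak.T, -(Q + X @ G @ X))
    if np.linalg.norm(X_new - X) < 1e-12:
        X = X_new
        break
    X = X_new

residual = A.T @ X + X @ A - X @ G @ X + Q
print(np.linalg.norm(residual))
```

Each step solves only a linear (Lyapunov) equation; the exact line search of Benner and Byers modifies how far along the Newton direction each update moves.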
The projected gradient method for least squares matrix approximations with spectral constraints
 SIAM J. Numer. Anal
, 1990
Cited by 48 (23 self)
Abstract. The problems of computing least squares approximations for various types of real and symmetric matrices subject to spectral constraints share a common structure. This paper describes a general procedure using the projected gradient method. It is shown that the projected gradient of the objective function on the manifold of constraints can usually be formulated explicitly. This gives rise to the construction of a descent flow that can be followed numerically. The explicit form also facilitates the computation of the second-order optimality conditions. Examples of applications are discussed. With slight modifications, the procedure can be extended to solve least squares problems for general matrices subject to singular-value constraints.

Key words. least squares approximation, projected gradient, spectral constraints, singular-value constraints

AMS(MOS) subject classifications. 65F15, 49D10

1. Introduction. Let S(n) denote the subspace of all symmetric matrices in R^{n×n}. Given a matrix A ∈ S(n), we define an isospectral surface M(A) of A by

(1) M(A) := {X ∈ R^{n×n} | X = Q^T A Q, Q ∈ O(n)},

where O(n) is the collection of all orthogonal matrices in R^{n×n}. Let ... represent either ...
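A minimal sketch of the projected gradient idea on an invented instance: minimize ||QᵀAQ − T||²_F over orthogonal Q by projecting the Euclidean gradient onto the tangent space of O(n) and retracting with a QR factorization. The step-size rule and retraction here are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric data matrix
T = np.diag(np.sort(np.linalg.eigvalsh(A)))          # target sharing A's spectrum

def f(Q):
    R = Q.T @ A @ Q - T
    return np.sum(R * R)                             # ||Q^T A Q - T||_F^2

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))     # random orthogonal start
f0 = f(Q)
for _ in range(300):
    R = Q.T @ A @ Q - T
    G = 2 * (A @ Q @ R.T + A.T @ Q @ R)              # Euclidean gradient of f
    grad = G - Q @ (Q.T @ G + G.T @ Q) / 2           # project onto tangent space of O(n)
    eta = 0.1
    while True:                                      # simple backtracking line search
        Qn, Rf = np.linalg.qr(Q - eta * grad)
        Qn = Qn * np.sign(np.diag(Rf))               # QR retraction back onto O(n)
        if f(Qn) < f(Q) or eta < 1e-12:
            break
        eta /= 2
    Q = Qn
print(f0, f(Q))
```

Following the negative projected gradient is a discrete analogue of the descent flow the abstract describes; every iterate stays (numerically) on the isospectral orbit because the update is a move in O(n).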
Computing An Eigenvector With Inverse Iteration
 SIAM Review
, 1997
Cited by 41 (1 self)
. The purpose of this paper is twofold: to analyse the behaviour of inverse iteration for computing a single eigenvector of a complex, square matrix; and to review Jim Wilkinson's contributions to the development of the method. In the process we derive several new results regarding the convergence of inverse iteration in exact arithmetic. In the case of normal matrices we show that residual norms decrease strictly monotonically. For eighty percent of the starting vectors a single iteration is enough. In the case of nonnormal matrices, we show that the iterates converge asymptotically to an invariant subspace. However the residual norms may not converge. The growth in residual norms from one iteration to the next can exceed the departure of the matrix from normality. We present an example where the residual growth is exponential in the departure of the matrix from normality. We also explain the often significant regress of the residuals after the first iteration: it occurs when the no...
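A minimal sketch of inverse iteration itself, on an invented symmetric (hence normal) matrix with known eigenvalues 1, ..., 6 and a shift near the largest one; per the paper, for normal matrices the residual norms decrease monotonically:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
Qo, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Qo @ np.diag(np.arange(1.0, n + 1)) @ Qo.T      # symmetric, eigenvalues 1..6
shift = 6.001                                       # shift close to the eigenvalue 6

x = rng.standard_normal(n)
x /= np.linalg.norm(x)
residuals = []
for _ in range(5):
    y = np.linalg.solve(A - shift * np.eye(n), x)   # one inverse-iteration solve
    x = y / np.linalg.norm(y)
    residuals.append(np.linalg.norm(A @ x - shift * x))
print(residuals)
```

With this well-separated spectrum the error contracts by a factor of about |6.001 − 6| / |6.001 − 5| = 1e-3 per step, so one iteration already gives an excellent eigenvector, echoing the paper's observation that a single iteration often suffices.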
Backward Error and Condition of Structured Linear Systems
 SIMAX
, 1992
Spectral analysis of the transition operator and its applications to smoothness analysis of wavelets
 SIAM J. Matrix. Anal. Appl
, 2001
Cited by 32 (16 self)
The purpose of this paper is to investigate spectral properties of the transition operator associated to a multivariate vector refinement equation and their applications to the study of smoothness of the corresponding refinable vector of functions. Let Φ = (φ1, ..., φr)^T be an r × 1 vector of compactly supported functions in L2(IR^s) satisfying the refinement equation Φ = Σ_{α ∈ Z^s} a(α) Φ(M · − α), where M is an expansive integer matrix. We assume that M is isotropic, i.e., M is similar to a diagonal matrix diag(σ1, ..., σs) with |σ1| = · · · = |σs|. For µ = (µ1, ..., µs) ∈ IN^s_0, define σ^{−µ} := σ1^{−µ1} · · · The smoothness of Φ is measured by the critical exponent ...
Minimal Residual Method Stronger Than Polynomial Preconditioning
, 1994
Cited by 22 (1 self)
This paper compares the convergence behavior of two popular iterative methods for solving systems of linear equations: the s-step restarted minimal residual method (commonly implemented by algorithms such as GMRES(s)), and (s − 1)-degree polynomial preconditioning. It is known that for normal matrices, and in particular for symmetric positive definite matrices, the convergence bounds for the two methods are the same. In this paper we demonstrate that for matrices unitarily equivalent to an upper triangular Toeplitz matrix, a similar result holds, namely, either both methods converge or both fail to converge. However, we show this result cannot be generalized to all matrices. Specifically, we develop a method, based on convexity properties of the generalized field of values of powers of the iteration matrix, to obtain examples of real matrices for which GMRES(s) converges for every initial vector, but every (s − 1)-degree polynomial preconditioning stagnates or diverges for...
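A bare-bones sketch of the s-step restarted minimal residual idea on an invented well-conditioned system: each cycle minimizes the residual over the Krylov space span{r, Ar, ..., A^{s−1}r} and restarts. Practical GMRES(s) implementations build an orthonormal Arnoldi basis instead of the raw (ill-conditioned) power basis used here:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
A = np.eye(n) + 0.02 * rng.standard_normal((n, n))   # eigenvalues clustered near 1
b = rng.standard_normal(n)

s = 3                                                # restart length: GMRES(s)
x = np.zeros(n)
res_norms = []
for _ in range(10):                                  # 10 restart cycles
    r = b - A @ x
    res_norms.append(np.linalg.norm(r))
    # raw Krylov basis [r, Ar, ..., A^{s-1} r]
    V = np.empty((n, s))
    V[:, 0] = r
    for j in range(1, s):
        V[:, j] = A @ V[:, j - 1]
    # minimal residual correction: minimize ||r - A V y|| over y
    y, *_ = np.linalg.lstsq(A @ V, r, rcond=None)
    x = x + V @ y
print(res_norms[0], res_norms[-1])
```

Because each cycle takes the best correction in its Krylov space, the residual norm never increases, which is exactly the property the comparison with fixed (s − 1)-degree polynomial preconditioning hinges on.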
Convergence of Vector Subdivision Schemes and Construction of Biorthogonal Multiple Wavelets
, 1997
Cited by 16 (5 self)
Let φ = (φ1, ..., φr)^T be a refinable vector of compactly supported functions in L2(IR). It is shown in this paper that there exists a refinable vector ˜φ of compactly supported functions in L2(IR) such that ˜φ is dual to φ if and only if the shifts of φ1, ..., φr are linearly independent. This result is established on the basis of a complete characterization of the convergence of vector subdivision schemes associated with exponentially decaying masks. As an application of the general theory, two interesting examples of biorthogonal double wavelets are constructed.
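For intuition, a scalar (r = 1) subdivision scheme can be iterated directly; this sketch uses the cubic B-spline mask (an illustrative choice, not an example from the paper) and starts from a delta sequence, whose iterates converge to samples of the B-spline limit function:

```python
import numpy as np

a = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 8.0    # cubic B-spline refinement mask

def subdivide(c):
    """One subdivision step: c_new(alpha) = sum_beta a(alpha - 2*beta) c(beta)."""
    up = np.zeros(2 * len(c))
    up[::2] = c                                  # upsample by 2 ...
    return np.convolve(up, a)                    # ... then convolve with the mask

c = np.array([0.0, 0.0, 1.0, 0.0, 0.0])          # delta sequence: limit is the B-spline
for _ in range(7):
    c = subdivide(c)
print(len(c), c.max())
```

Since the even and odd sub-masks each sum to 1, the scheme converges; the iterates approach samples of the cubic B-spline, whose maximum value is 2/3. The paper's vector case replaces the scalar mask a(α) by r × r matrices.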
Newton’s method for discrete algebraic Riccati equations when the closed-loop matrix has eigenvalues on the unit circle
 SIAM J. Matrix Anal. Appl
, 1998
Cited by 12 (5 self)
Abstract. When Newton’s method is applied to find the maximal symmetric solution of a discrete algebraic Riccati equation (DARE), convergence can be guaranteed under moderate conditions. In particular, the initial guess does not need to be close to the solution. The convergence is quadratic if the Fréchet derivative is invertible at the solution. When the closed-loop matrix has eigenvalues on the unit circle, the derivative at the solution is not invertible. The convergence of Newton’s method is shown to be either quadratic or linear with common ratio 1/2, provided that the eigenvalues on the unit circle are all semisimple. The linear convergence appears to be dominant, and the efficiency of the Newton iteration can be improved significantly by applying a double Newton step at the right time.
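The plain Newton (Hewer-style) iteration for the DARE can be sketched as follows; the 2×2 system is invented for illustration and is deliberately well behaved (no closed-loop eigenvalues on the unit circle), so convergence is quadratic and no double Newton step is needed. X0 = 0 is stabilizing here only because A itself has spectral radius below 1:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

A = np.array([[0.5, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

X = np.zeros((2, 2))
for _ in range(30):
    K = np.linalg.solve(R + B.T @ X @ B, B.T @ X @ A)   # feedback gain K_k
    Ak = A - B @ K                                      # closed-loop matrix
    # Newton step: solve the Stein equation X_new = Ak^T X_new Ak + Q + K^T R K
    X_new = solve_discrete_lyapunov(Ak.T, Q + K.T @ R @ K)
    if np.linalg.norm(X_new - X) < 1e-13:
        X = X_new
        break
    X = X_new

print(np.linalg.norm(X - solve_discrete_are(A, B, Q, R)))
```

Each Newton step is a discrete Lyapunov (Stein) solve; the paper's analysis concerns what happens to this iteration when Ak acquires unimodular eigenvalues at the solution.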
Row Straightening via Local Interactions
, 1996
Cited by 8 (3 self)
A number of agents can arrange themselves equidistantly in a row via a sequence of adjustments, based on a simple "local" interaction. The convergence of the configuration to the desired one is exponentially fast. A similarity is shown between this phenomenon and the dynamics of pulse propagation along a distributed RC line, and a conjecture is made concerning the evolution of a similar system with a probabilistic rule of behavior.
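The local rule can be simulated directly; this sketch (with invented parameters) uses a synchronous update in which each interior agent jumps to the midpoint of its two neighbors, and the deviation from equidistant spacing decays exponentially:

```python
import numpy as np

n = 11
rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0.0, 10.0, n))           # agents on a line
x[0], x[-1] = 0.0, 10.0                          # endpoints stay fixed

errors = []
for _ in range(400):
    # local rule: every interior agent moves to the midpoint of its neighbors
    x[1:-1] = 0.5 * (x[:-2] + x[2:])
    gaps = np.diff(x)
    errors.append(np.max(np.abs(gaps - 1.0)))    # target gap is 10/(n-1) = 1
print(errors[0], errors[-1])
```

This is a Jacobi iteration for the discrete Laplacian with fixed boundary values, so the error contracts by a factor cos(π/(n−1)) per step, which is the exponential convergence the abstract mentions.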
Spectral properties of random non-self-adjoint matrices and operators
, 2001
Cited by 7 (4 self)
We describe some numerical experiments which determine the degree of spectral instability of medium size randomly generated matrices which are far from self-adjoint. The conclusion is that the eigenvalues are likely to be intrinsically uncomputable for similar matrices of a larger size. We also describe a stochastic family of bounded operators in infinite dimensions for almost all of which the eigenvectors generate a dense linear subspace, but the eigenvalues do not determine the spectrum. Our results imply that the spectrum of the non-self-adjoint Anderson model changes suddenly as one passes to the infinite volume limit.
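A standard textbook illustration of this kind of spectral instability (not one of the paper's own experiments) is the n×n Jordan block: a single corner entry of size eps = 1e-12 moves every eigenvalue from 0 out to modulus eps**(1/n), about 0.25 for n = 20, vastly larger than the perturbation itself:

```python
import numpy as np

n = 20
eps = 1e-12
J = np.diag(np.ones(n - 1), k=1)       # nilpotent Jordan block, all eigenvalues 0
Jp = J.copy()
Jp[-1, 0] = eps                        # tiny perturbation in the corner

# the perturbed eigenvalues satisfy lambda^n = eps exactly,
# so they sit on a circle of radius eps**(1/n) around 0
moved = np.max(np.abs(np.linalg.eigvals(Jp)))
print(moved, eps ** (1.0 / n))
```

The same mechanism (huge resolvent norm far from the spectrum) is what makes eigenvalues of highly non-self-adjoint matrices effectively uncomputable as the dimension grows.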