Results 1 - 7 of 7
A Primal-Dual Potential Reduction Method for Problems Involving Matrix Inequalities
, 1995
"... We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worstcase analysis shows that the number of iterations ..."
Abstract

Cited by 86 (21 self)
 Add to MetaCart
We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worst-case analysis shows that the number of iterations grows as the square root of the problem size, but in practice it appears to grow more slowly. As in other interior-point methods, the overall computational effort is therefore dominated by the least-squares system that must be solved in each iteration. A type of conjugate-gradient algorithm can be used for this purpose, which results in important savings for two reasons. First, it allows us to take advantage of the special structure the problems often have (e.g., Lyapunov or algebraic Riccati inequalities). Second, we show that the polynomial bound on the number of iterations remains valid even if the conjugate-gradient algorithm is not run until completion, which in practice can greatly reduce the computational effort per iteration.
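The paper's inner solver is not reproducible from the abstract alone, but the idea of truncating the inner iteration can be illustrated generically: once the least-squares system of an interior-point iteration is reduced to a symmetric positive definite system, conjugate gradients can be stopped early at a residual tolerance. A minimal sketch (the matrix, tolerance, and iteration cap are illustrative, not the paper's system):

```python
import numpy as np

def truncated_cg(A, b, tol=1e-8, maxit=None):
    """Conjugate gradients for a symmetric positive definite system A x = b,
    stopped early once the residual norm drops below `tol`."""
    n = b.size
    maxit = n if maxit is None else maxit
    x = np.zeros(n)
    r = b.copy()          # residual b - A x for x = 0
    p = r.copy()          # search direction
    rs = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:   # truncation: stop before full convergence
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Loosening `tol` (or lowering `maxit`) trades accuracy of the inner solve for cheaper iterations, which is the effect the abstract says still preserves the polynomial iteration bound.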
Recursive Blocked Algorithms for Solving Triangular Systems - Part II: Two-Sided and Generalized Sylvester and Lyapunov Matrix Equations
 ACM Trans. Math. Software
, 2002
"... We continue our study of highperformance algorithms for solving triangular matrix equations. They appear naturally in different condition estimation problems for matrix equations and various eigenspace computations, and as reduced systems in standard algorithms. Building on our successful recursive ..."
Abstract

Cited by 47 (9 self)
 Add to MetaCart
We continue our study of high-performance algorithms for solving triangular matrix equations. They appear naturally in different condition estimation problems for matrix equations and various eigenspace computations, and as reduced systems in standard algorithms. Building on our successful recursive approach applied to one-sided matrix equations (Part I), we now present novel recursive blocked algorithms for two-sided matrix equations, which include matrix product terms such as AXB^T. Examples are the discrete-time standard and generalized Sylvester and Lyapunov equations. The means for achieving high performance is the recursive variable blocking, which has the potential of matching the memory hierarchies of today's high-performance computing systems, and level-3 computations, which are mainly performed as GEMM operations. Different implementation issues are discussed, including the design of efficient new algorithms for two-sided matrix products. We present uniprocessor and SMP parallel performance results for recursive blocked algorithms and routines in the state-of-the-art SLICOT library. Although our recursive algorithms with optimized kernels for the two-sided matrix equations perform more operations, the performance improvements are remarkable, including 10-fold speedups or more compared to standard algorithms.
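The recursive blocking idea is easiest to see on the one-sided triangular Sylvester equation A X + X B = C treated in Part I (the paper's two-sided routines are not reproduced here): splitting the triangular matrix into 2x2 blocks yields two half-size equations plus one GEMM-style update. A hedged sketch, with `scipy.linalg.solve_sylvester` standing in for an optimized small-case kernel and an illustrative block size:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def rec_trsyl(A, B, C, blksz=8):
    """Recursively solve A X + X B = C where A is upper triangular
    (and eig(A) + eig(B) avoids zero). Splitting A into 2x2 blocks gives
    two half-size Sylvester equations plus one GEMM update; small cases
    are handed to a dense solver."""
    m = A.shape[0]
    if m <= blksz:
        return solve_sylvester(A, B, C)
    k = m // 2
    A11, A12, A22 = A[:k, :k], A[:k, k:], A[k:, k:]
    X2 = rec_trsyl(A22, B, C[k:], blksz)             # bottom block first
    X1 = rec_trsyl(A11, B, C[:k] - A12 @ X2, blksz)  # GEMM update, then top
    return np.vstack([X1, X2])
```

The performance benefit the abstract describes comes from the `A12 @ X2` updates being large matrix-matrix (GEMM) operations that dominate the flop count and map well onto cache hierarchies.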
An Exact Line Search Method for Solving Generalized Continuous-Time Algebraic Riccati Equations
 IEEE Trans. Automat. Control
, 1998
"... We present a Newtonlike method for solving algebraic Riccati equations that uses exact line search to improve the sometimes erratic convergence behavior of Newton's method. It avoids the problem of a disastrously large first step and accelerates convergence when Newton steps are too small or too l ..."
Abstract

Cited by 23 (11 self)
 Add to MetaCart
We present a Newton-like method for solving algebraic Riccati equations that uses exact line search to improve the sometimes erratic convergence behavior of Newton's method. It avoids the problem of a disastrously large first step and accelerates convergence when Newton steps are too small or too long. The additional work to perform the line search is small relative to the work needed to calculate the Newton step.

1 Introduction

We study the generalized continuous-time algebraic Riccati equation (CARE)

0 = R(X) = C^T Q C + A^T X E + E^T X A - (B^T X E + S^T C)^T R^{-1} (B^T X E + S^T C)   (1)

Here A, E, X ∈ R^{n×n}, B ∈ R^{n×m}, R = R^T ∈ R^{m×m}, Q = Q^T ∈ R^{p×p}, C ∈ R^{p×n}, and S ∈ R^{p×m}. We will assume that E is nonsingular, Q - S R^{-1} S^T ≥ 0, and R > 0, where M > 0 (M ≥ 0) denotes a positive (semi)definite matrix M. In principle, by inverting E, (1) may be reduced to the case E = I. This is conve...
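A rough sketch of the Newton-plus-exact-line-search idea, specialized to the standard case E = I, S = 0 (so R(X) = Q̃ + A^T X + X A - X G X with Q̃ = C^T Q C and G = B R^{-1} B^T): along the Newton direction N one has R(X + tN) = (1 - t) R(X) - t² N G N, so the merit function ‖R(X + tN)‖_F is a quartic in t and can be minimized cheaply. The grid search, tolerances, and starting guess below are illustrative choices, not the authors' algorithm:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def care_newton_els(A, B, Q, R, X0=None, tol=1e-10, maxit=100):
    """Newton's method with an exact line search for the standard CARE
    0 = Q + A^T X + X A - X G X,  G = B R^{-1} B^T  (case E = I, S = 0).
    X0 must be stabilizing; X0 = 0 works when A itself is stable."""
    n = A.shape[0]
    G = B @ np.linalg.solve(R, B.T)
    X = np.zeros((n, n)) if X0 is None else np.asarray(X0, float).copy()
    resid = lambda X: Q + A.T @ X + X @ A - X @ G @ X
    for _ in range(maxit):
        RX = resid(X)
        if np.linalg.norm(RX, 'fro') < tol:
            break
        Ac = A - G @ X                       # closed-loop matrix
        # Newton step: solve the Lyapunov equation Ac^T N + N Ac = -R(X)
        N = solve_continuous_lyapunov(Ac.T, -RX)
        # Exact line search: R(X + t N) = (1 - t) R(X) - t^2 N G N, so
        # ||R(X + t N)||_F is a quartic in t; minimize it on a grid over [0, 2].
        V = N @ G @ N
        ts = np.linspace(0.0, 2.0, 2001)
        t = ts[np.argmin([np.linalg.norm((1 - s) * RX - s * s * V, 'fro')
                          for s in ts])]
        X = X + t * N
    return X
```

Plain Newton corresponds to fixing t = 1; the search over t ∈ [0, 2] is what damps the disastrously large first step and lengthens steps that are too short.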
Solving a Quadratic Matrix Equation by Newton’s Method with Exact Line Searches, Numerical Analysis Report 339
 Manchester Centre for Computational Mathematics
, 1999
"... with exact line searches ..."
Minimal Degree Coprime Factorization of Rational Matrices
, 1999
"... Given a rational matrix G with complex coe#cients and a domain # in the closed complex plane, both arbitrary, we develop a complete theory of coprime factorizations of G over #, with denominators of McMillan degree as small as possible. The main tool is a general pole displacement theorem which give ..."
Abstract

Cited by 8 (6 self)
 Add to MetaCart
Given a rational matrix G with complex coefficients and a domain in the closed complex plane, both arbitrary, we develop a complete theory of coprime factorizations of G over that domain, with denominators of McMillan degree as small as possible. The main tool is a general pole displacement theorem which gives conditions for an invertible rational matrix to displace, by multiplication, a part of the poles of G. We apply this result to obtain the parametrized class of all coprime factorizations over the domain with denominators of minimal McMillan degree n_b, the number of poles of G outside the domain. Specific choices of the parameters and of the domain allow us to determine coprime factorizations with, for instance, polynomial, proper, or stable factors. Further, we consider the case in which the denominator has a certain symmetry, namely it is J-allpass with respect either to the imaginary axis or to the unit circle. We give necessary and sufficient solvability conditions for the problem of coprime factorization with a J-allpass denominator of McMillan degree n_b and, when a solution exists, we give a construction of the class of coprime factors. When no such solution exists, we discuss the existence of, and give solutions to, coprime factorizations with J-allpass denominators of minimal McMillan degree (> n_b). All the developments are carried out in terms of descriptor realizations associated with rational matrices, leading to explicit and computationally efficient formulas.
Numerical Aspects of Spatio-Temporal Current Density Reconstruction from EEG/MEG Data
 IEEE Trans. Med. Imag
, 2001
"... The determination of the sources of electric activity inside the brain from electric and magnetic measurements on the surface of the head is known to be an illposed problem. ..."
Abstract

Cited by 4 (1 self)
 Add to MetaCart
The determination of the sources of electric activity inside the brain from electric and magnetic measurements on the surface of the head is known to be an ill-posed problem.
New Perturbation Bounds For Sylvester Equations
"... The sensitivity of Sylvester matrix equations relative to perturbations in the coefficientmatricesisstudied. New local perturbations bounds are obtained. Keywords. Perturbation analysis, Sylvester equations. 1 Introduction In this paper we study the sensitivityofSylvester matrix equations (SME) ar ..."
Abstract
 Add to MetaCart
The sensitivity of Sylvester matrix equations relative to perturbations in the coefficient matrices is studied. New local perturbation bounds are obtained.

Keywords. Perturbation analysis, Sylvester equations.

1 Introduction

In this paper we study the sensitivity of Sylvester matrix equations (SME) arising in linear systems theory. A new local perturbation bound for SME is obtained, which is nonlinear, first-order homogeneous, and tighter than the local bounds based on condition numbers [1]-[5]. The following notations are used later on: R^{m×n}, the space of real m × n matrices; I_n, the unit n × n matrix; A^T = [a_ji], the transpose of the matrix A = [a_ij]; vec(A) ∈ R^{mn}, the column-wise vector representation of the matrix A ∈ R^{m×n}; A ⊗ B = [a_ij B], the Kronecker product of the matrices A and B; ||·||_2, the spectral (or 2-) norm in R^{m×n}; ||·||_F, the Frobenius (or F-) norm in R^{m×n}. The notation ':=' stands for 'equal ...
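The condition-number-based local bounds that this paper tightens can be illustrated via the Kronecker form: for A X + X B = C one has vec(AX + XB) = (I_n ⊗ A + B^T ⊗ I_m) vec(X), so a first-order bound on ‖δX‖_F follows from the smallest singular value of that Kronecker matrix. A minimal sketch (the example matrices and perturbation sizes are illustrative, not from the paper):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(1)
m, n = 4, 3
A = rng.standard_normal((m, m)) + m * np.eye(m)   # shifts keep eig(A) + eig(B) away from 0
B = rng.standard_normal((n, n)) + n * np.eye(n)
C = rng.standard_normal((m, n))
X = solve_sylvester(A, B, C)                      # A X + X B = C

# Kronecker (vectorized) form, column-major vec: vec(AX + XB) = K vec(X).
K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(m))
sep = np.linalg.svd(K, compute_uv=False)[-1]      # smallest singular value

# Perturb the data and compare the actual change in X with the first-order
# condition-number bound (||dA||_2 ||X||_F + ||X||_F ||dB||_2 + ||dC||_F) / sep.
eps = 1e-8
dA = eps * rng.standard_normal((m, m))
dB = eps * rng.standard_normal((n, n))
dC = eps * rng.standard_normal((m, n))
Xp = solve_sylvester(A + dA, B + dB, C + dC)
actual = np.linalg.norm(Xp - X, 'fro')
bound = (np.linalg.norm(dA, 2) * np.linalg.norm(X, 'fro')
         + np.linalg.norm(X, 'fro') * np.linalg.norm(dB, 2)
         + np.linalg.norm(dC, 'fro')) / sep
```

Bounds of this shape can substantially over-estimate the actual perturbation, which is what motivates the tighter nonlinear local bound the abstract announces.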