Results 1–9 of 9
A Primal–Dual Potential Reduction Method for Problems Involving Matrix Inequalities
 Mathematical Programming, Vol. 69
, 1995
Abstract

Cited by 87 (21 self)
We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worst-case analysis shows that the number of iterations grows as the square root of the problem size, but in practice it appears to grow more slowly. As in other interior-point methods, the overall computational effort is therefore dominated by the least-squares system that must be solved in each iteration. A type of conjugate-gradient algorithm can be used for this purpose, which results in important savings for two reasons. First, it allows us to take advantage of the special structure the problems often have (e.g., Lyapunov or algebraic Riccati inequalities). Second, we show that the polynomial bound on the number of iterations remains valid even if the conjugate-gradient algorithm is not run until completion, which in practice can greatly reduce the computational effort per iteration.
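The inner solve this abstract refers to can be sketched generically. The following is a minimal textbook conjugate-gradient iteration in pure Python, not the authors' structured least-squares solver; the `max_iter` cut-off stands in for the truncated (not-run-to-completion) CG the abstract describes, and the 2-by-2 system at the end is an illustrative example, not from the paper.

```python
# Minimal conjugate-gradient sketch for an SPD system H x = g.  A generic CG
# on a small dense matrix; the max_iter cut-off mimics stopping the inner
# solver before completion, as discussed in the abstract.

def cg(H, g, max_iter=50, tol=1e-10):
    n = len(g)
    x = [0.0] * n
    r = list(g)                      # residual r = g - H x (x = 0 initially)
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Hp = [sum(H[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Hp[i] for i in range(n))
        for i in range(n):
            x[i] += alpha * p[i]
            r[i] -= alpha * Hp[i]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        beta = rs_new / rs
        p = [r[i] + beta * p[i] for i in range(n)]
        rs = rs_new
    return x

H = [[4.0, 1.0], [1.0, 3.0]]
g = [1.0, 2.0]
x = cg(H, g)                         # exact solution is (1/11, 7/11)
```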
On the Mechanics of Forming and Estimating Dynamic Linear Economies
Abstract

Cited by 51 (14 self)
This paper catalogues formulas that are useful for estimating dynamic linear economic models. We describe algorithms for computing equilibria of an economic model and for recursively computing a Gaussian likelihood function and its gradient with respect to parameters. We apply these methods to several example economies.
Recursive Blocked Algorithms for Solving Triangular Systems, Part II: Two-Sided and Generalized Sylvester and Lyapunov Matrix Equations
 ACM Trans. Math. Software
, 2002
Abstract

Cited by 47 (9 self)
We continue our study of high-performance algorithms for solving triangular matrix equations. They appear naturally in different condition estimation problems for matrix equations and various eigenspace computations, and as reduced systems in standard algorithms. Building on our successful recursive approach applied to one-sided matrix equations (Part I), we now present novel recursive blocked algorithms for two-sided matrix equations, which include matrix product terms such as AXB^T. Examples are the discrete-time standard and generalized Sylvester and Lyapunov equations. The means for achieving high performance are the recursive variable blocking, which has the potential of matching the memory hierarchies of today's high-performance computing systems, and level-3 computations, which are mainly performed as GEMM operations. Different implementation issues are discussed, including the design of efficient new algorithms for two-sided matrix products. We present uniprocessor and SMP parallel performance results of recursive blocked algorithms and routines in the state-of-the-art SLICOT library. Although our recursive algorithms with optimized kernels for the two-sided matrix equations perform more operations, the performance improvements are remarkable, including 10-fold speedups or more compared to standard algorithms.
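The recursive idea this abstract builds on can be illustrated on the simplest case, the one-sided triangular Sylvester equation of Part I. The sketch below, in pure Python, splits the equation into half-sized subproblems plus a GEMM-like update; it is a toy version assuming strictly upper-triangular real A and B, not the paper's two-sided or quasi-triangular algorithms.

```python
# Recursive splitting sketch for the one-sided triangular Sylvester equation
# A X + X B = C, with A (m x m) and B (n x n) upper triangular.  The two-sided
# equations of Part II add product terms such as A X B^T; this minimal version
# only illustrates the recursion itself.

def solve_tri_sylvester(A, B, C):
    m, n = len(A), len(B)
    if n == 1:
        b = B[0][0]
        x = [0.0] * m
        for i in range(m - 1, -1, -1):   # back-substitution: (A + b I) x = c
            s = sum(A[i][j] * x[j] for j in range(i + 1, m))
            x[i] = (C[i][0] - s) / (A[i][i] + b)
        return [[xi] for xi in x]
    k = n // 2                           # split B (and the columns of X, C)
    B11 = [row[:k] for row in B[:k]]
    B12 = [row[k:] for row in B[:k]]
    B22 = [row[k:] for row in B[k:]]
    C1 = [row[:k] for row in C]
    C2 = [row[k:] for row in C]
    X1 = solve_tri_sylvester(A, B11, C1)          # A X1 + X1 B11 = C1
    # update C2 <- C2 - X1 B12: a GEMM-like level-3 operation
    C2u = [[C2[i][j] - sum(X1[i][p] * B12[p][j] for p in range(k))
            for j in range(n - k)] for i in range(m)]
    X2 = solve_tri_sylvester(A, B22, C2u)         # A X2 + X2 B22 = C2 - X1 B12
    return [r1 + r2 for r1, r2 in zip(X1, X2)]

A = [[1.0, 2.0], [0.0, 3.0]]
B = [[4.0, 1.0], [0.0, 5.0]]
C = [[11.0, 21.0], [21.0, 35.0]]   # = A X + X B for X = [[1, 2], [3, 4]]
X = solve_tri_sylvester(A, B, C)
```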
Approximation with Kronecker Products
 Linear Algebra for Large Scale and Real Time Applications
, 1993
Abstract

Cited by 23 (1 self)
Let A be an m-by-n matrix with m = m1 m2 and n = n1 n2. We consider the problem of finding B ∈ ℝ^{m1×n1} and C ∈ ℝ^{m2×n2} so that ‖A − B ⊗ C‖_F is minimized. This problem can be solved by computing the largest singular value and associated singular vectors of a permuted version of A. If A is symmetric, definite, nonnegative, or banded, then the minimizing B and C are similarly structured. The idea of using Kronecker product preconditioners is briefly discussed.

1 Introduction

Suppose A ∈ ℝ^{m×n} with m = m1 m2 and n = n1 n2. This paper is about the minimization of φ_A(B, C) = ‖A − B ⊗ C‖²_F where B ∈ ℝ^{m1×n1}, C ∈ ℝ^{m2×n2}, and "⊗" denotes the Kronecker product. Our interest in this problem stems from preliminary experience with Kronecker product preconditioners in the conjugate gradient setting. Suppose A ∈ ℝ^{n×n} with n = n1 n2 and that M is the preconditioner. For this solution process...
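The permutation mentioned in the abstract can be demonstrated concretely: rearranging A so that each entry of B indexes a row and each entry of C a column turns the nearest-Kronecker-product problem into a rank-1 approximation. The pure-Python sketch below uses a simple power iteration in place of a proper SVD, and its index layout is one conventional choice assumed for illustration, not necessarily the paper's.

```python
# Sketch: min ||A - kron(B, C)||_F is a rank-1 approximation of a permuted A.
# A simple power iteration stands in for computing the dominant singular pair;
# a production code would use an SVD.

def rearrange(A, m1, n1, m2, n2):
    # R has one row per entry of B and one column per entry of C:
    # R[i1*n1 + j1][i2*n2 + j2] = A[i1*m2 + i2][j1*n2 + j2]
    R = [[0.0] * (m2 * n2) for _ in range(m1 * n1)]
    for i1 in range(m1):
        for j1 in range(n1):
            for i2 in range(m2):
                for j2 in range(n2):
                    R[i1 * n1 + j1][i2 * n2 + j2] = A[i1 * m2 + i2][j1 * n2 + j2]
    return R

def nearest_kron(A, m1, n1, m2, n2, iters=100):
    R = rearrange(A, m1, n1, m2, n2)
    rows, cols = len(R), len(R[0])
    v = [1.0] * cols
    for _ in range(iters):                      # power iteration on R^T R
        u = [sum(R[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        w = [sum(R[i][j] * u[i] for i in range(rows)) for j in range(cols)]
        nrm = sum(x * x for x in w) ** 0.5
        v = [x / nrm for x in w]
    u = [sum(R[i][j] * v[j] for j in range(cols)) for i in range(rows)]
    sigma = sum(x * x for x in u) ** 0.5        # dominant singular value
    u = [x / sigma for x in u]
    s = sigma ** 0.5                            # split sigma between B and C
    B = [[s * u[i1 * n1 + j1] for j1 in range(n1)] for i1 in range(m1)]
    C = [[s * v[i2 * n2 + j2] for j2 in range(n2)] for i2 in range(m2)]
    return B, C

# If A is exactly a Kronecker product, the rearranged matrix has rank 1 and
# the minimizer reproduces A exactly.
B0 = [[1.0, 2.0], [3.0, 4.0]]
C0 = [[0.0, 1.0], [1.0, 0.0]]
A = [[B0[i1][j1] * C0[i2][j2] for j1 in range(2) for j2 in range(2)]
     for i1 in range(2) for i2 in range(2)]
B, C = nearest_kron(A, 2, 2, 2, 2)
```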
The Shifted Hessenberg System Solve Computation
 Cornell Theory Center
, 1995
Abstract

Cited by 6 (3 self)
We present methods for improving data reuse in solving sequences of linear systems whose coefficient matrices are a Hessenberg matrix shifted by a sequence of scalar multiples of the identity or of a triangular matrix. The methods take into consideration the robust handling of overflow and include new condition estimation strategies. We provide timings on both scalar and vector machines to demonstrate both the diversity and importance of these ideas. We also ...
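The basic kernel behind such shifted solves can be sketched simply: for an upper Hessenberg H, each system (H − τI)x = b costs only O(n²) because Gaussian elimination has a single subdiagonal to annihilate. The pure-Python version below keeps only adjacent-row partial pivoting and omits the robust overflow handling and condition estimation the abstract emphasizes; it is a generic illustration, not the paper's method.

```python
# O(n^2) solve of (H - tau*I) x = b for an upper Hessenberg H.  Elimination
# only needs to zero the subdiagonal; overflow guards and condition
# estimation are deliberately omitted from this sketch.

def shifted_hessenberg_solve(H, tau, b):
    n = len(b)
    M = [[H[i][j] - (tau if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    c = list(b)
    for k in range(n - 1):
        # partial pivoting between the two candidate rows k and k+1
        if abs(M[k + 1][k]) > abs(M[k][k]):
            M[k], M[k + 1] = M[k + 1], M[k]
            c[k], c[k + 1] = c[k + 1], c[k]
        if M[k + 1][k] != 0.0:
            mult = M[k + 1][k] / M[k][k]
            for j in range(k, n):
                M[k + 1][j] -= mult * M[k][j]
            c[k + 1] -= mult * c[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):   # back-substitution on the triangular factor
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / M[i][i]
    return x

H = [[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 4.0]]
x = shifted_hessenberg_solve(H, 1.0, [2.0, 4.0, 4.0])
```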
Direct methods for matrix Sylvester and Lyapunov equations
 J. Appl. Math
, 2002
Abstract

Cited by 6 (0 self)
In this paper we revisit the two standard dense methods for matrix Sylvester and Lyapunov equations: the Bartels–Stewart method for A1 X + X A2 + D = 0 and Hammarling's method for A X + X A^T + B B^T = 0 with A stable. We construct three schemes for solving the unitarily reduced quasi-triangular systems. We also construct a new rank-1 updating scheme in Hammarling's method. This new scheme is able to accommodate a B with more columns than rows, as well as the usual case of a B with more rows than columns, whereas Hammarling's original scheme needs to separate these two cases. We compare all of our schemes with the MATLAB Sylvester and Lyapunov solver lyap.m; the results show that our schemes are much more efficient. We also compare our schemes with the Lyapunov solver sllyap in SLICOT, currently perhaps the most efficient control library package; numerical results show our scheme to be competitive.
Keywords: Bartels–Stewart algorithm; Hammarling's algorithm; triangular matrix equation; real arithmetic; columnwise elimination; rank-1 update
AMS subject classifications: 15A24, 15A06, 65F30, 65F05
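The columnwise elimination named in the keywords can be illustrated on the triangular stage of the Bartels–Stewart method: once A1 and A2 are (quasi-)triangular, A1 X + X A2 + D = 0 is solved one column of X at a time. The pure-Python toy below assumes strictly upper-triangular real A1 and A2; the paper's schemes also handle the 2x2 bumps of the real Schur form, which this sketch does not.

```python
# Columnwise elimination for the reduced triangular Sylvester equation
# A1 X + X A2 + D = 0, with A1 (m x m) and A2 (n x n) upper triangular.
# Each column of X is obtained from a shifted triangular back-substitution.

def tri_sylvester_columnwise(A1, A2, D):
    m, n = len(A1), len(A2)
    X = [[0.0] * n for _ in range(m)]
    for j in range(n):                       # columns left to right
        # rhs = -D[:, j] - sum_{i<j} A2[i][j] * X[:, i]
        rhs = [-D[r][j] - sum(A2[i][j] * X[r][i] for i in range(j))
               for r in range(m)]
        a = A2[j][j]
        for r in range(m - 1, -1, -1):       # solve (A1 + a I) x_j = rhs
            s = sum(A1[r][k] * X[k][j] for k in range(r + 1, m))
            X[r][j] = (rhs[r] - s) / (A1[r][r] + a)
    return X

A1 = [[1.0, 2.0], [0.0, 3.0]]
A2 = [[4.0, 1.0], [0.0, 5.0]]
D = [[-11.0, -21.0], [-21.0, -35.0]]   # chosen so X = [[1, 2], [3, 4]]
X = tri_sylvester_columnwise(A1, A2, D)
```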
The Synchronization Problem in Protocol Testing and Its Complexity
 Information Processing Letters, Vol. 40
, 1991
"... primal-dual potential reduction method for ..."
Notes on Linear Control Theory
Abstract
this paper use routines from the FORTRAN packages LAPACK, LINPACK, and RICPACK. All of these packages can be obtained by anonymous ftp from netlib.att.com and various mirrors. MATLAB is a commercial matrix algebra package available from The MathWorks, Inc. All of our FORTRAN routines are implemented as MATLAB MEX-files.
Linear Algebra and its Applications 429 (2008) 2293–2314
, 2007