Results 1–10 of 10
A Primal-Dual Potential Reduction Method for Problems Involving Matrix Inequalities
, 1995
Abstract

Cited by 92 (21 self)
We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worst-case analysis shows that the number of iterations grows as the square root of the problem size, but in practice it appears to grow more slowly. As in other interior-point methods, the overall computational effort is therefore dominated by the least-squares system that must be solved in each iteration. A type of conjugate-gradient algorithm can be used for this purpose, which results in important savings for two reasons. First, it allows us to take advantage of the special structure the problems often have (e.g., Lyapunov or algebraic Riccati inequalities). Second, we show that the polynomial bound on the number of iterations remains valid even if the conjugate-gradient algorithm is not run until completion, which in practice can greatly reduce the computational effort per iteration.
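The truncated conjugate-gradient idea can be illustrated on a generic least-squares subproblem. This is a minimal textbook CG on the normal equations, not the structured solver of the paper; `conjugate_gradient`, `M`, and `c` are illustrative names:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve the symmetric positive definite system A x = b by CG.

    The loop may be cut short via max_iter; the abstract's point is that
    the interior-point iteration bound survives such early termination.
    """
    n = b.size
    if max_iter is None:
        max_iter = 5 * n
    x = np.zeros(n)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Least-squares subproblem min ||M y - c||_2 via the normal equations.
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 4))
c = rng.standard_normal(8)
y = conjugate_gradient(M.T @ M, M.T @ c)
```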
Recursive Blocked Algorithms for Solving Triangular Systems, Part II: Two-Sided and Generalized Sylvester and Lyapunov Matrix Equations
 ACM Trans. Math. Software
, 2002
Abstract

Cited by 49 (9 self)
We continue our study of high-performance algorithms for solving triangular matrix equations. They appear naturally in different condition estimation problems for matrix equations and various eigenspace computations, and as reduced systems in standard algorithms. Building on our successful recursive approach applied to one-sided matrix equations (Part I), we now present novel recursive blocked algorithms for two-sided matrix equations, which include matrix product terms such as AXB^T. Examples are the discrete-time standard and generalized Sylvester and Lyapunov equations. The means for achieving high performance is the recursive variable blocking, which has the potential of matching the memory hierarchies of today's high-performance computing systems, and level-3 computations which mainly are performed as GEMM operations. Different implementation issues are discussed, including the design of efficient new algorithms for two-sided matrix products. We present uniprocessor and SMP parallel performance results of recursive blocked algorithms and routines in the state-of-the-art SLICOT library. Although our recursive algorithms with optimized kernels for the two-sided matrix equations perform more operations, the performance improvements are remarkable, including 10-fold speedups or more, compared to standard algorithms.
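The recursive variable blocking can be sketched on the simplest case, the one-sided triangular Sylvester equation AX + XB = C of Part I. This is a hypothetical sketch, not a SLICOT routine: splitting the upper triangular A in halves pushes most of the flops into the GEMM-style update `A12 @ X2`.

```python
import numpy as np

def rtrsyl(A, B, C):
    """Solve A X + X B = C with A upper triangular, by recursive blocking.

    Recursion on A only, for brevity; the Part II algorithms additionally
    handle two-sided product terms such as A X B^T.
    """
    m, n = C.shape
    if m == 1:
        # Row equation x (a I + B) = c: solve the transposed system.
        a = A[0, 0]
        return np.linalg.solve((a * np.eye(n) + B).T, C[0]).reshape(1, n)
    k = m // 2
    A11, A12, A22 = A[:k, :k], A[:k, k:], A[k:, k:]
    X2 = rtrsyl(A22, B, C[k:])                # solve the lower block first
    X1 = rtrsyl(A11, B, C[:k] - A12 @ X2)     # GEMM update, then upper block
    return np.vstack([X1, X2])

# Example with well-separated spectra so every subproblem is nonsingular.
rng = np.random.default_rng(1)
A = np.triu(rng.standard_normal((6, 6))) + 4 * np.eye(6)
B = rng.standard_normal((4, 4)) + 4 * np.eye(4)
C = rng.standard_normal((6, 4))
X = rtrsyl(A, B, C)
```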
Fast computation of graph kernels
 In
, 2007
Abstract

Cited by 29 (6 self)
Using extensions of linear algebra concepts to Reproducing Kernel Hilbert Spaces (RKHS), we define a unifying framework for random walk kernels on graphs. Reduction to a Sylvester equation allows us to compute many of these kernels in O(n^3) worst-case time. This includes kernels whose previous worst-case time complexity was O(n^6), such as the geometric kernels of Gärtner et al. [1] and the marginal graph kernels of Kashima et al. [2]. Our algebra in RKHS allows us to exploit sparsity in directed and undirected graphs more effectively than previous methods, yielding subcubic computational complexity when combined with conjugate gradient solvers or fixed-point iterations. Experiments on graphs from bioinformatics and other application domains show that our algorithms are often more than 1000 times faster than existing approaches.
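For instance, the geometric random-walk kernel of two small unlabeled graphs can be computed by a fixed-point iteration of the kind the abstract mentions, never forming the Kronecker system explicitly. A hedged sketch: the uniform start distribution, the all-ones stopping vector, and all names are assumptions of this example.

```python
import numpy as np

def geometric_walk_kernel(A1, A2, lam, iters=200, tol=1e-12):
    """Geometric random-walk kernel via the identity
    (A1 kron A2) vec(M) = vec(A2 @ M @ A1.T).

    Iterates M <- P + lam * A2 M A1^T, which converges to the solution of
    (I - lam * A1 kron A2) vec(M) = vec(P) when lam is small enough.
    """
    n1, n2 = A1.shape[0], A2.shape[0]
    P = np.full((n2, n1), 1.0 / (n1 * n2))   # uniform start distribution
    M = P.copy()
    for _ in range(iters):
        M_next = P + lam * (A2 @ M @ A1.T)
        if np.max(np.abs(M_next - M)) < tol:
            M = M_next
            break
        M = M_next
    return M.sum()   # all-ones stopping vector: k = 1^T (I - lam W)^{-1} p

# Kernel of the triangle graph with itself.
A = np.array([[0., 1, 1], [1, 0, 1], [1, 1, 0]])
k = geometric_walk_kernel(A, A, 0.1)
```

The iteration costs O(n^3) per sweep versus O(n^6) for solving the Kronecker system directly.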
An Exact Line Search Method for Solving Generalized Continuous-Time Algebraic Riccati Equations
 IEEE Trans. Automat. Control
, 1998
Abstract

Cited by 26 (11 self)
We present a Newton-like method for solving algebraic Riccati equations that uses exact line search to improve the sometimes erratic convergence behavior of Newton's method. It avoids the problem of a disastrously large first step and accelerates convergence when Newton steps are too small or too long. The additional work to perform the line search is small relative to the work needed to calculate the Newton step.

1 Introduction

We study the generalized continuous-time algebraic Riccati equation (CARE)

0 = R(X) = C^T Q C + A^T X E + E^T X A − (B^T X E + S^T C)^T R^{-1} (B^T X E + S^T C)   (1)

Here A, E, X ∈ R^{n×n}, B ∈ R^{n×m}, R = R^T ∈ R^{m×m}, Q = Q^T ∈ R^{p×p}, C ∈ R^{p×n}, and S ∈ R^{p×m}. We will assume that E is nonsingular, Q − S R^{-1} S^T ⪰ 0, and R ≻ 0, where M ≻ 0 (M ⪰ 0) denotes a positive (semi)definite matrix M. In principle, by inverting E, (1) may be reduced to the case E = I. This is conve...
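A minimal sketch of one such step for the standard case (E = I, S = 0, and C^T Q C written simply as Q, so R(X) = Q + A^T X + X A − X B R^{-1} B^T X). The Newton direction N solves a Lyapunov equation at the closed-loop matrix; the paper minimizes the quartic ||(1−t)R(X) − t²V||_F² in t exactly, for which a fine grid is a crude stand-in here.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def care_residual(X, A, B, Q, Rinv):
    return Q + A.T @ X + X @ A - X @ B @ Rinv @ B.T @ X

def newton_care_step(X, A, B, Q, Rinv):
    """One Newton step for the CARE, with a line search along N."""
    RX = care_residual(X, A, B, Q, Rinv)
    Ac = A - B @ Rinv @ B.T @ X                 # closed-loop matrix
    # Newton direction: Ac^T N + N Ac = -R(X)  (a Lyapunov equation).
    N = solve_sylvester(Ac.T, Ac, -RX)
    # Along X + t N:  R(X + t N) = (1 - t) R(X) - t^2 V.
    V = N @ B @ Rinv @ B.T @ N
    ts = np.linspace(0.0, 2.0, 2001)
    f = [np.linalg.norm((1 - t) * RX - t * t * V) for t in ts]
    return X + ts[int(np.argmin(f))] * N
```

Starting from a stabilizing X and repeating the step gives the damped iteration; the step length t is what guards against the disastrously large first step.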
Solving a Quadratic Matrix Equation by Newton’s Method with Exact Line Searches, Numerical Analysis Report 339
 Manchester Centre for Computational Mathematics
, 1999
"... with exact line searches ..."
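Only the title survives in this entry, but the iteration it names is easy to sketch: Newton's method for the quadratic matrix equation A X² + B X + C = 0, with the Fréchet-derivative equation A H X + (A X + B) H = −F(X) solved through Kronecker products (fine for small n). This sketch takes plain full Newton steps; the report's exact line search is omitted.

```python
import numpy as np

def newton_quadratic(A, B, C, X0, iters=20):
    """Newton's method for F(X) = A X^2 + B X + C = 0 (undamped steps)."""
    n = X0.shape[0]
    X = X0.copy()
    for _ in range(iters):
        F = A @ X @ X + B @ X + C
        # vec(A H X + (A X + B) H) = (X^T kron A + I kron (A X + B)) vec(H)
        J = np.kron(X.T, A) + np.kron(np.eye(n), A @ X + B)
        H = np.linalg.solve(J, -F.flatten(order='F')).reshape(n, n, order='F')
        X = X + H
    return X

# Toy instance: X^2 + X - 2I = 0 has the solution X = I.
I2 = np.eye(2)
X = newton_quadratic(I2, I2, -2 * I2, 1.5 * I2)
```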
Minimal Degree Coprime Factorization of Rational Matrices
, 1999
Abstract

Cited by 9 (7 self)
Given a rational matrix G with complex coefficients and a domain Ω in the closed complex plane, both arbitrary, we develop a complete theory of coprime factorizations of G over Ω, with denominators of McMillan degree as small as possible. The main tool is a general pole displacement theorem which gives conditions for an invertible rational matrix to displace by multiplication a part of the poles of G. We apply this result to obtain the parametrized class of all coprime factorizations over Ω with denominators of minimal McMillan degree n_b, the number of poles of G outside Ω. Specific choices of the parameters and of Ω allow us to determine coprime factorizations with, for instance, polynomial, proper, or stable factors. Further, we consider the case in which the denominator has a certain symmetry, namely it is J-all-pass with respect either to the imaginary axis or to the unit circle. We give necessary and sufficient solvability conditions for the problem of coprime factorization with a J-all-pass denominator of McMillan degree n_b and, when a solution exists, we give a construction of the class of coprime factors. When no such solution exists, we discuss the existence of, and give solutions to, coprime factorizations with J-all-pass denominators of minimal McMillan degree (> n_b). All the developments are carried out in terms of descriptor realizations associated with rational matrices, leading to explicit and computationally efficient formulas.
Numerical aspects of spatiotemporal current density reconstruction from EEG/MEG data
 IEEE Trans. Med. Imaging
Abstract

Cited by 7 (1 self)
Abstract—The determination of the sources of electric activity inside the brain from electric and magnetic measurements on the surface of the head is known to be an ill-posed problem. In this paper, a new algorithm which takes into account temporal a priori information modeled by the smooth activation model is described and compared with existing algorithms such as Tikhonov–Phillips. Index Terms—Electroencephalography, inverse source reconstruction, magnetoencephalography, spatiotemporal current density reconstruction.
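The Tikhonov–Phillips baseline mentioned above can be written in a few lines. A generic sketch with illustrative names: `L` plays the role of a lead-field matrix mapping source currents to sensor readings, and the paper's temporal smoothness prior is not included.

```python
import numpy as np

def tikhonov(L, b, alpha):
    """Regularized least squares: argmin_j ||L j - b||^2 + alpha ||j||^2,
    via the normal equations (L^T L + alpha I) j = L^T b."""
    n = L.shape[1]
    return np.linalg.solve(L.T @ L + alpha * np.eye(n), L.T @ b)

# Underdetermined toy problem: 32 sensors, 200 source locations.
rng = np.random.default_rng(0)
L = rng.standard_normal((32, 200))
b = L @ rng.standard_normal(200) + 0.01 * rng.standard_normal(32)
j = tikhonov(L, b, alpha=1e-2)
```

With far fewer sensors than sources the unregularized problem has infinitely many solutions; the alpha term selects the minimum-norm one, which is what makes the inverse problem tractable at all.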
An error estimate for matrix equations
 Applied Numerical Mathematics 50 (2004) 395–407
, 2004
Abstract
This paper proposes a new method for estimating the error in the solution of matrix equations. The estimate is based on the adjoint method in combination with small sample statistical theory. It can be implemented simply and is inexpensive to compute. Numerical examples are presented which illustrate the power and effectiveness of the new method. © 2004 IMACS. Published by Elsevier B.V. All rights reserved.
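The small-sample statistical ingredient is easy to illustrate in isolation: the 2-norm of an error vector can be estimated from random projections, since for z uniform on the unit sphere in R^n, E|z^T e| ≈ sqrt(2/(πn)) ||e||. A hedged sketch with illustrative names; in the paper the projections z^T e come from adjoint solves rather than from e itself.

```python
import numpy as np

def small_sample_norm_estimate(e, k=1000, rng=None):
    """Estimate ||e||_2 by averaging |z^T e| over random unit vectors z
    and rescaling by 1 / E|z^T u| ~ sqrt(pi * n / 2)."""
    rng = np.random.default_rng(rng)
    n = e.size
    total = 0.0
    for _ in range(k):
        g = rng.standard_normal(n)
        total += abs((g / np.linalg.norm(g)) @ e)   # uniform direction
    return (total / k) * np.sqrt(np.pi * n / 2.0)

# Demo: estimate the norm of a vector we happen to know.
e = np.random.default_rng(0).standard_normal(100)
est = small_sample_norm_estimate(e, k=1000, rng=1)
```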
A Complete Bibliography of ACM Transactions on Graphics
, 2013
"... Version 1.99 Title word cross-reference ..."