Results 1–10 of 55
Krylov subspace methods for linear systems with tensor product structure
SIAM J. Matrix Anal. Appl.
Cited by 31 (7 self)
Abstract. The numerical solution of linear systems with certain tensor product structures is considered. Such structures arise, for example, from the finite element discretization of a linear PDE on a d-dimensional hypercube. Linear systems with tensor product structure can be regarded as linear matrix equations for d = 2 and appear to be their most natural extension for d > 2. A standard Krylov subspace method applied to such a linear system suffers from the curse of dimensionality and has a computational cost that grows exponentially with d. The key to breaking the curse is to note that the solution can often be very well approximated by a vector of low tensor rank. We propose and analyse a new class of methods, so-called tensor Krylov subspace methods, which exploit this fact and attain a computational cost that grows linearly with d.
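The d = 2 equivalence described in this abstract is easy to check numerically. The sketch below (an illustration with arbitrary test matrices, not the paper's tensor Krylov method) shows that the Kronecker-structured system (A1 ⊗ I + I ⊗ A2) vec(X) = vec(C) is exactly the Sylvester matrix equation A2 X + X A1^T = C, which can be solved without ever forming the n² × n² system matrix:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n = 5
A1 = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonal shifts keep the
A2 = rng.standard_normal((n, n)) + n * np.eye(n)   # Kronecker sum nonsingular
C = rng.standard_normal((n, n))

# d = 2 tensor-structured system: (A1 kron I + I kron A2) vec(X) = vec(C);
# its size n^2 is what grows exponentially with the dimension d.
K = np.kron(A1, np.eye(n)) + np.kron(np.eye(n), A2)
x_kron = np.linalg.solve(K, C.flatten(order="F"))  # column-major vec(C)

# ...which is exactly the Sylvester matrix equation A2 X + X A1^T = C,
# solvable at matrix size n instead of n^2.
X = solve_sylvester(A2, A1.T, C)
assert np.allclose(x_kron, X.flatten(order="F"))
```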
A new investigation of the extended Krylov subspace method for matrix function evaluations
Numer. Linear Algebra Appl., 2010
Cited by 29 (4 self)
Abstract. For large square matrices A and functions f, the numerical approximation of the action of f(A) to a vector v has received considerable attention in the last two decades. In this paper we investigate the Extended Krylov subspace method, a technique that was recently proposed to approximate f(A)v for A symmetric. We provide a new theoretical analysis of the method, which improves the original result for A symmetric, and gives a new estimate for A nonsymmetric. Numerical experiments confirm that the new error estimates correctly capture the linear asymptotic convergence rate of the approximation. By using recent algorithmic improvements, we also show that the method is computationally competitive with respect to other enhancement techniques.
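For context, the baseline that the extended method improves on is the standard polynomial Krylov approximation of f(A)v via the Arnoldi process. The sketch below is a minimal dense illustration (not the paper's extended method, which would also use powers of A^{-1}); the function name and the choice f = expm are our own:

```python
import numpy as np
from scipy.linalg import expm

def krylov_fAv(A, v, m, f=expm):
    """Approximate f(A) @ v from the order-m polynomial Krylov space
    span{v, A v, ..., A^(m-1) v} via the Arnoldi process.

    This is the standard (non-extended) Krylov method: f is applied
    only to the small m x m Hessenberg matrix H_m, never to A itself.
    """
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown: invariant subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    # f(A) v ~ beta * V_m f(H_m) e_1
    return beta * V[:, :m] @ f(Hm)[:, 0]
```

With m equal to the full dimension the approximation reproduces f(A)v exactly, which gives a simple correctness check.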
A Riemannian optimization approach for computing low-rank solutions of Lyapunov equations
2009
Cited by 26 (4 self)
We propose a new framework based on optimization on manifolds to approximate the solution of a Lyapunov matrix equation by a low-rank matrix. The method minimizes the error on the Riemannian manifold of symmetric positive semidefinite matrices of fixed rank. We detail how objects from differential geometry, like the Riemannian gradient and Hessian, can be efficiently computed for this manifold. As the minimization algorithm we use the Riemannian Trust-Region method of [Found. Comput. Math., 7 (2007), pp. 303–330], based on a second-order model of the objective function on the manifold. Together with an efficient preconditioner, this method can find low-rank solutions with very little memory. We illustrate our results with numerical examples.
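The low-rank approximability that fixed-rank manifold methods rely on can be checked numerically. The sketch below (our own illustration, not from the paper; the test matrix is an arbitrary stable diagonal example) solves a small Lyapunov equation exactly and inspects how fast the singular values of the solution decay:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Exact solution of A X + X A^T = -B B^T for a simple stable test matrix,
# then a look at the singular value decay of X: the rapid decay is what
# makes fixed-rank (and low-rank ADI / Krylov) methods effective.
n = 100
A = -np.diag(np.linspace(1.0, 100.0, n))      # stable: eigenvalues in [-100, -1]
B = np.ones((n, 1))                           # rank-1 right-hand side
X = solve_continuous_lyapunov(A, -B @ B.T)
s = np.linalg.svd(X, compute_uv=False)
rank20_error = s[20] / s[0]   # relative best rank-20 error in the spectral norm
```

Although X is a full 100 x 100 matrix, a rank-20 factorization already represents it to far better than single precision, so storing a tall factor instead of X costs very little accuracy.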
Numerical solution of large-scale Lyapunov equations, Riccati equations, and linear-quadratic optimal control problems
Numer. Linear Algebra Appl., 2008; 15:1–23
Cited by 18 (2 self)
We study large-scale, continuous-time linear time-invariant control systems with a sparse or structured state matrix and a relatively small number of inputs and outputs. The main contributions of this paper are numerical algorithms for the solution of large algebraic Lyapunov and Riccati equations and linear-quadratic optimal control problems, which arise from such systems. First, we review an ADI-based method to compute approximate low-rank Cholesky factors of the solution matrix of large-scale Lyapunov equations, and we propose a refined version of this algorithm. Second, a combination of this method with a variant of Newton's method (in this context also called Kleinman iteration) results in an algorithm for the solution of large-scale Riccati equations. Third, we describe an implicit version of this algorithm for the solution of linear-quadratic optimal control problems, which computes the feedback directly without solving the underlying algebraic Riccati equation explicitly. Our algorithms are efficient with respect to both memory and computation. In particular, they can be applied to problems of very large scale, where square, dense matrices of the system order cannot be stored in computer memory. We study the performance of our algorithms in numerical experiments.
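The low-rank ADI idea reviewed in this abstract can be sketched in a few lines. The version below follows the Li/White formulation with real negative shifts and is a dense illustration only (the function name is ours; large-scale implementations solve the shifted systems with sparse factorizations and choose shifts carefully):

```python
import numpy as np

def lr_adi_lyapunov(A, B, shifts):
    """Low-rank ADI for A X + X A^T + B B^T = 0 with A stable.

    Builds Z block column by block column so that Z @ Z.T approximates X.
    Assumes real negative shifts. One shifted linear solve per iteration,
    reusing the previous block (Li/White recursion).
    """
    n = A.shape[0]
    I = np.eye(n)
    p = shifts[0]
    V = np.sqrt(-2.0 * p) * np.linalg.solve(A + p * I, B)
    blocks = [V]
    for p_prev, p in zip(shifts, shifts[1:]):
        V = np.sqrt(p / p_prev) * (V - (p + p_prev) * np.linalg.solve(A + p * I, V))
        blocks.append(V)
    return np.hstack(blocks)
```

For a symmetric A, choosing the shifts equal to the eigenvalues makes the iteration exact, which is a convenient sanity check; in practice only a handful of well-chosen shifts is used and Z stays thin.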
A Galerkin-Newton-ADI Method for Solving Large-Scale Algebraic Riccati Equations
2010
Cited by 15 (11 self)
The alternating directions implicit (ADI) iteration has proven to be a highly efficient method for solving stable large-scale Lyapunov equations when applied to compute low-rank factors of the solution. Employing Newton-ADI or Newton-Kleinman-ADI methods for solving algebraic Riccati equations (AREs), one has to solve a stable large-scale Lyapunov equation in every Newton step. It has been shown that the sparse-plus-low-rank structure of the Lyapunov equation in the Newton step can easily be incorporated into the low-rank ADI iteration. Still, the convergence speed of the ADI iteration depends strongly on certain shift parameters. In this paper we discuss a hybrid Galerkin-ADI approach that can drastically accelerate the ADI iteration when good shifts are unknown or hard to compute. The same ideas can be applied to accelerate the inexact Newton iteration resulting from the approximate, iterative solution of the Lyapunov equations.
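The Newton-Kleinman outer loop that this paper accelerates can be sketched densely with scipy. This is a simplified illustration under our own naming, with the inner Lyapunov equation solved exactly by a dense solver rather than by low-rank ADI as in the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def kleinman_are(A, B, Q, iters=10):
    """Newton-Kleinman iteration for the ARE
        A^T X + X A - X B B^T X + Q = 0   (input weight R = I).

    Each Newton step solves one Lyapunov equation with the closed-loop
    matrix A - B K. Starting from K = 0 assumes A itself is stable;
    otherwise an initial stabilizing feedback would be required.
    """
    K = np.zeros((B.shape[1], A.shape[0]))
    for _ in range(iters):
        Ak = A - B @ K                                   # closed-loop matrix
        # Kleinman step: solve Ak^T X + X Ak = -(Q + K^T K)
        X = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ K))
        K = B.T @ X                                      # updated feedback gain
    return X
```

Starting from a stabilizing gain, the iterates converge quadratically to the stabilizing ARE solution, so a handful of Newton steps suffices; the cost is dominated by the Lyapunov solve in each step, which is exactly what the ADI and Galerkin-ADI machinery replaces at large scale.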
Numerical Solution of Large and Sparse Continuous Time Algebraic Matrix Riccati and Lyapunov Equations: A State of the Art Survey
2013
A Survey of Model Reduction Methods for Parametric Systems
2013
Cited by 12 (4 self)
Numerical simulation of large-scale dynamical systems plays a fundamental role in studying a wide range of complex physical phenomena; however, the inherent large-scale nature of the models leads to unmanageable demands on computational resources. Model reduction aims to reduce this computational burden by generating reduced models that are faster and cheaper to simulate, yet accurately represent the original large-scale system behavior. Model reduction of linear, nonparametric dynamical systems has reached a considerable level of maturity, as reflected by several survey papers and books. However, parametric model reduction has emerged only more recently as an important and vibrant research area, with several recent advances making a survey paper timely. Thus, this paper aims to provide a resource that draws together recent contributions in different communities to survey the state of the art in parametric model reduction methods. Parametric model reduction targets the broad class of problems for which the equations governing the system behavior depend on a set of parameters. Examples include parameterized partial differential equations and large-scale systems of parameterized ordinary differential equations ...