Results 1–10 of 68
Theory and Methods in Political Science. 2nd ed. Series: Political Analysis. Houndmills, 2002
"... secrets to the success of the Rush–Larsen ..."
COMPUTING THE ACTION OF THE MATRIX EXPONENTIAL, WITH AN APPLICATION TO EXPONENTIAL INTEGRATORS
, 2010
Cited by 31 (9 self)
Abstract:
A new algorithm is developed for computing e^{tA}B, where A is an n × n matrix and B is n × n0 with n0 ≪ n. The algorithm works for any A, its computational cost is dominated by the formation of products of A with n × n0 matrices, and the only input parameter is a backward error tolerance. The algorithm can return a single matrix e^{tA}B or a sequence e^{t_k A}B on an equally spaced grid of points t_k. It uses the scaling part of the scaling and squaring method together with a truncated Taylor series approximation to the exponential. It determines the amount of scaling and the Taylor degree using the recent analysis of Al-Mohy and Higham [SIAM J. Matrix Anal. Appl. 31 (2009), pp. 970–989], which provides sharp truncation error bounds expressed in terms of the quantities ‖A^k‖_1^{1/k} for a few values of k, where the norms are estimated using a matrix norm estimator. Shifting and balancing are used as preprocessing steps to reduce the cost of the algorithm. Numerical experiments show that the algorithm performs in a numerically stable fashion across a wide range of problems, and analysis of rounding errors and of the conditioning of the problem provides theoretical support. Experimental comparisons with two Krylov-based MATLAB codes show the new algorithm to be sometimes much superior in terms of computational cost and accuracy. An important application of the algorithm is to exponential integrators for ordinary differential equations. It is shown that the sums of the form ∑_{k=0}^{p} ϕ_k(A) u_k that arise in exponential integrators, where the ϕ_k are related to the exponential function, can be expressed in terms of a single exponential of a matrix of dimension n + p built by augmenting A with additional rows and columns, and the algorithm of this paper can therefore be employed.
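The augmented-matrix idea in the abstract's last sentence can be illustrated for the simplest case p = 1: the top-right block of the exponential of the augmented matrix [[A, b], [0, 0]] equals ϕ_1(A)b, where ϕ_1(z) = (e^z − 1)/z. A minimal sketch (my own illustration using NumPy/SciPy; the function name is mine, not from the paper):

```python
import numpy as np
from scipy.linalg import expm, solve

def phi1_times_vector(A, b):
    """Compute phi_1(A) b, with phi_1(z) = (e^z - 1)/z, from a single
    exponential of the (n+1) x (n+1) augmented matrix [[A, b], [0, 0]]."""
    n = A.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A
    M[:n, n] = b
    return expm(M)[:n, n]       # top-right block, read off column-wise

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
b = rng.standard_normal(5)

y = phi1_times_vector(A, b)
# Reference value via phi_1(A) b = A^{-1}(e^A - I) b (A nonsingular here)
y_ref = solve(A, expm(A) @ b - b)
print(np.linalg.norm(y - y_ref))
```

The same construction extends to p > 1 by augmenting with a p × p nilpotent block, which is how the paper reduces the whole ϕ-sum to one exponential.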
THE EXPONENTIALLY CONVERGENT TRAPEZOIDAL RULE
Cited by 17 (3 self)
Abstract. It is well known that the trapezoidal rule converges geometrically when applied to analytic functions on periodic intervals or the real line. The mathematics and history of this phenomenon are reviewed and it is shown that far from being a curiosity, it is linked with computational methods all across scientific computing, including algorithms related to inverse Laplace transforms, special functions, complex analysis, rational approximation, integral equations, and the computation of functions and eigenvalues of matrices and operators.
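The geometric convergence described here is easy to observe numerically. A short sketch (the integrand 1/(2 + cos x) is my own example, not taken from the paper; its exact integral over one period is 2π/√3):

```python
import numpy as np

def trap_periodic(f, N):
    """N-point trapezoidal (equispaced) rule on [0, 2*pi) for a
    2*pi-periodic integrand; the endpoints coincide, so a plain mean suffices."""
    x = 2 * np.pi * np.arange(N) / N
    return 2 * np.pi * np.mean(f(x))

f = lambda x: 1.0 / (2.0 + np.cos(x))
exact = 2 * np.pi / np.sqrt(3)      # closed form for this integrand

errs = [abs(trap_periodic(f, N) - exact) for N in (4, 8, 16, 32)]
print(errs)     # errors decay geometrically in N, down to rounding level
```

The decay rate is set by the width of the strip of analyticity of the integrand around the real axis, exactly the mechanism the paper surveys.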
RESIDUAL, RESTARTING AND RICHARDSON ITERATION FOR THE MATRIX EXPONENTIAL
Cited by 12 (2 self)
Abstract. A well-known problem in computing some matrix functions iteratively is the lack of a clear, commonly accepted residual notion. An important matrix function for which this is the case is the matrix exponential. Suppose the matrix exponential of a given matrix times a given vector has to be computed. We develop the approach of Druskin, Greenbaum and Knizhnerman (1998) and interpret the sought-after vector as the value of a vector function satisfying the linear system of ordinary differential equations (ODE) whose coefficients form the given matrix. The residual is then defined with respect to the initial value problem for this ODE system. The residual introduced in this way can be seen as a backward error. We show how the residual can be computed efficiently within several iterative methods for the matrix exponential. This resolves the question of reliable stopping criteria for these methods. Further, we show that the residual concept can be used to construct new residual-based iterative methods. In particular, a variant of the Richardson method for the new residual appears to provide an efficient way to restart Krylov subspace methods for evaluating the matrix exponential.
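The residual notion can be made concrete on a simple example: for the truncated Taylor approximation y_m(t) = ∑_{k=0}^{m} (t^k/k!) A^k v of y(t) = e^{tA} v, the ODE residual r_m(t) = A y_m(t) − y_m'(t) collapses to A applied to the last retained Taylor term, so it is computable without knowing the exact solution. A minimal sketch (my own illustration, not the authors' code):

```python
import numpy as np
from scipy.linalg import expm

def taylor_with_residual(A, v, t, m):
    """m-term Taylor approximation y_m(t) of e^{tA} v together with its
    ODE residual r_m(t) = A y_m(t) - y_m'(t) = A (t^m/m!) A^m v."""
    y = v.copy()
    term = v.copy()                   # current Taylor term (t^k/k!) A^k v
    for k in range(1, m + 1):
        term = (t / k) * (A @ term)
        y = y + term
    return y, A @ term                # residual: no exact solution needed

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
A /= np.linalg.norm(A, 2)             # normalize so ||A|| = 1
v = rng.standard_normal(6)
exact = expm(A) @ v

for m in (4, 8, 12):
    y, r = taylor_with_residual(A, v, 1.0, m)
    print(m, np.linalg.norm(exact - y), np.linalg.norm(r))
```

As the abstract suggests, the residual norm tracks the (unknown) error and can serve as a stopping criterion.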
A TRIGONOMETRIC METHOD FOR THE LINEAR STOCHASTIC WAVE EQUATION
Cited by 6 (0 self)
Abstract. A fully discrete approximation of the linear stochastic wave equation driven by additive noise is presented. A standard finite element method is used for the spatial discretisation and a stochastic trigonometric scheme for the temporal approximation. This explicit time integrator allows for error bounds independent of the space discretisation and thus does not have a step size restriction as in the often-used Störmer–Verlet leapfrog scheme. Moreover, it enjoys a trace formula, as does the exact solution of our problem. These favourable properties are demonstrated with numerical experiments.
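The claim about the missing step-size restriction can be checked on the scalar model problem x'' = −ω²x, the deterministic part of each mode of the semi-discrete wave equation (the noise enters additively per step). The trigonometric one-step map is an exact rotation and conserves the energy v² + ω²x² for any h, while leapfrog requires hω < 2. A sketch under these assumptions (my own illustration, not the authors' code):

```python
import numpy as np

def trig_step(x, v, w, h):
    """Deterministic part of the stochastic trigonometric scheme for
    x'' = -w^2 x: an exact rotation, stable for every step size h."""
    c, s = np.cos(h * w), np.sin(h * w)
    return c * x + (s / w) * v, -w * s * x + c * v

def leapfrog_step(x, v, w, h):
    """One Stormer-Verlet / leapfrog step; stable only for h * w < 2."""
    v_half = v - 0.5 * h * w**2 * x
    x_new = x + h * v_half
    return x_new, v_half - 0.5 * h * w**2 * x_new

w, h = 50.0, 0.1                      # h * w = 5, beyond the leapfrog limit
energy = lambda x, v: v**2 + w**2 * x**2

xt, vt = 1.0, 0.0                     # trigonometric scheme state
xl, vl = 1.0, 0.0                     # leapfrog state
for _ in range(50):
    xt, vt = trig_step(xt, vt, w, h)
    xl, vl = leapfrog_step(xl, vl, w, h)

print(energy(xt, vt), energy(xl, vl))   # conserved (2500) vs. blow-up
```

With noise added, the rotation structure is what gives the scheme the exact trace formula mentioned in the abstract: the deterministic map preserves the energy, and each noise increment adds its variance.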
Numerical study of blow-up and stability of solutions of generalized Kadomtsev–Petviashvili equations
 J. Nonlinear Sci
Cited by 5 (3 self)
Abstract. We first review the known mathematical results concerning the KP-type equations. Then we perform numerical simulations to analyze various qualitative properties of the equations: blow-up versus long-time behavior, stability and instability of solitary waves.
An exponential integrator for a highly oscillatory Vlasov equation. Discrete
, 2014
Efficient and stable Arnoldi restarts for matrix functions based on quadrature
, 2013
Cited by 4 (0 self)
Abstract. When using the Arnoldi method for approximating f(A)b, the action of a matrix function on a vector, the maximum number of iterations that can be performed is often limited by the storage requirements of the full Arnoldi basis. As a remedy, different restarting algorithms have been proposed in the literature, none of which is universally applicable, efficient, and stable at the same time. We utilize an integral representation for the error of the iterates in the Arnoldi method, which then allows us to develop an efficient quadrature-based restarting algorithm suitable for a large class of functions, including the so-called Stieltjes functions and the exponential function. Our method is applicable for functions of Hermitian and non-Hermitian matrices, requires no a priori spectral information, and runs with essentially constant computational work per restart cycle. We comment on the relation of this new restarting approach to other existing algorithms and illustrate its efficiency and numerical stability by various numerical experiments.
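The object being restarted can be sketched in a few lines: m Arnoldi steps produce an orthonormal basis V_m of the Krylov space and a small Hessenberg matrix H_m, and f(A)b is approximated by ‖b‖ V_m f(H_m) e_1. A minimal unrestarted sketch with f = exp (my own illustration, not the authors' quadrature-based restart code):

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_fAb(A, b, m, f=expm):
    """m-step Arnoldi approximation f(A) b ~ ||b|| V_m f(H_m) e_1."""
    n = b.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12 * beta:       # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    return beta * V[:, :m] @ f(H[:m, :m])[:, 0]   # f(H_m) e_1

rng = np.random.default_rng(2)
n = 400
A = -np.diag(np.linspace(0.1, 10.0, n))      # simple stable test matrix
b = rng.standard_normal(n)

err = np.linalg.norm(arnoldi_fAb(A, b, 30) - expm(A) @ b)
print(err)
```

The storage problem the abstract mentions is visible here: V grows by one length-n column per iteration, which is exactly what restarting is designed to avoid.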
A short guide to exponential Krylov subspace time integration for Maxwell’s equations
, 2012
Cited by 3 (0 self)
Exponential time integration, i.e., time integration which involves the matrix exponential, is an attractive tool for solving Maxwell’s equations in time. However, its application in practice often requires substantial knowledge of numerical linear algebra algorithms, in particular of Krylov subspace methods. This note provides a brief guide on how to apply exponential Krylov subspace time integration in practice. Although we consider Maxwell’s equations, the guide can readily be used for other similar time-dependent problems. In particular, we discuss in detail the Arnoldi shift-and-invert method combined with a recently introduced residual-based stopping criterion. Two of the algorithms described here are available as MATLAB codes and can be downloaded from the website