Results 1–10 of 30
Exponential integrators
, 2010
"... In this paper we consider the construction, analysis, implementation and application of exponential integrators. The focus will be on two types of stiff problems. The first one is characterized by a Jacobian that possesses eigenvalues with large negative real parts. Parabolic partial differential eq ..."
Abstract

Cited by 67 (5 self)
 Add to MetaCart
In this paper we consider the construction, analysis, implementation and application of exponential integrators. The focus will be on two types of stiff problems. The first one is characterized by a Jacobian that possesses eigenvalues with large negative real parts. Parabolic partial differential equations and their spatial discretization are typical examples. The second class consists of highly oscillatory problems with purely imaginary eigenvalues of large modulus. Apart from motivating the construction of exponential integrators for various classes of problems, our main intention in this article is to present the mathematics behind these methods. We will derive error bounds that are independent of stiffness or highest frequencies in the system. Since the implementation of exponential integrators requires the evaluation of the product of a matrix function with a vector, we will briefly discuss some possible approaches as well. The paper concludes with some applications, in
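The abstract's central implementation issue, evaluating the product of a matrix function with a vector, already appears in the simplest member of the family. As an illustrative sketch (our own, not taken from the paper), here is the exponential Euler method for a semilinear system u' = Au + g(u), using a dense matrix exponential; the test constants below are our own choices.

```python
import numpy as np
from scipy.linalg import expm, solve

def phi1(M):
    # phi_1(M) = M^{-1} (e^M - I); fine for nonsingular M, as in the stiff test below
    return solve(M, expm(M) - np.eye(M.shape[0]))

def exp_euler(A, g, u0, h, nsteps):
    # exponential Euler: u_{n+1} = e^{hA} u_n + h * phi_1(hA) g(u_n)
    E, P = expm(h * A), phi1(h * A)
    u = np.array(u0, dtype=float)
    for _ in range(nsteps):
        u = E @ u + h * (P @ g(u))
    return u
```

For constant g the scheme integrates u' = Au + b exactly, regardless of the stiffness of A, which makes it easy to check against the closed-form solution.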
Heterogeneous multiscale methods for stiff ordinary differential equations. 2003. Under review
"... Abstract. The heterogeneous multiscale methods (HMM) is a general framework for the numerical approximation of multiscale problems. It is here developed for ordinary differential equations containing different time scales. Stability and convergence results for the proposed HMM methods are presented ..."
Abstract

Cited by 45 (8 self)
 Add to MetaCart
(Show Context)
Abstract. The heterogeneous multiscale method (HMM) is a general framework for the numerical approximation of multiscale problems. It is here developed for ordinary differential equations containing different time scales. Stability and convergence results for the proposed HMM methods are presented together with numerical tests. The analysis covers some existing methods and the new algorithms that are based on higher-order estimates of the effective force by kernels satisfying certain moment conditions and regularity properties. These new methods have superior computational complexity compared to traditional methods for stiff problems with oscillatory solutions.
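One ingredient the abstract highlights is estimating the effective force with kernels that satisfy moment and regularity conditions. A minimal illustration (our own construction, not the paper's algorithms): a smooth, compactly supported kernel averages a fast oscillation out of a forcing term far more accurately than a naive window average would.

```python
import numpy as np

def trap(y, t):
    # composite trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def bump_kernel(t):
    # smooth kernel supported on (-1, 1); it vanishes to all orders at +-1,
    # so the oscillatory residue of the average decays very fast in omega
    return np.where(np.abs(t) < 1,
                    np.exp(-1.0 / np.maximum(1.0 - t**2, 1e-300)), 0.0)

def kernel_average(f, eta, n=200001):
    # (1/eta) int K(t/eta) f(t) dt over [-eta, eta], kernel normalised to unit mass
    t = np.linspace(-eta, eta, n)
    K = bump_kernel(t / eta)
    K /= trap(K, t)
    return trap(K * f(t), t)
```

Averaging a force f(t) = 1 + cos(1000 t + 0.3) over a unit window recovers the slow mean 1 essentially exactly, because the kernel's smoothness suppresses the oscillatory contribution.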
The Magnus expansion and some of its applications
, 2008
"... Approximate resolution of linear systems of differential equations with varying coefficients is a recurrent problem shared by a number of scientific and engineering areas, ranging from Quantum Mechanics to Control Theory. When formulated in operator or matrix form, the Magnus expansion furnishes an ..."
Abstract

Cited by 29 (3 self)
 Add to MetaCart
Approximate resolution of linear systems of differential equations with varying coefficients is a recurrent problem shared by a number of scientific and engineering areas, ranging from Quantum Mechanics to Control Theory. When formulated in operator or matrix form, the Magnus expansion furnishes an elegant setting in which to build up approximate exponential representations of the solution of the system. It provides a power series expansion for the corresponding exponent and is sometimes referred to as Time-Dependent Exponential Perturbation Theory. Every Magnus approximant corresponds in Perturbation Theory to a partial resummation of infinitely many terms, with the important additional property of preserving at any order certain symmetries of the exact solution. The goal of this review is threefold. First, to collect a number of developments scattered through half a century of scientific literature on the Magnus expansion. They concern the methods for the generation of terms in the expansion, estimates of the radius of convergence of the series, generalizations and related nonperturbative
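As a concrete sketch of the simplest Magnus approximant (our own illustration, not from the review): truncating the expansion after its first term and approximating the integral of A by the midpoint rule gives the second-order exponential midpoint scheme for Y' = A(t)Y.

```python
import numpy as np
from scipy.linalg import expm

def magnus_midpoint(A, Y0, T, nsteps):
    # lowest-order Magnus scheme for Y' = A(t) Y:
    #   Omega_n = h * A(t_n + h/2),   Y_{n+1} = expm(Omega_n) @ Y_n
    h = T / nsteps
    Y = np.array(Y0, dtype=float)
    for n in range(nsteps):
        Y = expm(h * A(n * h + h / 2)) @ Y
    return Y
```

When the values A(t) commute with one another, the exact solution is exp of the integral of A, which gives a clean correctness check (e.g. A(t) = t J for a fixed rotation generator J).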
Highly oscillatory quadrature: The story so far
 Proceedings of ENuMath, Santiago de Compostela (2006)
, 2006
"... Summary. The last few years have witnessed substantive developments in the computation of highly oscillatory integrals in one or more dimensions. The availability of new asymptotic expansions and a Stokestype theorem allow for a comprehensive analysis of a number of old (although enhanced) and new ..."
Abstract

Cited by 20 (1 self)
 Add to MetaCart
(Show Context)
Summary. The last few years have witnessed substantive developments in the computation of highly oscillatory integrals in one or more dimensions. The availability of new asymptotic expansions and a Stokes-type theorem allow for a comprehensive analysis of a number of old (although enhanced) and new quadrature techniques: the asymptotic, Filon-type and Levin-type methods. All these methods share the surprising property that their accuracy increases with growing oscillation. These developments are described in a unified fashion, taking the multivariate integral ∫_Ω f(x) e^{iωg(x)} dV as our point of departure. Rapid oscillation is ubiquitous in applications and is, by common consent, considered a 'difficult' problem. Indeed, the standard technique of dealing with high oscillation is to make it disappear by sampling the signal sufficiently frequently, and this typically leads to prohibitive cost. The subject of this article is a review of recent work on the computation of integrals of the form I[f,Ω] =
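A minimal sketch of a one-dimensional Filon-type rule (our own illustration, with g(x) = x): interpolate f piecewise-linearly and integrate the interpolant against e^{iωx} exactly, so the cost is independent of ω.

```python
import numpy as np

def filon_linear(f, a, b, omega, n=16):
    # Filon-type method: replace f by its piecewise-linear interpolant and
    # integrate p(x) * exp(i*omega*x) exactly on each subinterval via moments
    x = np.linspace(a, b, n + 1)
    fx = f(x)
    iw = 1j * omega
    total = 0.0 + 0.0j
    for k in range(n):
        xa, xb = x[k], x[k + 1]
        fa, fb = fx[k], fx[k + 1]
        ea, eb = np.exp(iw * xa), np.exp(iw * xb)
        m0 = (eb - ea) / iw                       # int e^{iwx} dx
        m1 = (xb * eb - xa * ea) / iw - m0 / iw   # int x e^{iwx} dx
        c1 = (fb - fa) / (xb - xa)                # slope of the interpolant
        total += (fa - c1 * xa) * m0 + c1 * m1
    return total
```

For a linear f the interpolant is exact, so the rule reproduces the closed-form value of the integral for any frequency; for smooth f the error even decays as ω grows.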
Numerical Integrators for Highly Oscillatory Hamiltonian Systems: A Review
"... Summary. Numerical methods for oscillatory, multiscale Hamiltonian systems are reviewed. The construction principles are described, and the algorithmic and analytical distinction between problems with nearly constant high frequencies and with time or statedependent frequencies is emphasized. Trig ..."
Abstract

Cited by 12 (1 self)
 Add to MetaCart
(Show Context)
Summary. Numerical methods for oscillatory, multiscale Hamiltonian systems are reviewed. The construction principles are described, and the algorithmic and analytical distinction between problems with nearly constant high frequencies and with time- or state-dependent frequencies is emphasized. Trigonometric integrators for the first case and adiabatic integrators for the second case are discussed in more detail.
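As a sketch of a trigonometric integrator of the kind reviewed (a standard Gautschi-type two-step scheme; the scalar test setup is ours): for x'' = −ω²x + g(x) the recurrence uses cos(hω) and a filtered forcing term, and it reproduces the free oscillation exactly for any step size, however large hω is.

```python
import numpy as np

def gautschi(omega, g, x0, x1, h, nsteps):
    # two-step trigonometric (Gautschi-type) integrator for x'' = -omega^2 x + g(x):
    #   x_{n+1} = 2 cos(h*omega) x_n - x_{n-1} + h^2 sinc^2(h*omega/2) g(x_n)
    c = 2.0 * np.cos(h * omega)
    s = np.sinc(h * omega / (2.0 * np.pi))**2   # np.sinc(z) = sin(pi z)/(pi z)
    xs = [x0, x1]
    for _ in range(nsteps - 1):
        xs.append(c * xs[-1] - xs[-2] + h * h * s * g(xs[-1]))
    return np.array(xs)
```

With g = 0 and exact starting values, the recurrence is the Chebyshev three-term identity for cos(n h ω), so the computed trajectory is the free oscillation exactly even when hω is far beyond the stability limit of a standard leapfrog step.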
On the Method of Neumann Series for Highly Oscillatory Equations
 BIT
, 2004
"... The main purpose of this paper is to describe and analyse techniques for the numerical solution of highily oscillatory ordinary di#erential equations by exploying a Neumann expansion. Once the variables in the di#erential system are changed with respect to a rapidly rotating frame of reference, the ..."
Abstract

Cited by 10 (3 self)
 Add to MetaCart
(Show Context)
The main purpose of this paper is to describe and analyse techniques for the numerical solution of highly oscillatory ordinary differential equations by exploiting a Neumann expansion. Once the variables in the differential system are changed with respect to a rapidly rotating frame of reference, the Neumann method becomes very effective indeed. However, this effectiveness rests upon suitable quadrature of highly oscillatory multivariate integrals, and we devote part of this paper to describing how to accomplish this to high accuracy with a modest computational effort.
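A bare-bones sketch of the Neumann (Picard) series itself, without the rotating-frame change of variables or the oscillatory quadrature the paper develops (grid sizes and the test matrix are our own choices): for Y' = A(t)Y, Y(0) = I, the series is I + ∫A + ∫A∫A + …, and each iterated integral can be accumulated with a cumulative trapezoidal rule.

```python
import numpy as np

def neumann_solve(A, T, nterms=12, ngrid=2001):
    # Neumann (Picard) series for Y' = A(t) Y, Y(0) = I:
    #   T_0 = I,  T_{k+1}(t) = int_0^t A(s) T_k(s) ds,  Y = sum_k T_k
    t = np.linspace(0.0, T, ngrid)
    m = A(0.0).shape[0]
    At = np.array([A(ti) for ti in t])                      # (ngrid, m, m)
    term = np.broadcast_to(np.eye(m), (ngrid, m, m)).copy()
    Y = term.copy()
    h = t[1] - t[0]
    for _ in range(nterms):
        integrand = At @ term                               # pointwise A(t) T_k(t)
        # cumulative trapezoidal integral along the time axis
        cum = np.concatenate([np.zeros((1, m, m)),
                              np.cumsum(0.5 * h * (integrand[1:] + integrand[:-1]),
                                        axis=0)])
        term = cum
        Y = Y + term
    return Y[-1]
```

For a constant rotation generator the truncated series converges to the matrix exponential, which gives a simple check; for genuinely highly oscillatory A the rotating-frame preprocessing described in the abstract is what makes the series practical.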
Think Globally, Act Locally: Solving Highly-Oscillatory Ordinary Differential Equations
, 2001
"... In this paper we explore the solution of highlyoscillatory differential equations, with a special reference to the linear oscillator y + g(t)y = 0, where g(t) t!1 \Gamma! +1. Commencing from a globalerror formula, we explore the accumulation of the error by RungeKutta and Magnus methods. Mot ..."
Abstract

Cited by 10 (4 self)
 Add to MetaCart
In this paper we explore the solution of highly-oscillatory differential equations, with a special reference to the linear oscillator y'' + g(t)y = 0, where g(t) → +∞ as t → ∞. Commencing from a global-error formula, we explore the accumulation of the error by Runge-Kutta and Magnus methods. Motivated by our analysis, we present a modification of the Magnus method which results in substantially better performance.
A Magnus expansion for the equation . . .
, 2000
"... The subject matter of this paper is the representation of the solution of the linear differential equation Y 0 = AY \Gamma Y B, Y (0) = Y0 , in the form Y (t) = e\Omega\Gamma t) Y0 and the representation of the function\Omega as a generalisation of the classical Magnus expansion. An immediate a ..."
Abstract

Cited by 5 (1 self)
 Add to MetaCart
The subject matter of this paper is the representation of the solution of the linear differential equation Y' = AY − YB, Y(0) = Y0, in the form Y(t) = e^{Ω(t)} Y0 and the representation of the function Ω as a generalisation of the classical Magnus expansion. An immediate application is a new recursive algorithm for the derivation of the Baker-Campbell-Hausdorff formula and its symmetric generalisation. 1 Introduction This paper is concerned with the solution of the linear ordinary differential system Y' = AY − YB, t ≥ 0, Y(0) = Y0, (1.1) where both A and B are Lipschitz functions that map [0, ∞) into M_m, the set of m × m matrices, and Y0 ∈ M_m. The equation (1.1) features in numerous applications and the approximation of its solution is of interest. Moreover, solutions of this equation often display interesting geometry. For example, B = A results in the isospectral flow Y' = AY − YA, t ≥ 0, Y(0) = Y0, (1.2) whose invariant...
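For constant A and B the solution of (1.1) has the closed form Y(t) = e^{tA} Y0 e^{−tB}, a convenient sanity check for any expansion of Ω. A small numerical verification of the identity (our own, not from the paper):

```python
import numpy as np
from scipy.linalg import expm

def closed_form(A, B, Y0, t):
    # for constant A, B the solution of Y' = A Y - Y B, Y(0) = Y0
    # is Y(t) = e^{tA} Y0 e^{-tB}
    return expm(t * A) @ Y0 @ expm(-t * B)
```

Differentiating e^{tA} Y0 e^{−tB} gives A Y − Y B directly, which can be confirmed with a central finite difference.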
A fourth order Magnus scheme for Helmholtz equation
 J. Comput. Appl. Math
, 2005
"... For wave propagation in a slowly varying waveguide, it is necessary to solve the Helmholtz equation in a domain that is much larger than the typical wavelength. Standard finite difference and finite element methods must resolve the small oscillatory behavior of the wave field and are prohibitively e ..."
Abstract

Cited by 3 (2 self)
 Add to MetaCart
(Show Context)
For wave propagation in a slowly varying waveguide, it is necessary to solve the Helmholtz equation in a domain that is much larger than the typical wavelength. Standard finite difference and finite element methods must resolve the small oscillatory behavior of the wave field and are prohibitively expensive for practical applications. A popular method is to approximate the waveguide by segments that are uniform in the propagation direction and use separation of variables in each segment. For a slowly varying waveguide, it is possible that the length of such a segment is much larger than the typical wavelength. To reduce memory requirements, it is advantageous to reformulate the boundary value problem of the Helmholtz equation as an initial value problem using a pair of operators. Such an operator-marching scheme can also be solved with the piecewise uniform approximation of the waveguide. This is related to the second-order midpoint exponential method for a system of linear ODEs. In this paper, we develop a fourth-order operator-marching scheme for the Helmholtz equation using a fourth-order Magnus method.
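A sketch of the fourth-order Magnus step the title refers to, in its standard two-node Gauss-Legendre form for an ODE system Y' = A(t)Y (the operator-marching machinery of the paper is not reproduced here; the test problem is our own):

```python
import numpy as np
from scipy.linalg import expm

def magnus4(A, Y0, T, nsteps):
    # fourth-order Magnus method with two Gauss-Legendre nodes:
    #   A1 = A(t_n + (1/2 - sqrt(3)/6) h),  A2 = A(t_n + (1/2 + sqrt(3)/6) h)
    #   Omega = (h/2)(A1 + A2) + (sqrt(3) h^2 / 12) [A2, A1]
    h = T / nsteps
    s = np.sqrt(3.0)
    Y = np.array(Y0, dtype=float)
    for n in range(nsteps):
        tn = n * h
        A1 = A(tn + (0.5 - s / 6.0) * h)
        A2 = A(tn + (0.5 + s / 6.0) * h)
        Omega = 0.5 * h * (A1 + A2) + (s * h * h / 12.0) * (A2 @ A1 - A1 @ A2)
        Y = expm(Omega) @ Y
    return Y
```

On a non-commuting test problem such as y'' + (1 + t) y = 0, written as a first-order system, halving the step should reduce the error by roughly a factor of 16, confirming the fourth order.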
A Priori Estimates for the Global Error Committed by Runge-Kutta Methods for a Nonlinear Oscillator
 LMS J. Comput. Math
, 2001
"... The AlekseevGröbner lemma is combined with the theory of modified equations to obtain an a priori estimate for the global error of numerical integrators. This estimate is correct up to a remainder term of order h^2p where h denotes the step size and p the order of the method. It is applied to a cla ..."
Abstract

Cited by 3 (1 self)
 Add to MetaCart
(Show Context)
The Alekseev-Gröbner lemma is combined with the theory of modified equations to obtain an a priori estimate for the global error of numerical integrators. This estimate is correct up to a remainder term of order h^{2p}, where h denotes the step size and p the order of the method. It is applied to a class of nonautonomous linear oscillatory equations, which includes the Airy equation, thereby improving prior work which only gave the h^p term. However, the result is not very surprising. Next, a single nonlinear...
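The h^p leading term of the global error is easy to observe numerically. A small demonstration (our own; it shows only the generic O(h^p) scaling, not the paper's refined h^{2p} remainder estimate): classical Runge-Kutta of order p = 4 applied to an Airy-type oscillator y'' + t y = 0.

```python
import numpy as np

def rk4(f, y0, T, nsteps):
    # classical fourth-order Runge-Kutta method
    h = T / nsteps
    y = np.array(y0, dtype=float)
    t = 0.0
    for _ in range(nsteps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Airy-type oscillator y'' + t y = 0 as a first-order system
f = lambda t, y: np.array([y[1], -t * y[0]])
```

Comparing the errors at two step sizes against a fine reference solution, the observed convergence order log2(e(h)/e(h/2)) should sit near p = 4.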