Results 1–10 of 15
Heterogeneous multiscale methods for stiff ordinary differential equations. 2003. Under review
"... Abstract. The heterogeneous multiscale methods (HMM) is a general framework for the numerical approximation of multiscale problems. It is here developed for ordinary differential equations containing different time scales. Stability and convergence results for the proposed HMM methods are presented ..."
Abstract

Cited by 16 (4 self)
Abstract. The heterogeneous multiscale method (HMM) is a general framework for the numerical approximation of multiscale problems. It is here developed for ordinary differential equations containing different time scales. Stability and convergence results for the proposed HMM methods are presented together with numerical tests. The analysis covers some existing methods and the new algorithms that are based on higher-order estimates of the effective force by kernels satisfying certain moment conditions and regularity properties. These new methods have superior computational complexity compared to traditional methods for stiff problems with oscillatory solutions.
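As a toy illustration of the HMM idea (not the paper's specific algorithm): estimate the effective force by a kernel-weighted average of the fast force over a short micro window, then advance with a coarse macro step. The test problem, kernel, and all parameter values below are illustrative choices.

```python
import numpy as np

def hmm_step(x, t, H, eps, f, eta, n_micro=200):
    """One macro step of a toy HMM for x' = f(x, t/eps): the effective force
    is a kernel-weighted average of the fast force over a micro window."""
    s = np.linspace(t, t + eta, n_micro)          # micro time grid
    w = np.sin(np.pi * (s - t) / eta) ** 2        # smooth kernel, vanishing at the ends
    w /= w.sum()
    F = np.sum(w * f(x, s / eps))                 # estimated effective force
    return x + H * F                              # forward-Euler macro step

# Toy problem: x' = sin(t/eps)^2; the averaged equation is x' = 1/2.
eps, H, T = 1e-4, 0.1, 1.0
f = lambda x, tau: np.sin(tau) ** 2
x, t = 0.0, 0.0
while t < T - 1e-12:
    x = hmm_step(x, t, H, eps, f, eta=20 * eps)   # micro window spans ~6 fast periods
    t += H
print(x)   # close to the averaged solution x(1) = 0.5
```

The point of the construction is the cost: the micro window has fixed length 20·eps regardless of how small eps is, so the total work does not grow as the problem gets stiffer.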
Exponential integrators
, 2010
"... In this paper we consider the construction, analysis, implementation and application of exponential integrators. The focus will be on two types of stiff problems. The first one is characterized by a Jacobian that possesses eigenvalues with large negative real parts. Parabolic partial differential eq ..."
Abstract

Cited by 16 (1 self)
In this paper we consider the construction, analysis, implementation and application of exponential integrators. The focus will be on two types of stiff problems. The first one is characterized by a Jacobian that possesses eigenvalues with large negative real parts. Parabolic partial differential equations and their spatial discretization are typical examples. The second class consists of highly oscillatory problems with purely imaginary eigenvalues of large modulus. Apart from motivating the construction of exponential integrators for various classes of problems, our main intention in this article is to present the mathematics behind these methods. We will derive error bounds that are independent of stiffness or highest frequencies in the system. Since the implementation of exponential integrators requires the evaluation of the product of a matrix function with a vector, we will briefly discuss some possible approaches as well. The paper concludes with some applications, in ...
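The simplest member of this class is the exponential Euler method, u_{n+1} = e^{hA} u_n + h φ1(hA) g(u_n) for u' = Au + g(u), sketched here on a made-up stiff linear example (the matrix and forcing are illustrative choices):

```python
import numpy as np
from scipy.linalg import expm, solve

def phi1(Z):
    """phi_1(Z) = Z^{-1}(e^Z - I); a direct solve is fine for small nonsingular Z
    (production codes use Pade- or Krylov-based evaluations instead)."""
    return solve(Z, expm(Z) - np.eye(Z.shape[0]))

def exp_euler(A, g, u0, h, steps):
    """Exponential Euler for u' = A u + g(u): u_{n+1} = e^{hA} u_n + h phi1(hA) g(u_n)."""
    E, P = expm(h * A), phi1(h * A)
    u = u0.copy()
    for _ in range(steps):
        u = E @ u + h * (P @ g(u))
    return u

# Made-up stiff test: eigenvalues -1000 and -1, constant forcing.
A = np.array([[-1000.0, 0.0], [0.0, -1.0]])
b = np.array([1000.0, 1.0])
u = exp_euler(A, lambda u: b, np.zeros(2), h=0.1, steps=50)
print(u)   # approaches the steady state -A^{-1} b = (1, 1)
```

Note that h = 0.1 is enormous relative to the fast eigenvalue -1000; the error bound being independent of the stiffness is exactly what the abstract claims for this class of methods.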
Think Globally, Act Locally: Solving Highly-Oscillatory Ordinary Differential Equations
, 2001
"... In this paper we explore the solution of highlyoscillatory differential equations, with a special reference to the linear oscillator y + g(t)y = 0, where g(t) t!1 \Gamma! +1. Commencing from a globalerror formula, we explore the accumulation of the error by RungeKutta and Magnus methods. Mot ..."
Abstract

Cited by 8 (3 self)
In this paper we explore the solution of highly-oscillatory differential equations, with a special reference to the linear oscillator y'' + g(t)y = 0, where g(t) → +∞ as t → ∞. Commencing from a global-error formula, we explore the accumulation of the error by Runge-Kutta and Magnus methods. Motivated by our analysis, we present a modification of the Magnus method which results in substantially better performance.
The Magnus expansion and some of its applications
, 2008
"... Approximate resolution of linear systems of differential equations with varying coefficients is a recurrent problem shared by a number of scientific and engineering areas, ranging from Quantum Mechanics to Control Theory. When formulated in operator or matrix form, the Magnus expansion furnishes an ..."
Abstract

Cited by 7 (0 self)
Approximate resolution of linear systems of differential equations with varying coefficients is a recurrent problem shared by a number of scientific and engineering areas, ranging from Quantum Mechanics to Control Theory. When formulated in operator or matrix form, the Magnus expansion furnishes an elegant setting to build up approximate exponential representations of the solution of the system. It provides a power series expansion for the corresponding exponent and is sometimes referred to as Time-Dependent Exponential Perturbation Theory. Every Magnus approximant corresponds in Perturbation Theory to a partial resummation of infinite terms with the important additional property of preserving at any order certain symmetries of the exact solution. The goal of this review is threefold. First, to collect a number of developments scattered through half a century of scientific literature on the Magnus expansion. They concern the methods for the generation of terms in the expansion, estimates of the radius of convergence of the series, generalizations and related nonperturbative ...
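The lowest-order member of the family is easy to sketch: truncating the Magnus series for the exponent after its first term and sampling at the interval midpoint gives the second-order exponential midpoint rule for Y' = A(t)Y. The Airy-type test problem below is an illustrative choice.

```python
import numpy as np
from scipy.linalg import expm

def magnus2(A, Y0, t0, t1, n):
    """Second-order Magnus method for Y' = A(t) Y: the series for the exponent
    is truncated after its first term, sampled at the interval midpoint."""
    h = (t1 - t0) / n
    Y, t = Y0.copy(), t0
    for _ in range(n):
        Y = expm(h * A(t + 0.5 * h)) @ Y
        t += h
    return Y

# Airy-type oscillator y'' + t y = 0 as a first-order system (illustrative choice).
A = lambda t: np.array([[0.0, 1.0], [-t, 0.0]])
Y = magnus2(A, np.eye(2), 0.0, 10.0, 400)
print(np.linalg.det(Y))   # stays 1: each factor is the exponential of a traceless matrix
```

The determinant check illustrates the symmetry-preservation property mentioned in the abstract: because each step is a true matrix exponential of a traceless matrix, the numerical flow stays in SL(2) exactly, whatever the step size.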
On the Method of Neumann Series for Highly Oscillatory Equations
 BIT
, 2004
"... The main purpose of this paper is to describe and analyse techniques for the numerical solution of highily oscillatory ordinary di#erential equations by exploying a Neumann expansion. Once the variables in the di#erential system are changed with respect to a rapidly rotating frame of reference, the ..."
Abstract

Cited by 6 (1 self)
The main purpose of this paper is to describe and analyse techniques for the numerical solution of highly oscillatory ordinary differential equations by exploiting a Neumann expansion. Once the variables in the differential system are changed with respect to a rapidly rotating frame of reference, the Neumann method becomes very effective indeed. However, this effectiveness rests upon suitable quadrature of highly oscillatory multivariate integrals, and we devote part of this paper to describing how to accomplish this to high accuracy with a modest computational effort.
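A minimal sketch of a truncated Neumann (Picard) series for Y' = A(t)Y — without the rotating-frame change of variables or the oscillatory quadratures that the paper actually develops; plain cumulative trapezoidal quadrature on a grid is used instead.

```python
import numpy as np

def neumann(A, t, m, terms):
    """Truncated Neumann series for Y' = A(s) Y, Y(0) = I, evaluated at time t.
    Each term is one more nested integral, computed here by cumulative
    trapezoidal quadrature on m+1 grid points."""
    s = np.linspace(0.0, t, m + 1)
    h = t / m
    As = np.array([A(si) for si in s])                # A sampled on the grid
    Y = np.tile(np.eye(As.shape[1]), (m + 1, 1, 1))   # zeroth term: the identity
    term = Y.copy()
    for _ in range(terms):
        integrand = As @ term                         # A(u) term(u), batched matmul
        cum = np.zeros_like(term)
        for k in range(1, m + 1):
            cum[k] = cum[k - 1] + 0.5 * h * (integrand[k - 1] + integrand[k])
        term = cum                                    # the next nested integral
        Y = Y + term
    return Y[-1]

# Scalar sanity check with A(s) = 1 (as a 1x1 matrix): the series rebuilds e^t.
Y = neumann(lambda s: np.array([[1.0]]), 1.0, m=200, terms=12)
print(Y[0, 0])   # ~ e = 2.71828...
```

For a genuinely oscillatory A(t) this naive trapezoidal rule is exactly what breaks down, which is why the paper's quadrature for highly oscillatory multivariate integrals is the crux of the method.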
Numerical Integrators for Highly Oscillatory Hamiltonian Systems: A Review
"... Summary. Numerical methods for oscillatory, multiscale Hamiltonian systems are reviewed. The construction principles are described, and the algorithmic and analytical distinction between problems with nearly constant high frequencies and with time or statedependent frequencies is emphasized. Trig ..."
Abstract

Cited by 6 (0 self)
Summary. Numerical methods for oscillatory, multiscale Hamiltonian systems are reviewed. The construction principles are described, and the algorithmic and analytical distinction between problems with nearly constant high frequencies and with time- or state-dependent frequencies is emphasized. Trigonometric integrators for the first case and adiabatic integrators for the second case are discussed in more detail.
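For the nearly-constant-frequency case, a minimal sketch of a Gautschi-type trigonometric integrator for x'' + ω²x = g(x); the filter function sinc²(hω/2) used below is one common choice among several discussed in this literature.

```python
import numpy as np

def sinc(z):
    return np.sinc(z / np.pi)   # sin(z)/z with sinc(0) = 1 (np.sinc is normalized)

def gautschi(omega, g, x0, v0, h, steps):
    """Gautschi-type two-step scheme for x'' + omega^2 x = g(x):
    x_{n+1} = 2 cos(h w) x_n - x_{n-1} + h^2 sinc(h w / 2)^2 g(x_n).
    The oscillation is treated exactly, so h is not restricted by omega."""
    c, psi = 2.0 * np.cos(h * omega), sinc(0.5 * h * omega) ** 2
    x_prev = x0
    x = np.cos(h * omega) * x0 + h * sinc(h * omega) * v0 + 0.5 * h * h * psi * g(x0)
    xs = [x_prev, x]
    for _ in range(steps - 1):
        x_prev, x = x, c * x - x_prev + h * h * psi * g(x)
        xs.append(x)
    return np.array(xs)

# Unforced check: with g = 0 the recurrence reproduces cos(n h omega) exactly,
# here with h*omega = 10, far beyond what a classical method could tolerate.
xs = gautschi(omega=100.0, g=lambda x: 0.0, x0=1.0, v0=0.0, h=0.1, steps=50)
print(xs[-1] - np.cos(500.0))   # ~ 0 up to roundoff
```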
A fourth order Magnus scheme for Helmholtz equation
 J. Comput. Appl. Math
, 2005
"... For wave propagation in a slowly varying waveguide, it is necessary to solve the Helmholtz equation in a domain that is much larger than the typical wavelength. Standard finite difference and finite element methods must resolve the small oscillatory behavior of the wave field and are prohibitively e ..."
Abstract

Cited by 3 (2 self)
For wave propagation in a slowly varying waveguide, it is necessary to solve the Helmholtz equation in a domain that is much larger than the typical wavelength. Standard finite difference and finite element methods must resolve the small oscillatory behavior of the wave field and are prohibitively expensive for practical applications. A popular method is to approximate the waveguide by segments that are uniform in the propagation direction and use separation of variables in each segment. For a slowly varying waveguide, it is possible that the length of such a segment is much larger than the typical wavelength. To reduce memory requirements, it is advantageous to reformulate the boundary value problem of the Helmholtz equation as an initial value problem using a pair of operators. Such an operator-marching scheme can also be solved with the piecewise uniform approximation of the waveguide. This is related to the second-order midpoint exponential method for a system of linear ODEs. In this paper, we develop a fourth-order operator-marching scheme for the Helmholtz equation using a fourth-order Magnus method.
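For a generic linear system Y' = A(t)Y, the standard fourth-order Magnus method is built from two Gauss-Legendre samples of A and a single commutator; a sketch follows (the test equation is an illustrative choice, not the Helmholtz operator-marching setting of the paper).

```python
import numpy as np
from scipy.linalg import expm

def magnus4_step(A, Y, t, h):
    """One step of the standard fourth-order Magnus method for Y' = A(t) Y:
    two Gauss-Legendre samples of A plus a single commutator."""
    c = np.sqrt(3.0) / 6.0
    A1, A2 = A(t + (0.5 - c) * h), A(t + (0.5 + c) * h)
    Omega = 0.5 * h * (A1 + A2) + (np.sqrt(3.0) / 12.0) * h * h * (A2 @ A1 - A1 @ A2)
    return expm(Omega) @ Y

# Illustrative test equation y'' + (1 + t) y = 0 as a 2x2 system.
A = lambda t: np.array([[0.0, 1.0], [-(1.0 + t), 0.0]])

def integrate(n):
    Y, h, t = np.eye(2), 1.0 / n, 0.0
    for _ in range(n):
        Y, t = magnus4_step(A, Y, t, h), t + h
    return Y

err = np.linalg.norm(integrate(40) - integrate(640))
print(err)   # halving h cuts this by roughly 2^4 = 16, consistent with order 4
```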
A Magnus expansion for the equation . . .
, 2000
"... The subject matter of this paper is the representation of the solution of the linear differential equation Y 0 = AY \Gamma Y B, Y (0) = Y0 , in the form Y (t) = e\Omega\Gamma t) Y0 and the representation of the function\Omega as a generalisation of the classical Magnus expansion. An immediate a ..."
Abstract

Cited by 3 (0 self)
The subject matter of this paper is the representation of the solution of the linear differential equation Y' = AY − YB, Y(0) = Y0, in the form Y(t) = e^{Ω(t)} Y0 and the representation of the function Ω as a generalisation of the classical Magnus expansion. An immediate application is a new recursive algorithm for the derivation of the Baker–Campbell–Hausdorff formula and its symmetric generalisation.

1 Introduction. This paper is concerned with the solution of the linear ordinary differential system

  Y' = AY − YB,  t ≥ 0,  Y(0) = Y0,  (1.1)

where both A and B are Lipschitz functions that map [0, ∞) into M_m, the set of m × m matrices, and Y0 ∈ M_m. The equation (1.1) features in numerous applications and the approximation of its solution is of interest. Moreover, solutions of this equation often display interesting geometry. For example, B = A results in the isospectral flow

  Y' = AY − YA,  t ≥ 0,  Y(0) = Y0,  (1.2)

whose invariant ...
A Priori Estimates for the Global Error Committed by RungeKutta Methods for a Nonlinear Oscillator
 LMS J. Comput. Math
, 2001
"... The AlekseevGröbner lemma is combined with the theory of modified equations to obtain an a priori estimate for the global error of numerical integrators. This estimate is correct up to a remainder term of order h^2p where h denotes the step size and p the order of the method. It is applied to a cla ..."
Abstract

Cited by 3 (1 self)
The Alekseev–Gröbner lemma is combined with the theory of modified equations to obtain an a priori estimate for the global error of numerical integrators. This estimate is correct up to a remainder term of order h^(2p), where h denotes the step size and p the order of the method. It is applied to a class of nonautonomous linear oscillatory equations, which includes the Airy equation, thereby improving prior work which only gave the h^p term. However, the result is not very surprising. Next, a single nonlinear ...
On the global error of discretization methods for ordinary differential equations
, 2004
"... Discretization methods for ordinary differential equations are usually not exact; they commit an error at every step of the algorithm. All these errors combine to form the global error, which is the error in the final result. The global error is the subject of this thesis. In the first half of the t ..."
Abstract

Cited by 1 (0 self)
Discretization methods for ordinary differential equations are usually not exact; they commit an error at every step of the algorithm. All these errors combine to form the global error, which is the error in the final result. The global error is the subject of this thesis. In the first half of the thesis, accurate a priori estimates of the global error are derived. Three different approaches are followed: to combine the effects of the errors committed at every step, to expand the global error in an asymptotic series in the step size, and to use the theory of modified equations. The last approach, which is often the most useful one, yields an estimate which is correct up to a term of order h^(2p), where h denotes the step size and p the order of the numerical method. This result is then applied to estimate the global error for the Airy equation (and related oscillators that obey the Liouville–Green approximation) and the Emden–Fowler equation. The latter example has the interesting feature that it is not sufficient to consider only the leading global error term, because subsequent terms of higher order in the step size may grow faster in time. The second half of the thesis concentrates on minimizing the global error by varying the step size. It is argued that the correct objective function is the norm of the global error over the entire integration interval. Specifically, the L^2 norm and the L^∞ norm are studied. In the former case, Pontryagin's Minimum Principle converts the problem to a boundary value problem, which may be solved analytically or numerically. When the L^∞ norm is used, a boundary value problem with a complementarity condition results. Alternatively, the Exterior Penalty Method may be employed to get a boundary value problem without complementarity condition, which can be solved by standard numerical software. The theory is illustrated by calculating the optimal step size for solving the Dahlquist test equation and the Kepler problem.
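The simplest practical way to get such an a posteriori global-error number, though not the modified-equations machinery the thesis develops, is Richardson extrapolation with two step sizes, sketched here for forward Euler on the Dahlquist test equation:

```python
import numpy as np

def euler(f, y0, t1, n):
    """Forward Euler (order p = 1) from t = 0 to t = t1 in n steps."""
    y, h, t = y0, t1 / n, 0.0
    for _ in range(n):
        y, t = y + h * f(t, y), t + h
    return y

def global_error_estimate(f, y0, t1, n, p=1):
    """Richardson-type a posteriori estimate: for a method of order p,
    err(h) ~ (y_h - y_{h/2}) / (1 - 2**(-p)) up to higher-order terms."""
    return (euler(f, y0, t1, n) - euler(f, y0, t1, 2 * n)) / (1.0 - 2.0 ** (-p))

# Dahlquist test equation y' = -y, y(0) = 1, where the exact error is known.
f = lambda t, y: -y
est = global_error_estimate(f, 1.0, 1.0, 100)
exact_err = euler(f, 1.0, 1.0, 100) - np.exp(-1.0)
print(est, exact_err)   # the two agree to leading order in the step size
```

The estimate is correct only up to the next power of h, which is precisely why the thesis pursues sharper h^(2p)-accurate estimates via modified equations.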