Results 1–10 of 43
The Fourier-Series Method for Inverting Transforms of Probability Distributions
, 1991
"... This paper reviews the Fourierseries method for calculating cumulative distribution functions (cdf's) and probability mass functions (pmf's) by numerically inverting characteristic functions, Laplace transforms and generating functions. Some variants of the Fourierseries method are remar ..."
Abstract

Cited by 208 (52 self)
This paper reviews the Fourier-series method for calculating cumulative distribution functions (cdf's) and probability mass functions (pmf's) by numerically inverting characteristic functions, Laplace transforms and generating functions. Some variants of the Fourier-series method are remarkably easy to use, requiring programs of less than fifty lines. The Fourier-series method can be interpreted as numerically integrating a standard inversion integral by means of the trapezoidal rule. The same formula is obtained by using the Fourier series of an associated periodic function constructed by aliasing; this explains the name of the method. This Fourier analysis applies to the inversion problem because the Fourier coefficients are just values of the transform. The mathematical centerpiece of the Fourier-series method is the Poisson summation formula, which identifies the discretization error associated with the trapezoidal rule and thus helps bound it. The greatest difficulty is approximately calculating the infinite series obtained from the inversion integral. Within this framework, lattice cdf's can be calculated from generating functions by finite sums without truncation. For other cdf's, an appropriate truncation of the infinite series can be determined from the transform based on estimates or bounds. For Laplace transforms, the numerical integration can be made to produce a nearly alternating series, so that the convergence can be accelerated by techniques such as Euler summation. Alternatively, the cdf can be perturbed slightly by convolution smoothing or windowing to produce a truncation error bound independent of the original cdf. Although error bounds can be determined, an effective approach is to use two different methods without elaborate error analysis. For this...
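The core of the abstract above, trapezoidal-rule evaluation of the Bromwich inversion integral, can be sketched in a few lines. This is a minimal illustration rather than the paper's tuned algorithm: the function name and the choices of abscissa `a`, step `h`, and truncation `N` are ad hoc, tested on the known pair F(s) = 1/(s+1), f(t) = exp(-t).

```python
import numpy as np

def bromwich_trapezoid(F, t, a=1.0, h=0.05, N=20000):
    """Invert a Laplace transform F at time t by applying the
    trapezoidal rule to the Bromwich integral on the vertical line
    Re(s) = a (the basic Fourier-series method, before acceleration)."""
    k = np.arange(1, N + 1)
    s = a + 1j * h * k
    # the symmetry F(conj(s)) = conj(F(s)) folds the sum onto k >= 1;
    # the k = 0 node enters with trapezoidal half weight
    total = 0.5 * F(a) + np.sum(np.real(F(s) * np.exp(1j * h * k * t)))
    return (h * np.exp(a * t) / np.pi) * total

# known transform pair: F(s) = 1/(s + 1)  <->  f(t) = exp(-t)
approx = bromwich_trapezoid(lambda s: 1.0 / (s + 1.0), t=1.0)
```

The slow 1/|s| decay of the transform is what makes plain truncation expensive here; the Euler-summation variants discussed in the paper attack exactly that cost.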
A chronology of interpolation: From ancient astronomy to modern signal and image processing
 Proceedings of the IEEE
, 2002
"... This paper presents a chronological overview of the developments in interpolation theory, from the earliest times to the present date. It brings out the connections between the results obtained in different ages, thereby putting the techniques currently used in signal and image processing into histo ..."
Abstract

Cited by 102 (0 self)
This paper presents a chronological overview of the developments in interpolation theory, from the earliest times to the present date. It brings out the connections between the results obtained in different ages, thereby putting the techniques currently used in signal and image processing into historical perspective. A summary of the insights and recommendations that follow from relatively recent theoretical as well as experimental studies concludes the presentation. Keywords—Approximation, convolution-based interpolation, history, image processing, polynomial interpolation, signal processing, splines. “It is an extremely useful thing to have knowledge of the true origins of memorable discoveries, especially those that have been found not by accident but by dint of meditation. It is not so much that thereby history may attribute to each man his own discoveries and others should be encouraged to earn like commendation, as that the art of making discoveries should be extended by considering noteworthy examples of it.”
Summary of Sinc Numerical Methods
"... Sinc approximation methods excel for problems whose solutions may have singularities, or infinite domains, or boundary layers. This article summarizes results obtained to date, on Sinc numerical methods of computation. Sinc methods provide procedures for function approximation over bounded or unbou ..."
Abstract

Cited by 24 (0 self)
Sinc approximation methods excel for problems whose solutions may have singularities, infinite domains, or boundary layers. This article summarizes results obtained to date on Sinc numerical methods of computation. Sinc methods provide procedures for function approximation over bounded or unbounded regions, encompassing interpolation, approximation of derivatives, approximate definite and indefinite integration, the solution of initial value problems for ordinary differential equations, approximation and inversion of Fourier and Laplace transforms, approximation of Hilbert transforms, approximation of indefinite convolutions, the approximate solution of partial differential and integral equations, methods for constructing conformal maps, and methods for analytic continuation. Indeed, Sinc methods are ubiquitous for approximating every operation of calculus.
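The most basic Sinc operation, cardinal (sinc) interpolation on a uniform grid, can be sketched as follows; this assumes a rapidly decaying analytic function, and the function name and parameter values are illustrative, not from the article.

```python
import numpy as np

def sinc_interpolate(samples, h, x):
    """Cardinal interpolation from samples f(kh), k = -M..M:
    f(x) ~ sum_k f(kh) * sinc((x - kh)/h), where np.sinc(y) = sin(pi y)/(pi y)."""
    M = (len(samples) - 1) // 2
    k = np.arange(-M, M + 1)
    return np.array([np.dot(samples, np.sinc((xi - k * h) / h)) for xi in x])

# analytic, rapidly decaying test function: the error shrinks
# exponentially as the grid is refined and widened
h, M = 0.5, 30
k = np.arange(-M, M + 1)
samples = np.exp(-(k * h) ** 2)
x = np.array([0.25, 1.1])
approx = sinc_interpolate(samples, h, x)
err = np.max(np.abs(approx - np.exp(-x ** 2)))
```

For functions with singularities or decay only on a subinterval, the Sinc literature composes this cardinal formula with a conformal change of variable; the sketch above shows only the unbounded-interval prototype.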
Discrete adjoint approximations with shocks
 CONFERENCE ON HYPERBOLIC PROBLEMS
, 2002
"... In recent years there has been considerable research into the use of adjoint flow equations for design optimisation (e.g. [Jam95]) and error analysis (e.g. [PG00, BR01]). In almost every case, the adjoint equations have been formulated under the assumption that the original nonlinear flow solution i ..."
Abstract

Cited by 14 (6 self)
In recent years there has been considerable research into the use of adjoint flow equations for design optimisation (e.g. [Jam95]) and error analysis (e.g. [PG00, BR01]). In almost every case, the adjoint equations have been formulated under the assumption that the original nonlinear flow solution is smooth. Since most applications have been for incompressible or subsonic flow, this assumption has been valid; however, there is now increasing use of such techniques in transonic design applications for which there are shocks. It is therefore of interest to investigate the formulation and discretisation of adjoint equations in the presence of shocks.
The reason that shocks present a problem is that the adjoint equations are defined to be adjoint to the equations obtained by linearising the original nonlinear flow equations. This raises the whole issue of linearised perturbations to the shock. The validity of linearised shock capturing for harmonically oscillating shocks in flutter analysis was investigated by Lindquist and Giles [LG94], who showed that shock capturing produces the correct prediction of integral quantities such as unsteady lift and moment provided the shock is smeared over a number of grid points. As a result, linearised shock capturing is now the standard method of turbomachinery aeroelastic analysis [HCL94], benefiting from the computational advantages of the linearised approach without the many drawbacks of shock fitting.
There has been very little prior research into adjoint equations for flows with shocks. Giles and Pierce [GP01] have shown that the analytic derivation of the adjoint equations for the steady quasi-one-dimensional Euler equations requires the specification of an internal adjoint boundary condition at the shock. However, the numerical evidence [GP98] is that the correct adjoint solution is obtained using either the "fully discrete" approach (in which one linearises the discrete equations and uses the transpose) or the "continuous" approach (in which one discretises the analytic adjoint equations). It is not clear, though, that this will remain true in two dimensions, for which there is a similar adjoint boundary condition along a shock.
In this paper, we consider unsteady one-dimensional hyperbolic equations with a convex scalar flux, and in particular obtain numerical results for Burgers equation. Tadmor [Tad91] developed a Lip′ topology for the formulation of adjoint equations for this problem, with application to linear postprocessing functionals. Building on this and the work of Bouchut and James [BJ98], Ulbrich has very recently introduced the concept of shift-differentiability [Ulb02a, Ulb02b] to handle nonlinear functionals of the type considered in this paper. This supplies the analytic adjoint solution against which the numerical solutions in this paper will be compared. An alternative derivation of this analytic solution is presented in an expanded version of this paper [Gil02].
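The "fully discrete" adjoint approach mentioned above (linearise the discrete scheme, then transpose) can be illustrated on a smooth solution of Burgers equation. This sketch uses a first-order upwind scheme, a finite-difference Jacobian, and shock-free data, so it deliberately avoids the shock issues that are the paper's real subject; all names and parameter values are illustrative.

```python
import numpy as np

def burgers_step(u, dt, dx):
    # conservative first-order upwind step for u_t + (u^2/2)_x = 0,
    # periodic boundary; assumes u > 0 so the upwind direction is fixed
    flux = 0.5 * u ** 2
    return u - (dt / dx) * (flux - np.roll(flux, 1))

def step_jacobian(u, dt, dx, eps=1e-7):
    # finite-difference Jacobian of one step: the "fully discrete" linearization
    n = len(u)
    J = np.zeros((n, n))
    base = burgers_step(u, dt, dx)
    for j in range(n):
        up = u.copy()
        up[j] += eps
        J[:, j] = (burgers_step(up, dt, dx) - base) / eps
    return J

n, dx, dt, steps = 50, 0.02, 0.005, 20
u0 = 1.0 + 0.3 * np.sin(2 * np.pi * dx * np.arange(n))   # smooth, shock-free

us = [u0]                                  # forward sweep, states stored
for _ in range(steps):
    us.append(burgers_step(us[-1], dt, dx))

# adjoint sweep: transposed Jacobians applied in reverse order give the
# gradient of J(u0) = dx * sum(u_final**2) with respect to u0
lam = 2.0 * dx * us[-1]
for m in range(steps - 1, -1, -1):
    lam = step_jacobian(us[m], dt, dx).T @ lam

# sanity check one gradient component against a finite difference of J
e = 1e-6
up = us[0].copy()
up[7] += e
for _ in range(steps):
    up = burgers_step(up, dt, dx)
fd = (dx * np.sum(up ** 2) - dx * np.sum(us[-1] ** 2)) / e
```

With a shock present, this transpose-of-the-linearization recipe is exactly what becomes questionable, which is the point of the paper.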
Fast Runge-Kutta approximation of inhomogeneous parabolic equations
 NUMER. MATH
, 2005
"... The result after N steps of an implicit RungeKutta time discretization of an inhomogeneous linear parabolic differential equation is computed, up to accuracy ε, by solving only O log N log 1 ε linear systems of equations. We derive, analyse, and numerically illustrate this fast algorithm. ..."
Abstract

Cited by 10 (5 self)
The result after N steps of an implicit Runge-Kutta time discretization of an inhomogeneous linear parabolic differential equation is computed, up to accuracy ε, by solving only O(log N · log(1/ε)) linear systems of equations. We derive, analyse, and numerically illustrate this fast algorithm.
COMPUTING A^α, log(A) AND RELATED MATRIX FUNCTIONS BY CONTOUR INTEGRALS
, 2007
"... New methods are proposed for the numerical evaluation of f(A) or f(A)b, where f(A) is a function such as A 1/2 or log(A) with singularities in (−∞, 0] and A is a matrix with eigenvalues on or near (0, ∞). The methods are based on combining contour integrals evaluated by the periodic trapezoid rule ..."
Abstract

Cited by 10 (0 self)
New methods are proposed for the numerical evaluation of f(A) or f(A)b, where f(A) is a function such as A^{1/2} or log(A) with singularities in (−∞, 0] and A is a matrix with eigenvalues on or near (0, ∞). The methods are based on combining contour integrals evaluated by the periodic trapezoid rule with conformal maps involving Jacobi elliptic functions. The convergence is geometric, so that the computation of f(A)b is typically reduced to one or two dozen linear system solves, which can be carried out in parallel.
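A stripped-down version of the contour idea, assuming a real symmetric A and using a plain circular contour with the trapezoidal rule; the paper's elliptic-function conformal maps, which converge much faster for ill-conditioned A, are omitted, and the function name and contour placement are illustrative.

```python
import numpy as np

def funm_contour(A, f, n_quad=64):
    """f(A) via the Cauchy integral (1/(2 pi i)) * contour integral of
    f(z) (zI - A)^(-1) dz, trapezoidal rule on a circle that encloses
    the spectrum of A while staying clear of the singularities of f
    on (-inf, 0]."""
    lam = np.linalg.eigvalsh(A)            # used only to place the contour;
    c = 0.5 * (lam.min() + lam.max())      # cheap spectral bounds would do
    r = 0.5 * lam.max()                    # leftmost point is lam.min()/2 > 0
    n = A.shape[0]
    F = np.zeros((n, n), dtype=complex)
    for q in range(n_quad):
        z = c + r * np.exp(2j * np.pi * q / n_quad)
        # (z - c) carries the r*exp(i*theta) factor from dz
        F += (z - c) * f(z) * np.linalg.inv(z * np.eye(n) - A)
    return F.real / n_quad                 # A real symmetric => f(A) real

A = np.array([[2.0, 1.0], [1.0, 3.0]])     # SPD, spectrum well inside (0, inf)
S = funm_contour(A, np.sqrt)               # principal square root of A
```

Each quadrature node costs one resolvent solve, and the nodes are independent, which is the parallelism the abstract refers to.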
Sum-accelerated pseudospectral methods: the Euler-accelerated sinc algorithm
 Appl. Numer. Math
, 1991
"... Pseudospectral discretizations of differential equations are much more accurate than finite differences for the same number of grid points N. The reason is that derivatives are approximated by a weighted sum of all N values of u(x,), rather than just three as in a secondorder finite difference. The ..."
Abstract

Cited by 7 (2 self)
Pseudospectral discretizations of differential equations are much more accurate than finite differences for the same number of grid points N. The reason is that derivatives are approximated by a weighted sum of all N values of u(x_i), rather than just three as in a second-order finite difference. The price is that the N × N pseudospectral matrix is dense, with N nonzero elements (rather than three) in each row. Truncating the pseudospectral sums to create a sparse discretization fails because the derivative series are alternating and very slowly convergent. However, these series are perfect candidates for sum-acceleration methods. We show that Euler summation can be applied to a standard pseudospectral scheme to produce an algorithm which is both exponentially accurate (like any other spectral method) and yet generates sparse matrices (like a finite difference method). For illustration, we use the sinc basis with an evenly spaced grid on x ∈ (−∞, ∞). However, the same techniques apply equally well to Chebyshev and Fourier polynomials.
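Euler summation, the acceleration device this abstract relies on, can be sketched via the usual binomially weighted average of partial sums; the function name and the choice n = m = 15 are illustrative, tested on the slowly convergent alternating series Σ (-1)^k/(k+1) = ln 2.

```python
import numpy as np
from math import comb, log

def euler_sum(terms, n=15, m=15):
    """Binomially weighted average of the partial sums s_n .. s_{n+m}
    of an alternating series (Euler summation), turning slow alternating
    convergence into rapid convergence."""
    s = np.cumsum(terms[: n + m + 1])      # partial sums s_0 .. s_{n+m}
    return sum(comb(m, i) * s[n + i] for i in range(m + 1)) / 2 ** m

terms = [(-1) ** i / (i + 1) for i in range(40)]   # series for log(2)
approx = euler_sum(terms)
```

Only about thirty terms enter the weighted average, yet the result is far more accurate than the raw thirty-term partial sum; applied to the alternating entries of a sinc differentiation row, the same device is what lets the truncated rows stay short.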
A SYSTEM OF ODES FOR A PERTURBATION OF A MINIMAL MASS SOLITON
"... Abstract. We study soliton solutions to the nonlinear Schrödinger equation (NLS) with a saturated nonlinearity. NLS with such a nonlinearity is known to possess a minimal mass soliton. We consider a small perturbation of a minimal mass soliton and identify a system of ODEs extending the work of [20] ..."
Abstract

Cited by 6 (4 self)
Abstract. We study soliton solutions to the nonlinear Schrödinger equation (NLS) with a saturated nonlinearity. NLS with such a nonlinearity is known to possess a minimal mass soliton. We consider a small perturbation of a minimal mass soliton and identify a system of ODEs extending the work of [20], which models the behavior of the perturbation for short times. We then provide numerical evidence that under this system of ODEs there are two possible dynamical outcomes, in accord with the conclusions of [43]. Generically, initial data which supports a soliton structure appears to oscillate, with oscillations centered on a stable soliton. For initial data which is expected to disperse, the finite dimensional dynamics initially follow the unstable portion of the soliton curve.
Adaptive, fast and oblivious convolution in evolution equations with memory
 SIAM J. Sci. Comput
, 2008
"... Abstract. To approximate convolutions which occur in evolution equations with memory terms, a variablestepsize algorithm is presented for which advancing N steps requires only O(N log N) operations and O(log N) active memory, in place of O(N2) operations and O(N) memory for a direct implementation ..."
Abstract

Cited by 5 (1 self)
Abstract. To approximate convolutions which occur in evolution equations with memory terms, a variable-step-size algorithm is presented for which advancing N steps requires only O(N log N) operations and O(log N) active memory, in place of O(N^2) operations and O(N) memory for a direct implementation. A basic feature of the fast algorithm is the reduction, via contour integral representations, to differential equations which are solved numerically with adaptive step sizes. Rather than the kernel itself, its Laplace transform is used in the algorithm. The algorithm is illustrated on three examples: a blow-up example originating from a Schrödinger equation with concentrated nonlinearity, chemical reactions with inhibited diffusion, and viscoelasticity with a fractional-order constitutive law.
Solitary wave benchmarks in magma dynamics
 J. Sci. Comput
"... We present a model problem for benchmarking codes that investigate magma migration in the Earth’s interior. This system retains the essential features of more sophisticated models, yet has the advantage of possessing solitary wave solutions. The existence of such exact solutions to the nonlinear pr ..."
Abstract

Cited by 5 (1 self)
We present a model problem for benchmarking codes that investigate magma migration in the Earth’s interior. This system retains the essential features of more sophisticated models, yet has the advantage of possessing solitary wave solutions. The existence of such exact solutions to the nonlinear problem makes it an excellent benchmark for combinations of solver algorithms. In this work, we explore a novel algorithm for computing high-quality approximations of the solitary waves in 1, 2, and 3 dimensions and use them to benchmark a semi-Lagrangian Crank–Nicolson scheme for a finite element discretization of the time-dependent problem.