Results 1–10 of 22
The Fourier-Series Method For Inverting Transforms Of Probability Distributions
, 1991
"... This paper reviews the Fourierseries method for calculating cumulative distribution functions (cdf's) and probability mass functions (pmf's) by numerically inverting characteristic functions, Laplace transforms and generating functions. Some variants of the Fourierseries method are remarkably easy ..."
Abstract

Cited by 149 (51 self)
 Add to MetaCart
This paper reviews the Fourier-series method for calculating cumulative distribution functions (cdf's) and probability mass functions (pmf's) by numerically inverting characteristic functions, Laplace transforms and generating functions. Some variants of the Fourier-series method are remarkably easy to use, requiring programs of less than fifty lines. The Fourier-series method can be interpreted as numerically integrating a standard inversion integral by means of the trapezoidal rule. The same formula is obtained by using the Fourier series of an associated periodic function constructed by aliasing; this explains the name of the method. This Fourier analysis applies to the inversion problem because the Fourier coefficients are just values of the transform. The mathematical centerpiece of the Fourier-series method is the Poisson summation formula, which identifies the discretization error associated with the trapezoidal rule and thus helps bound it. The greatest difficulty is approximately calculating the infinite series obtained from the inversion integral. Within this framework, lattice cdf's can be calculated from generating functions by finite sums without truncation. For other cdf's, an appropriate truncation of the infinite series can be determined from the transform based on estimates or bounds. For Laplace transforms, the numerical integration can be made to produce a nearly alternating series, so that the convergence can be accelerated by techniques such as Euler summation. Alternatively, the cdf can be perturbed slightly by convolution smoothing or windowing to produce a truncation error bound independent of the original cdf. Although error bounds can be determined, an effective approach is to use two different methods without elaborate error analysis. For this...
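None of the paper's specific variants are reproduced here; as a rough illustration of the idea (a trapezoidal-type rule applied to a standard inversion integral), the following sketch inverts the characteristic function of the standard normal distribution via the Gil-Pelaez formula on a shifted uniform grid. All parameter choices (h, N) are ad hoc for this example.

```python
import numpy as np

def cdf_from_cf(phi, x, h=0.05, N=2000):
    # Gil-Pelaez inversion: F(x) = 1/2 - (1/pi) * ∫_0^∞ Im[e^{-itx} φ(t)] / t dt,
    # discretized on the shifted grid t_k = (k - 1/2) h; the half-step shift
    # avoids evaluating the integrand at t = 0.
    t = (np.arange(1, N + 1) - 0.5) * h
    integrand = np.imag(np.exp(-1j * t * x) * phi(t)) / t
    return 0.5 - (h / np.pi) * np.sum(integrand)

# Standard normal: φ(t) = exp(-t^2/2); F(0) = 0.5 and F(1.96) ≈ 0.975.
phi_normal = lambda t: np.exp(-t**2 / 2)
print(cdf_from_cf(phi_normal, 0.0))    # 0.5
print(cdf_from_cf(phi_normal, 1.96))   # ≈ 0.975
```

As the abstract notes, the discretization error of such a rule is an aliasing error identified by the Poisson summation formula, which is why it is exponentially small for rapidly decaying characteristic functions like this one.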
A chronology of interpolation: From ancient astronomy to modern signal and image processing
 Proceedings of the IEEE
, 2002
"... This paper presents a chronological overview of the developments in interpolation theory, from the earliest times to the present date. It brings out the connections between the results obtained in different ages, thereby putting the techniques currently used in signal and image processing into histo ..."
Abstract

Cited by 62 (0 self)
 Add to MetaCart
This paper presents a chronological overview of the developments in interpolation theory, from the earliest times to the present date. It brings out the connections between the results obtained in different ages, thereby putting the techniques currently used in signal and image processing into historical perspective. A summary of the insights and recommendations that follow from relatively recent theoretical as well as experimental studies concludes the presentation. Keywords—Approximation, convolution-based interpolation, history, image processing, polynomial interpolation, signal processing, splines. “It is an extremely useful thing to have knowledge of the true origins of memorable discoveries, especially those that have been found not by accident but by dint of meditation. It is not so much that thereby history may attribute to each man his own discoveries and others should be encouraged to earn like commendation, as that the art of making discoveries should be extended by considering noteworthy examples of it.”
COMPUTING A^α, log(A) AND RELATED MATRIX FUNCTIONS BY CONTOUR INTEGRALS
, 2007
"... Abstract. New methods are proposed for the numerical evaluation of f(A) or f(A)b, where f(A) is a function such as A 1/2 or log(A) with singularities in (−∞, 0] and A is a matrix with eigenvalues on or near (0, ∞). The methods are based on combining contour integrals evaluated by the periodic trapez ..."
Abstract

Cited by 11 (1 self)
 Add to MetaCart
Abstract. New methods are proposed for the numerical evaluation of f(A) or f(A)b, where f(A) is a function such as A^{1/2} or log(A) with singularities in (−∞, 0] and A is a matrix with eigenvalues on or near (0, ∞). The methods are based on combining contour integrals evaluated by the periodic trapezoid rule with conformal maps involving Jacobi elliptic functions. The convergence is geometric, so that the computation of f(A)b is typically reduced to one or two dozen linear system solves, which can be carried out in parallel. Key words. Cauchy integral, conformal map, contour integral, matrix function, quadrature, rational approximation, trapezoid rule AMS subject classifications. 65F30, 65D30
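A minimal sketch of the underlying mechanism (not the paper's elliptic-function maps): the Cauchy integral f(A) = (1/2πi) ∮ f(z)(zI − A)^{-1} dz evaluated by the periodic trapezoid rule on a plain circular contour that encloses the spectrum but avoids the singularities of f. The matrix A and contour parameters below are arbitrary choices for illustration.

```python
import numpy as np

def matfun_contour(f, A, center, radius, N=64):
    # Periodic trapezoid rule on the circle z(θ) = center + radius * e^{iθ}:
    # with dz = i * radius * e^{iθ} dθ, each node contributes
    # f(z) * (z - center) * (zI - A)^{-1} / N.  Convergence is geometric in N
    # for f analytic in a neighborhood of the contour and its interior.
    n = A.shape[0]
    F = np.zeros((n, n), dtype=complex)
    for k in range(N):
        theta = 2 * np.pi * (k + 0.5) / N
        z = center + radius * np.exp(1j * theta)
        F += f(z) * (z - center) * np.linalg.inv(z * np.eye(n) - A)
    return F / N

A = np.array([[2.0, 1.0], [0.0, 3.0]])      # eigenvalues 2 and 3
# Circle |z - 2.5| = 2 encloses {2, 3} but stays clear of the branch cut
# of sqrt on (-inf, 0], since its leftmost point is z = 0.5.
S = matfun_contour(np.sqrt, A, center=2.5, radius=2.0)
print(S.real @ S.real)   # ≈ A, since S ≈ A^{1/2}
```

With a circle this close to the branch point at z = 0 the geometric rate is modest; the paper's conformally mapped contours exist precisely to make that rate fast, which is how the count drops to "one or two dozen" solves.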
Discrete adjoint approximations with shocks
 CONFERENCE ON HYPERBOLIC PROBLEMS
, 2002
"... In recent years there has been considerable research into the use of adjoint flow equations for design optimisation (e.g. [Jam95]) and error analysis (e.g. [PG00, BR01]). In almost every case, the adjoint equations have been formulated under the assumption that the original nonlinear flow solution i ..."
Abstract

Cited by 9 (3 self)
 Add to MetaCart
In recent years there has been considerable research into the use of adjoint flow equations for design optimisation (e.g. [Jam95]) and error analysis (e.g. [PG00, BR01]). In almost every case, the adjoint equations have been formulated under the assumption that the original nonlinear flow solution is smooth. Since most applications have been for incompressible or subsonic flow, this assumption has been valid; however, there is now increasing use of such techniques in transonic design applications in which shocks occur. It is therefore of interest to investigate the formulation and discretisation of adjoint equations in the presence of shocks.
The reason that shocks present a problem is that the adjoint equations are defined to be adjoint to the equations obtained by linearising the original nonlinear flow equations. Therefore, this raises the whole issue of linearised perturbations to the shock. The validity of linearised shock capturing for harmonically oscillating shocks in flutter analysis was investigated by Lindquist and Giles [LG94] who showed that the shock capturing produces the correct prediction of integral quantities such as unsteady lift and moment provided the shock is smeared over a number of grid points. As a result, linearised shock capturing is now the standard method of turbomachinery aeroelastic analysis [HCL94], benefitting from the computational advantages of the linearised approach, without the many drawbacks of shock fitting.
There has been very little prior research into adjoint equations for flows with shocks. Giles and Pierce [GP01] have shown that the analytic derivation of the adjoint equations for the steady quasi-one-dimensional Euler equations requires the specification of an internal adjoint boundary condition at the shock. However, the numerical evidence [GP98] is that the correct adjoint solution is obtained using either the "fully discrete" approach (in which one linearises the discrete equations and uses the transpose) or the "continuous" approach (in which one discretises the analytic adjoint equations). It is not clear, though, that this will remain true in two dimensions, for which there is a similar adjoint boundary condition along a shock.
In this paper, we consider unsteady one-dimensional hyperbolic equations with a convex scalar flux, and in particular obtain numerical results for Burgers' equation. Tadmor [Tad91] developed a Lip′ topology for the formulation of adjoint equations for this problem, with application to linear postprocessing functionals. Building on this and the work of Bouchut and James [BJ98], Ulbrich has very recently introduced the concept of shift-differentiability [Ulb02a, Ulb02b] to handle nonlinear functionals of the type considered in this paper. This supplies the analytic adjoint solution against which the numerical solutions in this paper will be compared. An alternative derivation of this analytic solution is presented in an expanded version of this paper [Gil02].
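The paper's own discretizations are not given in this abstract; as generic background, here is a conservative shock-capturing sketch for the inviscid Burgers equation u_t + (u²/2)_x = 0 with a local Lax-Friedrichs flux (an illustrative scheme choice, not the authors'). Riemann data u = 1 on the left and u = 0 on the right produces a shock moving at speed 1/2 by the Rankine-Hugoniot condition, captured smeared over a few cells, which is exactly the situation the linearised and adjoint analyses above must handle.

```python
import numpy as np

nx, L, T = 400, 4.0, 1.0
dx = L / nx
x = np.linspace(-L / 2 + dx / 2, L / 2 - dx / 2, nx)   # cell centers
u = np.where(x < 0.0, 1.0, 0.0)                        # shock initially at x = 0

t = 0.0
while t < T:
    dt = min(0.4 * dx / max(np.max(np.abs(u)), 1e-12), T - t)  # CFL condition
    up = np.roll(u, -1)                                 # right neighbour
    a = np.maximum(np.abs(u), np.abs(up))               # local wave speed
    # Local Lax-Friedrichs numerical flux at each right interface
    flux = 0.5 * (u**2 / 2 + up**2 / 2) - 0.5 * a * (up - u)
    u = u - dt / dx * (flux - np.roll(flux, 1))
    u[0], u[-1] = 1.0, 0.0                              # fixed far-field states
    t += dt

# The captured shock (where u crosses 1/2) should sit near x = T/2 = 0.5.
shock_pos = x[np.argmin(np.abs(u - 0.5))]
print(shock_pos)   # ≈ 0.5
```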
Fast Runge-Kutta approximation of inhomogeneous parabolic equations
 NUMER. MATH
, 2005
"... The result after N steps of an implicit RungeKutta time discretization of an inhomogeneous linear parabolic differential equation is computed, up to accuracy ε, by solving only O log N log 1 ε linear systems of equations. We derive, analyse, and numerically illustrate this fast algorithm. ..."
Abstract

Cited by 6 (5 self)
 Add to MetaCart
The result after N steps of an implicit Runge-Kutta time discretization of an inhomogeneous linear parabolic differential equation is computed, up to accuracy ε, by solving only O(log N log(1/ε)) linear systems of equations. We derive, analyse, and numerically illustrate this fast algorithm.
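For context, the baseline the paper accelerates looks like this: a one-stage A-stable Runge-Kutta method (implicit Euler, the simplest case) for u' = Au + g(t) costs one linear solve per step, i.e. N solves for N steps, versus the O(log N log(1/ε)) solves claimed above. The matrix A below is a standard 1D finite-difference Laplacian chosen purely for illustration, not the paper's example.

```python
import numpy as np

n, N, T = 50, 100, 1.0          # spatial points, time steps, final time
dt = T / N
h = 1.0 / (n + 1)
# Tridiagonal finite-difference Laplacian on (0,1) with Dirichlet conditions
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

xs = np.linspace(h, 1 - h, n)
I = np.eye(n)
u = np.zeros(n)
for k in range(1, N + 1):
    g = np.sin(np.pi * xs) * np.exp(-k * dt)       # inhomogeneous source term
    u = np.linalg.solve(I - dt * A, u + dt * g)    # one linear solve per step

print(u.max())   # small positive heat-equation response to the decaying source
```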
Summary of Sinc Numerical Methods
"... Sinc approximation methods excel for problems whose solutions may have singularities, or infinite domains, or boundary layers. This article summarizes results obtained to date, on Sinc numerical methods of computation. Sinc methods provide procedures for function approximation over bounded or unbou ..."
Abstract

Cited by 5 (0 self)
 Add to MetaCart
Sinc approximation methods excel for problems whose solutions may have singularities, infinite domains, or boundary layers. This article summarizes results obtained to date on Sinc numerical methods of computation. Sinc methods provide procedures for function approximation over bounded or unbounded regions, encompassing interpolation, approximation of derivatives, approximate definite and indefinite integration, the solution of initial value ordinary differential equation problems, approximation and inversion of Fourier and Laplace transforms, approximation of Hilbert transforms, approximation of indefinite convolutions, the approximate solution of partial differential equations and of integral equations, methods for constructing conformal maps, and methods for analytic continuation. Indeed, Sinc methods are ubiquitous for approximating every operation of calculus.
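The simplest member of this family is sinc (trapezoidal) quadrature on the real line: for functions analytic in a strip around the real axis with suitable decay, the error of h·Σ f(kh) decreases exponentially as h → 0. A minimal sketch, illustrated on ∫ exp(−x²) dx = √π (parameters h and M are ad hoc choices):

```python
import numpy as np

def sinc_quad(f, h, M):
    # Truncated trapezoidal sum h * sum_{k=-M}^{M} f(kh) approximating
    # the integral of f over the whole real line.
    k = np.arange(-M, M + 1)
    return h * np.sum(f(k * h))

approx = sinc_quad(lambda x: np.exp(-x**2), h=0.5, M=20)
print(approx - np.sqrt(np.pi))   # error at machine-precision level
```

The exponential accuracy follows from the Poisson summation formula: the quadrature error consists only of aliased Fourier transform values, which decay exponentially for strip-analytic integrands.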
Adaptive, fast and oblivious convolution in evolution equations with memory
, 2006
"... Abstract. To approximate convolutions which occur in evolution equations with memory terms, a variablestepsize algorithm is presented for which advancing N steps requires only O(N log N) operations and O(log N) active memory, in place of O(N 2) operations and O(N) memory for a direct implementation ..."
Abstract

Cited by 3 (1 self)
 Add to MetaCart
Abstract. To approximate convolutions which occur in evolution equations with memory terms, a variable-step-size algorithm is presented for which advancing N steps requires only O(N log N) operations and O(log N) active memory, in place of O(N^2) operations and O(N) memory for a direct implementation. A basic feature of the fast algorithm is the reduction, via contour integral representations, to differential equations which are solved numerically with adaptive step sizes. Rather than the kernel itself, its Laplace transform is used in the algorithm. The algorithm is illustrated on three examples: a blow-up example originating from a Schrödinger equation with concentrated nonlinearity, chemical reactions with inhibited diffusion, and viscoelasticity with a fractional-order constitutive law.
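To see where the O(N^2) cost of the direct implementation comes from, here is the naive evaluation of a memory term (k * u)(t_n) = ∫_0^{t_n} k(t_n − s) u(s) ds by a rectangle rule: every step revisits the entire stored history. The exponential kernel and constant history below are toy choices for illustration, not one of the paper's examples.

```python
import numpy as np

def direct_memory_term(kernel, u_hist, dt):
    # O(n) work for the n-th step alone; summed over N steps this is O(N^2),
    # and the full history u_hist (O(N) storage) must be kept in memory.
    n = len(u_hist)
    s = np.arange(n) * dt
    return dt * np.sum(kernel((n - 1) * dt - s) * u_hist)

# k(t) = exp(-t), u ≡ 1: the exact value is ∫_0^T e^{-(T-s)} ds = 1 - e^{-T}.
dt, N = 0.01, 500
u_hist = np.ones(N)
val = direct_memory_term(lambda t: np.exp(-t), u_hist, dt)
print(val)   # ≈ 1 - exp(-T) up to O(dt), with T = (N-1)*dt
```

The fast algorithm avoids both costs by representing the kernel through its Laplace transform on contours and advancing a small set of auxiliary ODEs instead of the full history.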
The Sinc-Galerkin Schwarz Alternating Method for Poisson's Equation
 in Computation and Control IV
, 1995
"... this paper is to develop the SincGalerkin Schwarz alternating method for Poisson's equation on a rectangle. N. LYBECK AND K. BOWERS ..."
Abstract

Cited by 2 (2 self)
 Add to MetaCart
This paper develops the Sinc-Galerkin Schwarz alternating method for Poisson's equation on a rectangle. (N. Lybeck and K. Bowers)
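The Schwarz alternating idea itself can be sketched in one dimension: solve the problem alternately on two overlapping subdomains, each time taking boundary data from the other subdomain's latest solution. The sketch below uses plain second-order finite differences on −u'' = f, u(0) = u(1) = 0, purely for illustration; the paper instead couples the Schwarz iteration with Sinc-Galerkin subdomain solves on a rectangle.

```python
import numpy as np

def solve_poisson(f_vals, left, right, h):
    # Tridiagonal Dirichlet solve of -u'' = f on one subdomain.
    m = len(f_vals)
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    rhs = f_vals.copy()
    rhs[0] += left / h**2
    rhs[-1] += right / h**2
    return np.linalg.solve(A, rhs)

n = 99
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.pi**2 * np.sin(np.pi * x)   # exact solution: u(x) = sin(pi x)
u = np.zeros(n)
a, b = 39, 59                      # overlapping subdomains (overlap ≈ 0.2 wide)
for _ in range(20):                # alternate until the overlap region agrees
    u[:b] = solve_poisson(f[:b], 0.0, u[b], h)        # left subdomain
    u[a + 1:] = solve_poisson(f[a + 1:], u[a], 0.0, h)  # right subdomain

print(np.max(np.abs(u - np.sin(np.pi * x))))   # only discretization error remains
```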
A SYSTEM OF ODES FOR A PERTURBATION OF A MINIMAL MASS SOLITON
"... Abstract. We study soliton solutions to the nonlinear Schrödinger equation (NLS) with a saturated nonlinearity. NLS with such a nonlinearity is known to possess a minimal mass soliton. We consider a small perturbation of a minimal mass soliton and identify a system of ODEs extending the work of [20] ..."
Abstract

Cited by 2 (2 self)
 Add to MetaCart
Abstract. We study soliton solutions to the nonlinear Schrödinger equation (NLS) with a saturated nonlinearity. NLS with such a nonlinearity is known to possess a minimal mass soliton. We consider a small perturbation of a minimal mass soliton and identify a system of ODEs extending the work of [20], which models the behavior of the perturbation for short times. We then provide numerical evidence that under this system of ODEs there are two possible dynamical outcomes, in accord with the conclusions of [43]. Generically, initial data which supports a soliton structure appears to oscillate, with oscillations centered on a stable soliton. For initial data which is expected to disperse, the finite-dimensional dynamics initially follow the unstable portion of the soliton curve.
Sinc Methods for Domain Decomposition
Appl. Math. Comput.
"... . Sinc basis functions form a desirable basis to use for solving singular problems via domain decomposition. This is because both the SincGalerkin and sinccollocationmethods converge exponentially, even in the presence of boundary singularities. This work deals with sinc methods for secondorder o ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
Sinc basis functions form a desirable basis for solving singular problems via domain decomposition, because both the Sinc-Galerkin and sinc-collocation methods converge exponentially, even in the presence of boundary singularities. This work deals with sinc methods for second-order ordinary differential equations with homogeneous Dirichlet boundary conditions. Both sinc-collocation and Sinc-Galerkin methods are presented. The two traditional methods of domain decomposition, overlapping and patching, are described. Numerical results are presented for both methods that exhibit the nearly identical errors achieved whether one uses the sinc-collocation or Sinc-Galerkin method. Key words. domain decomposition, overlapping, patching, sinc-collocation, Sinc-Galerkin AMS subject classifications. 65L10, 65L50 1. Introduction. Sinc methods for differential equations were originally introduced in Stenger's paper [15]. In the interest of limiting the computational effort involve...