ON THE NUMERICAL EVALUATION OF FREDHOLM DETERMINANTS
Cited by 43 (6 self)
Abstract. Some significant quantities in mathematics and physics are most naturally expressed as the Fredholm determinant of an integral operator, most notably many of the distribution functions in random matrix theory. Though their numerical values are of interest, there is no systematic numerical treatment of Fredholm determinants to be found in the literature. Instead, the few numerical evaluations that are available rely on eigenfunction expansions of the operator, if expressible in terms of special functions, or on alternative, numerically more straightforwardly accessible analytic expressions, e.g., in terms of Painlevé transcendents, that have masterfully been derived in some cases. In this paper we close the gap in the literature by studying projection methods and, above all, a simple, easily implementable, general method for the numerical evaluation of Fredholm determinants that is derived from the classical Nyström method for the solution of Fredholm equations of the second kind. Using Gauss–Legendre or Clenshaw–Curtis as the underlying quadrature rule, we prove that the approximation error essentially behaves like the quadrature error for the sections of the kernel. In particular, we get exponential convergence for analytic kernels, which are typical in random matrix theory. The application of the method to the distribution functions of the Gaussian unitary ensemble (GUE), in the bulk and the edge scaling limit, is discussed in detail. After extending the method to systems of integral operators, we evaluate the two-point correlation functions of the more recently studied Airy and Airy1 processes. Key words. Fredholm determinant, Nyström’s method, projection method, trace class operators, random
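The Nyström-type discretization described in this abstract is short enough to sketch. The following is an illustrative implementation (not the author's code): it evaluates the sine-kernel determinant det(I − K) on [0, s] with an n-point Gauss–Legendre rule, which in the bulk scaling limit of the GUE gives the probability E(0; s) of finding no eigenvalue in an interval of length s.

```python
import numpy as np

def fredholm_det_sine(s, n=40):
    """Nystrom approximation of det(I - K) for the sine kernel on [0, s].

    Quadrature rule: n-point Gauss-Legendre, mapped to [0, s].
    """
    t, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    x = 0.5 * s * (t + 1.0)                     # map nodes to [0, s]
    w = 0.5 * s * w
    # Sine kernel K(x, y) = sin(pi (x-y)) / (pi (x-y)); np.sinc handles x == y.
    K = np.sinc(x[:, None] - x[None, :])
    sw = np.sqrt(w)
    # Symmetrized discretization: det(I - W^(1/2) K W^(1/2))
    return np.linalg.det(np.eye(n) - sw[:, None] * K * sw[None, :])
```

For small s the gap probability behaves like 1 − s, and for this analytic kernel the approximation converges exponentially in n, in line with the error bound quoted in the abstract.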
On the Numerical Evaluation of Distributions in Random Matrix Theory: A Review
, 2010
Cited by 36 (5 self)
Abstract. In this paper we review and compare the numerical evaluation of those probability distributions in random matrix theory that are analytically represented in terms of Painlevé transcendents or Fredholm determinants. Concrete examples for the Gaussian and Laguerre (Wishart) β-ensembles and their various scaling limits are discussed. We argue that the numerical approximation of Fredholm determinants is the conceptually simpler and more efficient of the two approaches, easily generalized to the computation of joint probabilities and correlations. Having the means for extensive numerical explorations at hand, we discovered new and surprising determinantal formulae for the kth largest (or smallest) level in the edge scaling limits of the Orthogonal and Symplectic Ensembles; formulae that in turn led to improved numerical evaluations. The paper comes with a toolbox of Matlab functions that facilitates further mathematical experiments by the reader.
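As a concrete instance of the Fredholm-determinant approach favoured here, the following sketch (illustrative code, not the paper's Matlab toolbox; the semi-infinite interval (s, ∞) is simply truncated to length L rather than transformed by quadrature mappings as in the toolbox) evaluates the GUE Tracy–Widom distribution F2(s) = det(I − K_Ai) with the Airy kernel:

```python
import numpy as np
from scipy.special import airy

def tracy_widom_gue(s, n=80, L=12.0):
    """F2(s) = det(I - K_Ai) on (s, infinity), truncated to [s, s + L].

    Airy kernel: K(x, y) = (Ai(x) Ai'(y) - Ai'(x) Ai(y)) / (x - y),
    with the limit Ai'(x)^2 - x Ai(x)^2 on the diagonal.
    """
    t, w = np.polynomial.legendre.leggauss(n)
    x = s + 0.5 * L * (t + 1.0)
    w = 0.5 * L * w
    ai, aip, _, _ = airy(x)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = (ai[:, None] * aip[None, :] - aip[:, None] * ai[None, :]) \
            / (x[:, None] - x[None, :])
    np.fill_diagonal(K, aip**2 - x * ai**2)   # diagonal limit of the kernel
    sw = np.sqrt(w)
    return np.linalg.det(np.eye(n) - sw[:, None] * K * sw[None, :])
```

The crude truncation works because the Airy kernel decays superexponentially to the right; the joint probabilities and correlations mentioned in the abstract use block versions of the same construction.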
A fast and well-conditioned spectral method
 SIAM Review
Cited by 15 (3 self)
Abstract. A spectral method is developed for the direct solution of linear ordinary differential equations with variable coefficients. The method leads to matrices which are almost banded, and a numerical solver is presented that takes O(m²n) operations, where m is the number of Chebyshev points needed to resolve the coefficients of the differential operator and n is the number of Chebyshev coefficients needed to resolve the solution to the differential equation. We prove stability of the method by relating it to a diagonally preconditioned system which has a bounded condition number, in a suitable norm. For Dirichlet boundary conditions, this implies stability in the standard 2-norm. An adaptive QR factorization is developed to efficiently solve the resulting linear system and automatically choose the optimal number of Chebyshev coefficients needed to represent the solution. The resulting algorithm can efficiently and reliably solve for solutions that require as many as a million unknowns.
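A first-order instance of such a coefficient-based spectral method is easy to sketch (illustrative code, not the paper's implementation): the Chebyshev-T differentiation and T→U basis-conversion operators are sparse, and bordering with a boundary row yields the banded-plus-boundary-row structure described above. Here we solve u′ − u = 0, u(−1) = 1, whose solution is e^(x+1), using a dense solve for brevity where the paper uses an adaptive QR factorization.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

def solve_first_order(n=20):
    """Coefficient-based spectral solve of u' - u = 0, u(-1) = 1.

    Unknowns are Chebyshev-T coefficients a_0..a_{n-1}. The identities
    d/dx T_k = k U_{k-1} and T_0 = U_0, T_1 = U_1/2, T_k = (U_k - U_{k-2})/2
    make differentiation and basis conversion sparse operators.
    """
    D = np.zeros((n, n))           # T-coefficients of u -> U-coefficients of u'
    for k in range(1, n):
        D[k - 1, k] = k
    S = np.zeros((n, n))           # T-coefficients of u -> U-coefficients of u
    S[0, 0] = 1.0
    for k in range(1, n):
        S[k, k] = 0.5
    for k in range(2, n):
        S[k - 2, k] = -0.5
    bc = (-1.0) ** np.arange(n)    # boundary row: T_k(-1) = (-1)^k
    A = np.vstack([bc, (D - S)[: n - 1]])
    rhs = np.zeros(n)
    rhs[0] = 1.0                   # u(-1) = 1
    return np.linalg.solve(A, rhs)
```

The exponential accuracy of the Chebyshev coefficients means n = 20 already resolves the solution to machine precision.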
Automatic Fréchet differentiation for the numerical solution of boundary-value problems
 ACM Transactions on Mathematical Software. ISSN 0098-3500 (print), 1557-7295 (electronic)
, 2012
Cited by 6 (1 self)
A new solver for nonlinear boundary-value problems (BVPs) in MATLAB is presented, based on the Chebfun software system for representing functions and operators automatically as numerical objects. The solver implements Newton’s method in function space, where instead of the usual Jacobian matrices, the derivatives involved are Fréchet derivatives. A major novelty of this approach is the application of automatic differentiation (AD) techniques to compute the operator-valued Fréchet derivatives in the continuous context. Other novelties include the use of anonymous functions and numbering of each variable to enable a recursive, delayed evaluation of derivatives with forward mode AD. The AD techniques are applied within a new Chebfun class called chebop which allows users to set up and solve nonlinear BVPs, both scalar and systems of coupled equations, in a few lines of code, using the “nonlinear backslash” operator (\). This framework enables one to study the behaviour of Newton’s method in function space.
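The function-space Newton iteration can be imitated in a purely discrete setting. The following is an illustrative finite-difference analogue, not Chebfun's chebop: each step solves a linearized BVP in which the Jacobian plays the role of the Fréchet derivative. The example problem u″ = 6u², u(0) = 1, u(1) = 1/4 has the exact solution u(x) = 1/(1+x)².

```python
import numpy as np

def newton_bvp(n=200, tol=1e-12, max_iter=25):
    """Newton's method for u'' = 6 u^2, u(0) = 1, u(1) = 1/4.

    The residual is F(u) = u'' - 6 u^2; its linearization at u is the
    operator v -> v'' - 12 u v, the discrete analogue of the Frechet
    derivative used by the solver described above.
    """
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    xi = x[1:-1]                         # interior grid points
    # Second-difference matrix acting on interior values
    D2 = (np.diag(-2.0 * np.ones(n - 1))
          + np.diag(np.ones(n - 2), 1)
          + np.diag(np.ones(n - 2), -1)) / h**2
    ua, ub = 1.0, 0.25                   # boundary values
    b = np.zeros(n - 1)
    b[0] = ua / h**2                     # boundary contributions to u''
    b[-1] = ub / h**2
    u = ua + (ub - ua) * xi              # initial guess: straight line
    for _ in range(max_iter):
        F = D2 @ u + b - 6.0 * u**2      # residual
        J = D2 - np.diag(12.0 * u)       # Jacobian ("Frechet derivative")
        du = np.linalg.solve(J, -F)
        u += du
        if np.max(np.abs(du)) < tol:
            break
    return xi, u
```

In the chebop setting the same iteration runs on function objects, with the Fréchet derivative assembled by the AD machinery rather than by hand as here.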
On The Use Of Conformal Maps To Speed Up Numerical Computations
, 2009
Cited by 5 (0 self)
New numerical methods for quadrature, the solution of differential equations, and function approximation are proposed, each based upon the use of conformal maps to transplant existing polynomial-based methods. Well-established methods such as Gauss quadrature and the Fourier spectral method are altered using a change-of-variable approach to exploit extra analyticity in the underlying functions and improve rates of geometric convergence. Importantly, this requires only minor alterations to existing codes, and the precise theorems governing the performance of the polynomial-based methods are easily extended. The types of maps chosen to define the new methods fall into two categories, which form the two sections of this thesis. The first considers maps for ‘general’ functions, and proposes a solution for the well-known endpoint clustering of grids in methods based upon algebraic polynomials, which can ‘waste’ a factor of π/2 in each spatial direction. This results in quadrature methods that are provably 50% faster than Gauss quadrature for functions analytic in an ε ...
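The transplantation mechanism is mechanically simple. The sketch below is illustrative only, using an arcsine change of variables in the spirit of the maps discussed (the thesis derives more refined conformal maps): replace ∫ f(x) dx by ∫ f(g(t)) g′(t) dt and apply ordinary Gauss–Legendre quadrature in t.

```python
import numpy as np

def transplanted_gauss(f, n, alpha=0.9):
    """Gauss-Legendre quadrature transplanted by g(t) = asin(alpha t)/asin(alpha).

    The map pushes nodes away from the endpoints, countering the endpoint
    clustering mentioned above. Any smooth map with g(-1) = -1, g(1) = 1
    preserves the value of the integral:  int f = int f(g(t)) g'(t) dt.
    """
    t, w = np.polynomial.legendre.leggauss(n)
    m = np.arcsin(alpha)
    g = np.arcsin(alpha * t) / m                           # mapped nodes
    gp = alpha / (m * np.sqrt(1.0 - (alpha * t) ** 2))     # g'(t)
    return np.sum(w * gp * f(g))
```

Only the nodes and weights change; the existing quadrature code is otherwise untouched, which is the "minor alterations" point made in the abstract.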
The automatic solution of partial differential equations using a global spectral method, submitted
Cited by 2 (1 self)
Abstract. A spectral method for solving linear partial differential equations (PDEs) with variable coefficients and general boundary conditions defined on rectangular domains is described, based on separable representations of partial differential operators and the one-dimensional ultraspherical spectral method. If a partial differential operator is of splitting rank 2, such as the operator associated with Poisson or Helmholtz, the corresponding PDE is solved via a generalized Sylvester matrix equation, and a bivariate polynomial approximation of the solution of degree (n_x, n_y) is computed in O((n_x n_y)^(3/2)) operations. Partial differential operators of splitting rank ≥ 3 are solved via a linear system involving a block-banded matrix in O(min(n_x^3 n_y, n_x n_y^3)) operations. Numerical examples demonstrate the applicability of our 2D spectral method to a broad class of PDEs, which includes elliptic and dispersive time-evolution equations. The resulting PDE solver is written in Matlab and is publicly available as part of Chebfun. It can resolve solutions requiring over a million degrees of freedom in under 60 seconds. An experimental implementation in the Julia language can currently perform the same solve in 10 seconds. Key words. Chebyshev, ultraspherical, partial differential equation, spectral method AMS subject classifications. 33A65, 35C11, 65N35
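The matrix-equation structure for splitting-rank-2 operators can be seen already with a second-order finite-difference discretization (an illustrative stand-in for the ultraspherical discretization used in the paper): on a tensor grid, the Poisson equation u_xx + u_yy = f becomes the Sylvester equation D2·U + U·D2ᵀ = F.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def poisson_square(n=64):
    """Solve u_xx + u_yy = f on (0,1)^2 with u = 0 on the boundary.

    On a tensor grid the discretization is the Sylvester matrix equation
    D2 @ U + U @ D2.T = F, mirroring the splitting-rank-2 structure
    exploited in the paper (which uses ultraspherical matrices instead).
    """
    h = 1.0 / n
    xi = np.linspace(h, 1.0 - h, n - 1)          # interior grid points
    D2 = (np.diag(-2.0 * np.ones(n - 1))
          + np.diag(np.ones(n - 2), 1)
          + np.diag(np.ones(n - 2), -1)) / h**2
    X, Y = np.meshgrid(xi, xi, indexing="ij")
    # Manufactured solution u = sin(pi x) sin(pi y), so f = -2 pi^2 u
    exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
    F = -2.0 * np.pi**2 * exact
    U = solve_sylvester(D2, D2.T, F)
    err = np.max(np.abs(U - exact))
    return U, err
```

Solving the Sylvester equation costs far less than forming and factoring the full (n−1)² × (n−1)² Kronecker system, which is the point of the splitting-rank reduction.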
A well-conditioned collocation method using pseudospectral integration matrix. arXiv:1305.2041
, 2013
Cited by 2 (1 self)
Abstract. In this paper, a well-conditioned collocation method is constructed for solving general pth-order linear differential equations with various types of boundary conditions. Based on a suitable Birkhoff interpolation, we obtain a new set of polynomial basis functions that results in a collocation scheme with two important features: the condition number of the linear system is independent of the number of collocation points, and the underlying boundary conditions are imposed exactly. Moreover, the new basis leads to an exact inverse of the pseudospectral differentiation matrix (PSDM) of the highest derivative (at interior collocation points), which is therefore called the pseudospectral integration matrix (PSIM). We show that PSIM produces the optimal integration preconditioner, and stable collocation solutions with even thousands of points.
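The conditioning issue this paper addresses is easy to exhibit (illustrative code; the PSIM construction itself is more involved): the condition number of the square Chebyshev second-derivative collocation matrix grows rapidly with N, roughly like O(N⁴) for second-order problems, which is what an integration-based formulation avoids. Below is the standard Chebyshev differentiation matrix (Trefethen's formulation) used to demonstrate the growth.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on x_j = cos(j pi / N)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal: negative row sums
    return D, x

# Dirichlet second-derivative collocation matrix: interior block of D^2.
# Its condition number grows like O(N^4), motivating the PSIM preconditioner.
```

Doubling N multiplies the condition number of the interior block of D² by roughly 16, whereas the Birkhoff-based scheme above keeps it bounded independently of N.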
Computing with functions in two dimensions
, 2014
Cited by 1 (0 self)
New numerical methods are proposed for computing with smooth scalar- and vector-valued functions of two variables defined on rectangular domains. Functions are approximated to essentially machine precision by an iterative variant of Gaussian elimination that constructs near-optimal low-rank approximations. Operations such as integration, differentiation, and function evaluation are particularly efficient. Explicit convergence rates are shown for the singular values of differentiable and separately analytic functions, and examples are given to demonstrate some paradoxical features of low-rank approximation theory. Analogues of QR, LU, and Cholesky factorizations are introduced for matrices that are continuous in one or both directions, deriving a continuous linear algebra. New notions of triangular structures are proposed and the convergence of the infinite series associated with these factorizations is proved under certain smoothness assumptions.
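The iterative variant of Gaussian elimination can be sketched on a fixed sample grid (illustrative code; the actual algorithm works adaptively with function objects rather than matrices): each step picks the pivot of largest absolute residual and subtracts the rank-1 cross through it.

```python
import numpy as np

def ge_low_rank(F, rank):
    """Gaussian elimination with complete pivoting on a sampled function.

    Each step selects the entry of largest absolute value in the residual
    and subtracts the rank-1 outer product of its row and column, giving a
    rank-`rank` approximation of F.
    """
    R = F.copy()                       # running residual
    approx = np.zeros_like(F)
    for _ in range(rank):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        piv = R[i, j]
        if piv == 0.0:
            break                      # exact rank reached
        update = np.outer(R[:, j], R[i, :]) / piv   # rank-1 cross
        approx += update
        R -= update
    return approx, R
```

For a function of exact low rank, such as cos(x + y) = cos x cos y − sin x sin y (rank 2), the residual vanishes to rounding error after that many steps.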
A spatially adaptive iterative method for a class of nonlinear operator eigenproblems (submitted to ETNA)
, 2012
Cited by 1 (1 self)
Abstract. We present a new algorithm for the iterative solution of nonlinear operator eigenvalue problems arising from partial differential equations (PDEs). This algorithm combines automatic spatial resolution of linear operators with the infinite Arnoldi method for nonlinear matrix eigenproblems proposed in [19]. The iterates in this infinite Arnoldi method are functions, and each iteration requires the solution of an inhomogeneous differential equation. This formulation is independent of the spatial representation of the functions, which allows us to employ a dynamic representation with an accuracy of about the level of machine precision at each iteration, similar to what is done in the Chebfun system [3] with its chebop functionality [12], although our function representation is entirely based on coefficients instead of function values. Our approach also allows for nonlinearities in the boundary conditions of the PDE. The algorithm is illustrated with several examples, e.g., the study of eigenvalues of a vibrating string with delayed boundary feedback control.
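The classical finite-dimensional Arnoldi iteration that the infinite Arnoldi method generalizes is compact enough to sketch (standard textbook algorithm, not the function-valued variant of the paper): it builds an orthonormal Krylov basis Q and a small Hessenberg matrix H whose eigenvalues (Ritz values) approximate extremal eigenvalues of A.

```python
import numpy as np

def arnoldi(A, b, k):
    """k steps of the Arnoldi iteration.

    Returns Q (n x (k+1), orthonormal Krylov basis) and H ((k+1) x k,
    Hessenberg) satisfying the Arnoldi relation A @ Q[:, :k] = Q @ H.
    """
    n = A.shape[0]
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A @ Q[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-14:
            break                              # exact invariant subspace found
        Q[:, j + 1] = v / H[j + 1, j]
    return Q, H
```

In the infinite Arnoldi method the vectors Q[:, j] are replaced by adaptively resolved functions and the matrix-vector product by the solution of an inhomogeneous differential equation, but the Gram–Schmidt recursion above is unchanged in structure.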