Results 1–10 of 12
Fast Fourier transforms for nonequispaced data: A tutorial
, 2000
Abstract

Cited by 111 (33 self)
In this section, we consider approximative methods for the fast computation of multivariate discrete Fourier transforms for nonequispaced data (NDFT) in the time domain and in the frequency domain. In particular, we are interested in the approximation error as a function of the arithmetic complexity of the algorithm. We discuss the robustness of NDFT algorithms with respect to roundoff errors and apply NDFT algorithms to the fast computation of Bessel transforms.
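As a concrete point of reference for this entry, the quadratic-cost transform that the fast algorithms approximate can be written down directly. The sketch below (our own illustration, not from the paper; names and sign conventions are assumptions) evaluates a one-dimensional NDFT by explicit summation and checks it against an ordinary FFT at equispaced nodes:

```python
import numpy as np

def ndft(c, x):
    """Direct nonequispaced discrete Fourier transform (NDFT):
    evaluate f(x_j) = sum_{k=-N/2}^{N/2-1} c_k exp(-2*pi*i*k*x_j)
    at arbitrary nodes x_j.  Costs O(M*N) operations, which the fast
    NDFT algorithms replace by an O(N log N) approximation."""
    N = len(c)
    k = np.arange(-(N // 2), N // 2)              # centered frequency indices
    # M x N matrix of complex exponentials, then a matrix-vector product
    return np.exp(-2j * np.pi * np.outer(x, k)) @ c

# Sanity check: at equispaced nodes x_j = j/N the NDFT reduces to an
# ordinary DFT of the coefficient vector in standard frequency order.
N = 16
rng = np.random.default_rng(0)
c = rng.standard_normal(N) + 1j * rng.standard_normal(N)
x = np.arange(N) / N                              # equispaced nodes
err = np.max(np.abs(ndft(c, x) - np.fft.fft(np.fft.ifftshift(c))))
```

The approximation error of the fast algorithms, as a function of their arithmetic cost, is measured against exactly this direct sum.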
FFTs for the 2-Sphere – Improvements and Variations
 Journal of Fourier Analysis and Applications
, 2003
Abstract

Cited by 104 (2 self)
Earlier work by Driscoll and Healy [18] has produced an efficient algorithm for computing the Fourier transform of bandlimited functions on the 2-sphere. In this article we present a reformulation and variation of the original algorithm which results in a greatly improved inverse transform, and a consequent improved convolution algorithm for such functions. All require at most O(N log² N) operations where N is the number of sample points. We also address implementation considerations and give heuristics for allowing reliable and computationally efficient floating-point implementations of slightly modified algorithms. These claims are supported by extensive numerical experiments from our implementation in C on DEC, HP, SGI and Linux Pentium platforms. These results indicate that variations of the algorithm are both reliable and efficient for a large range of useful problem sizes. Performance appears to be architecture-dependent. The article concludes with a brief discussion of a few potential applications.
Generalized FFTs – A Survey of Some Recent Results
, 1995
Abstract

Cited by 51 (8 self)
In this paper we survey some recent work directed towards generalizing the fast Fourier transform (FFT). We work primarily from the point of view of group representation theory. In this setting the classical FFT can be viewed as a family of efficient algorithms for computing the Fourier transform of either a function defined on a finite abelian group, or a bandlimited function on a compact abelian group. We discuss generalizations of the FFT to arbitrary finite groups and compact Lie groups.
Computational Complexity and Numerical Stability
 SIAM J. Comput
, 1975
Abstract

Cited by 9 (0 self)
Limiting consideration to algorithms satisfying various numerical stability requirements may change lower bounds for computational complexity and/or make lower bounds easier to prove. We will show that, under a sufficiently strong restriction upon numerical stability, any algorithm for multiplying two n × n matrices using only +, −, and × requires at least n³ multiplications. We conclude with a survey of results concerning the numerical stability of several algorithms which have been considered by complexity theorists.
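For orientation, the algorithm that attains the n³ bound is the classical inner-product method, which uses only additions and multiplications and is the numerically stable baseline against which faster algorithms are compared. A minimal sketch (our own illustration, not from the paper) that also counts the scalar multiplications:

```python
import numpy as np

def classical_matmul(A, B):
    """Classical O(n^3) matrix product using only + and * on scalars.
    Returns the product together with the number of scalar
    multiplications performed (exactly n^3)."""
    n = A.shape[0]
    C = np.zeros((n, n))
    mults = 0
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i, k] * B[k, j]   # one scalar multiplication
                mults += 1
            C[i, j] = s
    return C, mults

rng = np.random.default_rng(1)
A, B = rng.standard_normal((2, 5, 5))
C, mults = classical_matmul(A, B)      # mults == 5**3 == 125
```

Fast methods such as Strassen's use fewer multiplications but weaker (normwise rather than componentwise) stability guarantees, which is the trade-off the paper's restriction formalizes.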
On the Precision Attainable with Various Floating-Point Number Systems
 IEEE Transactions on Computers
, 1973
Abstract

Cited by 8 (3 self)
For scientific computations on a digital computer the set of real numbers is usually approximated by a finite set F of “floating-point” numbers. We compare the numerical accuracy possible with different choices of F having approximately the same range and requiring the same word length. In particular, we compare different choices of base (or radix) in the usual floating-point systems. The emphasis is on the choice of F, not on the details of the number representation or the arithmetic, but both rounded and truncated arithmetic are considered. Theoretical results are given, and some simulations of typical floating-point computations (forming sums, solving systems of linear equations, finding eigenvalues) are described. If the leading fraction bit of a normalized base-2 number is not stored explicitly (saving a bit), and the criterion is to minimise the mean square roundoff error, then base 2 is best. If unnormalized numbers are allowed, so the first bit must be stored explicitly, then base 4 (or sometimes base 8) is the best of the usual systems. Index Terms: Base, floating-point arithmetic, radix, representation error, rms error, rounding error, simulation.
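The base comparison can be mimicked with a small simulation (our own illustration, not the paper's experiment; parameter choices are assumptions): round samples to t significant base-β digits and measure the rms relative representation error when the same number of fraction bits is spent on base 2 versus base 16.

```python
import numpy as np

def round_to_base(x, beta, t):
    """Round positive x to t significant base-beta digits
    (round-to-nearest).  Simulates the fraction of a floating-point
    system with base beta; exponent range is ignored."""
    e = np.floor(np.log(x) / np.log(beta))   # base-beta exponent of x
    q = beta ** (e - t + 1)                  # spacing of representable numbers
    return np.round(x / q) * q

# Spend the same 24 fraction bits two ways: base 2 with 24 digits
# versus base 16 with 6 digits.  Sample log-uniformly over 8 octaves.
rng = np.random.default_rng(2)
x = np.exp(rng.uniform(0.0, np.log(2.0) * 8, 100000))

def rms_rel_err(beta, t):
    return np.sqrt(np.mean(((round_to_base(x, beta, t) - x) / x) ** 2))

e2 = rms_rel_err(2, 24)    # base 2, 24 binary digits
e16 = rms_rel_err(16, 6)   # base 16, 6 hexadecimal digits
```

The larger "wobble" of base 16 (its relative spacing varies by a factor of 16 across a hexade) makes e16 noticeably larger than e2, in line with the paper's conclusion that base 2 minimizes the mean square representation error at equal word length.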
Componentwise Error Analysis for FFTs with Applications to Fast Helmholtz Solvers
 Numer. Algorithms, 12:65
, 1991
Abstract

Cited by 5 (1 self)
We analyze the stability of the Cooley–Tukey algorithm for the fast Fourier transform of order n = 2^k, and of its inverse, by using componentwise error analysis. We prove that the components of the roundoff errors are linearly related to the result in exact arithmetic. We describe the structure of the error matrix and we give optimal bounds for the total error in the infinity norm and in the L2 norm. The theoretical upper bounds are based on a 'worst case' analysis where all the rounding errors work in the same direction. We show by means of a statistical error analysis that in realistic cases the max-norm error grows asymptotically like the logarithm of the sequence length times the machine precision. Finally, we use the previous results to introduce tight upper bounds on the algorithmic error for some of the classical fast Helmholtz equation solvers based on the fast Fourier transform and for some algorithms used in the study of turbulence.
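The statistical estimate (max-norm error growing roughly like log2(n) times the unit roundoff) is easy to observe experimentally. The sketch below (our own, not the paper's code) runs a recursive radix-2 Cooley–Tukey FFT in single precision and compares it against a double-precision reference:

```python
import numpy as np

def fft_radix2(x):
    """Recursive Cooley-Tukey radix-2 FFT carried out in complex64,
    so its roundoff is visible against a double-precision reference."""
    x = np.asarray(x, dtype=np.complex64)
    n = len(x)
    if n == 1:
        return x
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    k = np.arange(n // 2)
    w = np.exp(-2j * np.pi * k / n).astype(np.complex64)  # twiddle factors
    t = w * odd
    return np.concatenate([even + t, even - t])

rng = np.random.default_rng(3)
u = np.finfo(np.float32).eps          # single-precision unit roundoff
ratios = []
for k in range(4, 11):                # n = 16 ... 1024
    n = 2 ** k
    x = rng.standard_normal(n).astype(np.float32)
    ref = np.fft.fft(x.astype(np.float64))     # double-precision reference
    err = np.max(np.abs(fft_radix2(x).astype(np.complex128) - ref))
    # relative max-norm error, scaled by u * log2(n)
    ratios.append(err / (np.max(np.abs(ref)) * u * np.log2(n)))
```

The scaled ratios stay of modest size as n grows, consistent with a max-norm error of order u·log2(n) rather than the pessimistic worst-case bound.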
Numerical stability of fast trigonometric transforms – a worst case study
 J. Concrete Appl. Math
, 2003
Abstract

Cited by 4 (2 self)
This paper presents some new results on numerical stability for various fast trigonometric transforms. In a worst case study, we consider the numerical stability of the classical fast Fourier transform (FFT) with respect to different precomputation methods for the involved twiddle factors and show the strong influence of precomputation errors on the numerical stability of the FFT. The examinations are extended to fast algorithms for the computation of discrete cosine and sine transforms and to efficient computations of discrete Fourier transforms for nonequispaced data. Numerical tests confirm the theoretical estimates of numerical stability.
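The influence of twiddle-factor precomputation can be seen with a small experiment (our own illustration, not from the paper): compare twiddle factors computed by a direct exponential call for each index against those generated by the repeated-multiplication recurrence w_k = w_{k-1} · w_1, whose rounding errors accumulate along the recurrence.

```python
import numpy as np

n = 2 ** 20
k = np.arange(n)
u = np.finfo(np.float64).eps

# Method 1: direct call, w_k = exp(-2*pi*i*k/n), accurate to O(u) each.
direct = np.exp(-2j * np.pi * k / n)

# Method 2: repeated multiplication from a single precomputed root of
# unity; each step compounds the rounding error of the previous one.
w1 = np.exp(-2j * np.pi / n)
repeated = np.cumprod(np.concatenate(([1.0 + 0j], np.full(n - 1, w1))))

# Maximum deviation of the recurrence from the directly computed values.
drift = np.max(np.abs(repeated - direct))
```

The drift ends up several orders of magnitude above the unit roundoff, which is why the choice of precomputation method shows up directly in the stability constants of the FFT.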
Computational Harmonic Analysis for Tensor Fields on the Two-Sphere
Abstract

Cited by 3 (2 self)
In this paper we describe algorithms for the numerical computation of Fourier transforms of tensor fields on the two-sphere, S². These algorithms reduce the computation of an expansion in tensor spherical harmonics to expansions in scalar spherical harmonics, and hence can take advantage of recent improvements in the efficiency of computation of scalar spherical harmonic transforms. 1. Introduction. The calculation of Fourier expansions for vector fields, and more generally, tensor fields on the two-sphere has been identified as an important computational problem in areas such as fluid dynamics [22] and global circulation modeling [61]. Other applications include the analysis of cosmic microwave background radiation [74] and models of stress propagation through the earth [24]. In this paper we show how the computation of expansions in tensor spherical harmonics may be reduced to a small number of scalar spherical harmonic transforms. Over the past twenty years a large body of work has ...
On the Precision Attainable with Various Floating-Point Number Systems
Abstract
1. Introduction. A real number x is usually approximated in a digital computer by an element fl(x) of a finite set F of “floating-point” numbers. We regard the elements of F as exactly representable real numbers, and take fl(x) as the floating-point number closest to x. The definition of “closest”, rules for breaking ties, and the possibility of truncating instead of rounding are discussed later. We restrict our attention to binary computers in which floating-point numbers are represented in a word (or multiple word) of fixed length w bits, using some convenient (possibly redundant) code. Usually F is a set of numbers of the form ...
Mathematics of Computation
 Centre, Simon Fraser University
Abstract
We extend the work of Richard Crandall et al. to demonstrate how the Discrete Weighted Transform (DWT) can be applied to speed up multiplication modulo any number of the form a · b^n ± c where a and c are small. In particular this allows rapid computation modulo numbers of the form k · 2^n ± 1.
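The simplest instance of the DWT idea, with trivial (unit) weights, is multiplication modulo base^m − 1: reduction modulo base^m − 1 turns the linear convolution of digit vectors into a cyclic convolution, which an FFT computes directly. A hedged sketch (our own, not from the paper; parameters are kept small so double-precision FFTs recover the exact integer convolution):

```python
import numpy as np

def mulmod_mersenne(a, b):
    """Compute a*b mod 2^64 - 1 (inputs assumed < 2^64) via an FFT
    cyclic convolution of base-256 digit vectors.  Since 256^8 == 2^64
    and 256^8 = 1 (mod 2^64 - 1), the wrap-around of the cyclic
    convolution performs the modular reduction for free."""
    m_digits, base = 8, 256
    mod = base ** m_digits - 1                 # 2^64 - 1
    da = np.array([(a >> (8 * i)) & 0xFF for i in range(m_digits)], dtype=float)
    db = np.array([(b >> (8 * i)) & 0xFF for i in range(m_digits)], dtype=float)
    # cyclic convolution == pointwise product in the Fourier domain
    c = np.fft.ifft(np.fft.fft(da) * np.fft.fft(db)).real
    digits = [int(round(v)) for v in c]        # coefficients <= 8*255^2, exact
    total = sum(d << (8 * i) for i, d in enumerate(digits))
    return total % mod                         # propagate carries, reduce
```

The weighted transform generalizes this trick beyond pure Mersenne-form moduli by pre- and post-multiplying the digit vectors by a weight sequence, which is the extension the paper develops.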