Results 1–10 of 35
Fast Discrete Curvelet Transforms
, 2005
Abstract

Cited by 123 (9 self)
This paper describes two digital implementations of a new mathematical transform, namely, the second generation curvelet transform [12, 10] in two and three dimensions. The first digital transformation is based on unequally spaced fast Fourier transforms (USFFT), while the second is based on the wrapping of specially selected Fourier samples. The two implementations essentially differ by the choice of spatial grid used to translate curvelets at each scale and angle. Both digital transformations return a table of digital curvelet coefficients indexed by a scale parameter, an orientation parameter, and a spatial location parameter. Both implementations are fast in the sense that they run in O(n^2 log n) flops for n-by-n Cartesian arrays; in addition, they are invertible, with rapid inversion algorithms of about the same complexity. Our digital transformations improve upon earlier implementations (based upon the first generation of curvelets) in that they are conceptually simpler, faster, and far less redundant. The software CurveLab, which implements both transforms presented in this paper, is available at
Fast Fourier transforms for nonequispaced data: A tutorial
, 2000
Abstract

Cited by 116 (32 self)
In this section, we consider approximate methods for the fast computation of multivariate discrete Fourier transforms for nonequispaced data (NDFT) in the time domain and in the frequency domain. In particular, we are interested in the approximation error as a function of the arithmetic complexity of the algorithm. We discuss the robustness of NDFT algorithms with respect to roundoff errors and apply NDFT algorithms to the fast computation of Bessel transforms.
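The NDFT that these methods approximate can be stated directly in code. The following is a minimal reference sketch (the function name, node layout, and sign convention are illustrative assumptions, not taken from the paper); it is the quadratic-cost baseline that fast NDFT algorithms accelerate:

```python
import numpy as np

def ndft(coeffs, nodes):
    """Direct nonequispaced DFT: f(x_j) = sum_k c_k * exp(-2*pi*i*k*x_j).

    coeffs: length-N array of Fourier coefficients c_k, k = -N/2 .. N/2-1
    nodes:  arbitrary sample locations x_j on the unit torus (period 1)
    Cost is O(N*M) for N coefficients and M nodes -- the quadratic
    baseline that fast NDFT/NUFFT algorithms improve upon.
    """
    N = len(coeffs)
    k = np.arange(-N // 2, N // 2)            # frequency indices
    # One row per node, one column per frequency, then sum over k
    return np.exp(-2j * np.pi * np.outer(nodes, k)) @ coeffs
```

When the nodes happen to be equispaced, x_j = j/N, the result coincides with an ordinary FFT of the suitably reordered coefficients, which gives a quick sanity check.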
Nonuniform Fast Fourier Transforms Using Min-Max Interpolation
 IEEE Trans. Signal Process
, 2003
Abstract

Cited by 88 (15 self)
The FFT is used widely in signal processing for efficient computation of the Fourier transform (FT) of finite-length signals over a set of uniformly spaced frequency locations. However, in many applications, one requires nonuniform sampling in the frequency domain, i.e., a nonuniform FT. Several papers have described fast approximations for the nonuniform FT based on interpolating an oversampled FFT. This paper presents an interpolation method for the nonuniform FT that is optimal in the min-max sense of minimizing the worst-case approximation error over all signals of unit norm. The proposed method easily generalizes to multidimensional signals. Numerical results show that the min-max approach provides substantially lower approximation errors than conventional interpolation methods. The min-max criterion is also useful for optimizing the parameters of interpolation kernels such as the Kaiser-Bessel function.
Accelerating the nonuniform Fast Fourier Transform
 SIAM REVIEW
, 2004
Abstract

Cited by 35 (2 self)
The nonequispaced Fourier transform arises in a variety of application areas, from medical imaging to radio astronomy to the numerical solution of partial differential equations. In a typical problem, one is given an irregular sampling of N data in the frequency domain and is interested in reconstructing the corresponding function in the physical domain. When the sampling is uniform, the fast Fourier transform (FFT) allows this calculation to be computed in O(N log N) operations rather than O(N^2) operations. Unfortunately, when the sampling is nonuniform, the FFT does not apply. Over the last few years, a number of algorithms have been developed to overcome this limitation and are often referred to as nonuniform FFTs (NUFFTs). These rely on a mixture of interpolation and the judicious use of the FFT on an oversampled grid [A. Dutt and V. Rokhlin, SIAM J. Sci. Comput., 14 (1993), pp. 1368–1383]. In this paper, we observe that one of the standard interpolation or “gridding” schemes, based on Gaussians, can be accelerated by a significant factor without precomputation and storage of the interpolation weights. This is of particular value in two- and three-dimensional settings, saving either 10^d N in storage in d dimensions or a factor of about 5–10 in CPU time (independent of dimension).
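The Gaussian "gridding" scheme mentioned above can be sketched in a few lines: spread each nonuniform sample onto an oversampled grid with a truncated Gaussian, take one FFT, then divide out the Gaussian's Fourier transform. This is a minimal illustration of the classical scheme, not the paper's accelerated version (which avoids recomputing the Gaussian weights); the parameter choices Msp, R, and tau follow commonly used defaults and are assumptions:

```python
import numpy as np

def nufft1(x, c, N, Msp=12, R=2):
    """Type-1 NUFFT via Gaussian gridding (Dutt-Rokhlin style sketch).

    Computes F_k = sum_j c_j * exp(-1j*k*x_j) for k = -N/2 .. N/2-1,
    given nonuniform points x in [0, 2*pi). Msp is the spreading
    half-width and R the oversampling ratio (illustrative defaults).
    """
    Mr = R * N                                  # oversampled grid size
    h = 2 * np.pi / Mr                          # grid spacing
    tau = np.pi * Msp / (N * N * R * (R - 0.5)) # Gaussian width parameter
    grid = np.zeros(Mr, dtype=complex)
    for xj, cj in zip(x, c):                    # spread onto nearby grid points
        m0 = int(round(xj / h))
        for m in range(m0 - Msp, m0 + Msp + 1):
            grid[m % Mr] += cj * np.exp(-((xj - m * h) ** 2) / (4 * tau))
    G = np.fft.fft(grid)                        # G[k] = sum_m grid[m] e^{-2pi i k m / Mr}
    k = np.arange(-N // 2, N // 2)
    # Deconvolve the Gaussian: F(k) = sqrt(pi/tau) * exp(k^2 tau) * G[k] / Mr
    return np.sqrt(np.pi / tau) * np.exp(k * k * tau) * G[k % Mr] / Mr
```

With these defaults the result matches the direct O(NM) sum to roughly machine-level accuracy for moderate N, at O(Mr log Mr + M*Msp) cost.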
Fast and accurate Polar Fourier transform
 Appl. Comput. Harmon. Anal.
, 2006
Abstract

Cited by 18 (1 self)
In a wide range of applied problems of 2D and 3D imaging, a continuous formulation of the problem places great emphasis on obtaining and manipulating the Fourier transform in Polar coordinates. However, the translation of continuum ideas into practical work with data sampled on a Cartesian grid is problematic. In this article we develop a fast, high-accuracy Polar FFT. For a given two-dimensional signal of size N × N, the proposed algorithm’s complexity is O(N^2 log N), just as for a Cartesian 2D FFT. A special feature of our approach is that it involves only 1D equispaced FFTs and 1D interpolations. A central tool in our method is the pseudo-Polar FFT, an FFT in which the evaluation frequencies lie in an oversampled set of non-angularly equispaced points. We describe the concept of the pseudo-Polar domain, including fast forward and inverse transforms. For those interested primarily in Polar FFTs, the pseudo-Polar FFT plays the role of a halfway point: a nearly-Polar system from which conversion to Polar coordinates uses processes relying purely on 1D FFTs and interpolation operations. We describe the conversion process and give an error analysis of it. We compare accuracy results obtained by a Cartesian-based unequally-sampled FFT method to ours, both algorithms using a small-support interpolation and no precompensation, and show a marked advantage for the use of the pseudo-Polar initial grid.
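What a Polar FFT computes can be made concrete by the slow exact reference: evaluating the 2D DFT of an image directly at polar frequency points, at O(N^2) cost per point. This sketch (names and grid conventions are illustrative assumptions) is the computation that a fast O(N^2 log N) Polar FFT approximates:

```python
import numpy as np

def polar_dft(f, radii, thetas):
    """Direct 2D DFT of an N x N image f on a polar frequency grid:
    F(w) = sum_{n,m} f[n,m] * exp(-2j*pi*(n*w1 + m*w2)/N)
    at w = r*(cos t, sin t). Exact but slow -- the reference against
    which a fast Polar FFT would be validated."""
    N = f.shape[0]
    n = np.arange(N)
    out = np.empty((len(radii), len(thetas)), dtype=complex)
    for i, r in enumerate(radii):
        for j, t in enumerate(thetas):
            w1, w2 = r * np.cos(t), r * np.sin(t)
            # phase[a, b] = a*w1 + b*w2 for pixel (a, b)
            phase = np.add.outer(n * w1, n * w2)
            out[i, j] = np.exp(-2j * np.pi * phase / N).ravel() @ f.ravel()
    return out
```

At integer radii along the axes this reduces to ordinary 2D FFT bins, which provides a direct consistency check.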
Combinatorial sublinear-time Fourier algorithms, Submitted. Available at http://www.ima.umn.edu/∼iwen/index.html
, 2008
Abstract

Cited by 16 (5 self)
We study the problem of estimating the best k-term Fourier representation for a given frequency-sparse signal (i.e., vector) A of length N ≫ k. More explicitly, we investigate how to deterministically identify k of the largest-magnitude frequencies of Â, and estimate their coefficients, in polynomial(k, log N) time. Randomized sublinear-time algorithms which have a small (controllable) probability of failure for each processed signal exist for solving this problem [24, 25]. In this paper we develop the first known deterministic sublinear-time sparse Fourier transform algorithm which is guaranteed to produce accurate results. As an added bonus, a simple relaxation of our deterministic Fourier result leads to a new Monte Carlo Fourier algorithm with runtime/sampling bounds similar to the current best randomized Fourier method [25]. Finally, the Fourier algorithm we develop here implies a simpler optimized version of the deterministic compressed sensing method previously developed in [30].
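For contrast with the sublinear-time approach, the naive route to the best k-term representation reads the entire signal and takes a full FFT. This baseline sketch (not the paper's algorithm, and superlinear in N) shows what is being accelerated:

```python
import numpy as np

def best_k_term(a, k):
    """Best k-term Fourier representation of signal a, found the slow way:
    full FFT, then keep the k largest-magnitude frequencies.
    O(N log N) time -- the baseline a sublinear-time sparse Fourier
    algorithm improves upon when k << N."""
    A = np.fft.fft(a)
    idx = np.argsort(np.abs(A))[-k:]      # indices of k largest |A[f]|
    out = np.zeros_like(A)
    out[idx] = A[idx]                     # zero out everything else
    return idx, out

# Example: a 3-sparse signal whose support is recovered exactly
N = 256
n = np.arange(N)
a = (2 * np.exp(2j * np.pi * 5 * n / N)
     + np.exp(2j * np.pi * 50 * n / N)
     - 3 * np.exp(2j * np.pi * 120 * n / N))
idx, A_k = best_k_term(a, 3)
```

For an exactly k-sparse signal like this one, the kept coefficients reproduce the signal under the inverse FFT.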
A deterministic sublinear-time sparse Fourier algorithm via non-adaptive compressed sensing methods
in Proceedings of the 19th Symposium on Discrete Algorithms (SODA)
, 2008
Abstract

Cited by 15 (5 self)
We study the problem of estimating the best B-term Fourier representation for a given frequency-sparse signal (i.e., vector) A of length N ≫ B. More explicitly, we investigate how to deterministically identify B of the largest-magnitude frequencies of Â, and estimate their coefficients, in polynomial(B, log N) time. Randomized sublinear-time algorithms which have a small (controllable) probability of failure for each processed signal exist for solving this problem. However, for failure-intolerant applications such as those involving mission-critical hardware designed to process many signals over a long lifetime, deterministic algorithms with no probability of failure are highly desirable. In this paper we build on the deterministic Compressed Sensing results of Cormode and Muthukrishnan (CM) [26, 6, 7] in order to develop the first known deterministic sublinear-time sparse Fourier Transform algorithm suitable for failure-intolerant applications. Furthermore, in the process of developing our new Fourier algorithm, we present a simplified deterministic Compressed Sensing algorithm which improves on CM’s algebraic compressibility results while simultaneously maintaining their results concerning exponential decay.
FAST COMPUTATION OF FOURIER INTEGRAL OPERATORS
, 2007
Abstract

Cited by 14 (6 self)
We introduce a general-purpose algorithm for rapidly computing certain types of oscillatory integrals which frequently arise in problems connected to wave propagation, general hyperbolic equations, and curvilinear tomography. The problem is to numerically evaluate a so-called Fourier integral operator (FIO) of the form ∫ e^{2πiΦ(x,ξ)} a(x, ξ) f̂(ξ) dξ at points given on a Cartesian grid. Here, ξ is a frequency variable, f̂(ξ) is the Fourier transform of the input f, a(x, ξ) is an amplitude, and Φ(x, ξ) is a phase function, which is typically as large as |ξ|; hence the integral is highly oscillatory. Because an FIO is a dense matrix, a naive matrix-vector product with an input given on a Cartesian grid of size N by N would require O(N^4) operations. This paper develops a new numerical algorithm which requires O(N^2.5 log N) operations and as little as O(√N) storage space (the constants in front of these estimates are small). It operates by localizing the integral over polar wedges with small angular aperture in the frequency plane. On each wedge, the algorithm factorizes the kernel e^{2πiΦ(x,ξ)} a(x, ξ) into two components: (1) a diffeomorphism, which is handled by means of a nonuniform FFT, and (2) a residual factor, which is handled by numerical separation of the spatial and frequency variables. The key to the complexity and accuracy estimates is the fact that the separation rank of the residual kernel is provably independent of the problem size. Several numerical examples demonstrate the numerical accuracy and low computational complexity of the proposed methodology. We also discuss the potential of our ideas for various applications such as reflection seismology.
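The O(N^4) cost quoted above comes from evaluating the discrete FIO sum naively at every grid point. This direct-evaluation sketch (with illustrative callables for Φ and a; it is the baseline, not the paper's O(N^2.5 log N) algorithm) makes the cost explicit:

```python
import numpy as np

def apply_fio_direct(f, phase, amp):
    """Naive evaluation of (Lf)(x) = sum_xi e^{2*pi*i*phase(x,xi)} * amp(x,xi) * fhat(xi)
    on an N x N grid: four nested loops, hence O(N^4) work.
    phase and amp are callables taking (x1, x2, xi1, xi2)."""
    N = f.shape[0]
    fhat = np.fft.fft2(f) / N**2            # normalized Fourier coefficients
    xs = np.arange(N) / N                   # spatial grid points in [0,1)^2
    xis = np.fft.fftfreq(N) * N             # integer frequencies, FFT ordering
    out = np.zeros((N, N), dtype=complex)
    for i, x1 in enumerate(xs):
        for j, x2 in enumerate(xs):
            for p, k1 in enumerate(xis):
                for q, k2 in enumerate(xis):
                    out[i, j] += (np.exp(2j * np.pi * phase(x1, x2, k1, k2))
                                  * amp(x1, x2, k1, k2) * fhat[p, q])
    return out
```

With the trivial choices Φ(x, ξ) = x · ξ and a ≡ 1, the operator reduces to Fourier inversion, so the output must reproduce the input, which is a convenient correctness check.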
Fast error-bounded surfaces and derivatives computation for volumetric particle data
, 2005
Abstract

Cited by 12 (5 self)
Volumetric smooth particle data arise as atomic coordinates with electron density kernels for molecular structures, as well as fluid particle coordinates with a smoothing kernel in hydrodynamic flow simulations. In each case there is the need for efficiently computing approximations of relevant surfaces (molecular surfaces, material interfaces, shock waves, etc.), along with surface and volume derivatives (normals, curvatures, etc.), from the irregularly spaced smooth particles. Additionally, molecular properties (charge density, polar potentials), as well as field variables from numerical simulations, are often evaluated on these computed surfaces. In this paper we show how all the above problems can be reduced to a fast summation of irregularly spaced smooth kernel functions. For a scattered smooth particle system of M smooth kernels in R^3, where the Fourier coefficients have a decay of the type 1/ω^3, we present an O(M + n^3 log n + N)-time, Fourier-based algorithm to compute N approximate, irregular samples of a level set surface and its derivatives within a relative L2 error norm ε, where n is O(M^{1/3} ε^{1/3}). Specifically, a truncated Gaussian of the form e^{−bx^2} has the above decay, and n grows as √b. In the case when the N output points are samples on a uniform grid, the back transform can be done exactly using a fast Fourier transform algorithm, giving an algorithm with O(M + n^3 log n + N log N) time complexity, where n is now approximately half its previously estimated value.