Results 1–10 of 73
Fast Discrete Curvelet Transforms
, 2005
"... This paper describes two digital implementations of a new mathematical transform, namely, the second generation curvelet transform [12, 10] in two and three dimensions. The first digital transformation is based on unequallyspaced fast Fourier transforms (USFFT) while the second is based on the wrap ..."
Abstract

Cited by 113 (9 self)
 Add to MetaCart
This paper describes two digital implementations of a new mathematical transform, namely, the second generation curvelet transform [12, 10] in two and three dimensions. The first digital transformation is based on unequallyspaced fast Fourier transforms (USFFT) while the second is based on the wrapping of specially selected Fourier samples. The two implementations essentially differ by the choice of spatial grid used to translate curvelets at each scale and angle. Both digital transformations return a table of digital curvelet coefficients indexed by a scale parameter, an orientation parameter, and a spatial location parameter. And both implementations are fast in the sense that they run in O(n 2 log n) flops for n by n Cartesian arrays; in addition, they are also invertible, with rapid inversion algorithms of about the same complexity. Our digital transformations improve upon earlier implementations—based upon the first generation of curvelets—in the sense that they are conceptually simpler, faster and far less redundant. The software CurveLab, which implements both transforms presented in this paper, is available at
Fast Fourier transforms for nonequispaced data: A tutorial
, 2000
"... In this section, we consider approximative methods for the fast computation of multivariate discrete Fourier transforms for nonequispaced data (NDFT) in the time domain and in the frequency domain. In particular, we are interested in the approximation error as function of the arithmetic complexity o ..."
Abstract

Cited by 110 (32 self)
 Add to MetaCart
In this section, we consider approximative methods for the fast computation of multivariate discrete Fourier transforms for nonequispaced data (NDFT) in the time domain and in the frequency domain. In particular, we are interested in the approximation error as function of the arithmetic complexity of the algorithm. We discuss the robustness of NDFTalgorithms with respect to roundoff errors and apply NDFTalgorithms for the fast computation of Bessel transforms.
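The NDFT that these approximate methods target can at least be evaluated directly, which makes the object concrete. A minimal numpy sketch of the direct evaluation (the sign and node conventions below are illustrative, not necessarily the tutorial's):

```python
import numpy as np

def ndft(x, f, N):
    """Direct nonequispaced discrete Fourier transform.

    Evaluates F(k) = sum_j f_j * exp(-2*pi*1j*k*x_j) for integer
    frequencies k = -N/2, ..., N/2 - 1 at arbitrary nodes x_j in [0, 1).
    Costs O(M*N) for M nodes; NDFT algorithms approximate this in
    roughly O(N log N) time.
    """
    k = np.arange(-N // 2, N // 2)
    return np.exp(-2j * np.pi * np.outer(k, x)) @ f

# With equispaced nodes x_j = j/N the NDFT reduces to an ordinary DFT,
# so it must agree with numpy's FFT (up to the frequency ordering).
N = 16
x = np.arange(N) / N
f = np.random.default_rng(0).standard_normal(N)
exact = np.fft.fftshift(np.fft.fft(f))
assert np.allclose(ndft(x, f, N), exact)
```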
Nonuniform Fast Fourier Transforms Using MinMax Interpolation
 IEEE Trans. Signal Process
, 2003
"... The FFT is used widely in signal processing for efficient computation of the Fourier transform (FT) of finitelength signals over a set of uniformlyspaced frequency locations. However, in many applications, one requires nonuniform sampling in the frequency domain, i.e.,a nonuniform FT . Several pap ..."
Abstract

Cited by 82 (13 self)
 Add to MetaCart
The FFT is used widely in signal processing for efficient computation of the Fourier transform (FT) of finitelength signals over a set of uniformlyspaced frequency locations. However, in many applications, one requires nonuniform sampling in the frequency domain, i.e.,a nonuniform FT . Several papers have described fast approximations for the nonuniform FT based on interpolating an oversampled FFT. This paper presents an interpolation method for the nonuniform FT that is optimal in the minmax sense of minimizing the worstcase approximation error over all signals of unit norm. The proposed method easily generalizes to multidimensional signals. Numerical results show that the minmax approach provides substantially lower approximation errors than conventional interpolation methods. The minmax criterion is also useful for optimizing the parameters of interpolation kernels such as the KaiserBessel function.
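The structure common to these interpolation-based methods, an oversampled FFT followed by interpolation to the desired nonuniform frequencies, can be sketched briefly. Plain linear interpolation stands in below for the paper's min-max optimal kernel; parameters and conventions are illustrative:

```python
import numpy as np

def nufft_interp(f, freqs, oversamp=32):
    """Approximate F(nu) = sum_j f_j exp(-2i*pi*nu*j/N) at nonuniform
    frequencies nu via an oversampled FFT plus interpolation.

    The min-max method replaces the interpolator with an optimized
    kernel; plain linear interpolation is used here only to show the
    structure of the approach.
    """
    N = len(f)
    K = oversamp * N
    # A K-point (zero-padded) FFT gives samples of F at nu = m*N/K.
    fine = np.fft.fft(f, n=K)
    grid = np.arange(K) * N / K
    nu = np.mod(freqs, N)          # F is N-periodic in nu
    re = np.interp(nu, grid, fine.real, period=N)
    im = np.interp(nu, grid, fine.imag, period=N)
    return re + 1j * im

rng = np.random.default_rng(1)
N = 32
f = rng.standard_normal(N)
nu = rng.uniform(0, N, size=50)
exact = np.array([np.sum(f * np.exp(-2j * np.pi * v * np.arange(N) / N))
                  for v in nu])
approx = nufft_interp(f, nu)
# Linear interpolation on a heavily oversampled grid is already decent;
# optimized kernels reach far smaller errors at modest oversampling.
rel_err = np.max(np.abs(approx - exact)) / np.max(np.abs(exact))
assert rel_err < 0.05
```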
One sketch for all: Fast algorithms for compressed sensing
 In Proc. 39th ACM Symp. Theory of Computing
, 2007
"... Compressed Sensing is a new paradigm for acquiring the compressible signals that arise in many applications. These signals can be approximated using an amount of information much smaller than the nominal dimension of the signal. Traditional approaches acquire the entire signal and process it to extr ..."
Abstract

Cited by 59 (11 self)
 Add to MetaCart
Compressed Sensing is a new paradigm for acquiring the compressible signals that arise in many applications. These signals can be approximated using an amount of information much smaller than the nominal dimension of the signal. Traditional approaches acquire the entire signal and process it to extract the information. The new approach acquires a small number of nonadaptive linear measurements of the signal and uses sophisticated algorithms to determine its information content. Emerging technologies can compute these general linear measurements of a signal at unit cost per measurement. This paper exhibits a randomized measurement ensemble and a signal reconstruction algorithm that satisfy four requirements: 1. The measurement ensemble succeeds for all signals, with high probability over the random choices in its construction. 2. The number of measurements of the signal is optimal, except for a factor polylogarithmic in the signal length. 3. The running time of the algorithm is polynomial in the amount of information in the signal and polylogarithmic in the signal length. 4. The recovery algorithm offers the strongest possible type of error guarantee. Moreover, it is a fully polynomial approximation scheme with respect to this type of error bound. Emerging applications demand this level of performance. Yet no other algorithm in the literature simultaneously achieves all four of these desiderata.
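As a toy illustration of the measurement-and-recovery pattern described above (not the paper's sketching ensemble or its sublinear-time algorithm), a sparse signal can be recovered from a few random Gaussian measurements with generic orthogonal matching pursuit:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from
    y = A @ x.  A generic recovery routine used here for illustration,
    not the algorithm of the paper."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
n, m, k = 256, 60, 4               # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement ensemble
y = A @ x                          # m nonadaptive linear measurements
assert np.allclose(omp(A, y, k), x, atol=1e-8)
```

With m well above the k log n threshold, exact recovery of the support succeeds with overwhelming probability.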
Fast Approximate Fourier Transforms For Irregularly Spaced Data
 SIAM Rev
, 1998
"... Several algorithms for efficiently evaluating trigonometric polynomials at irregularly spaced points are presented and analyzed. The algorithms can be viewed as approximate generalizations of the fast Fourier transform (FFT), and they are compared with regard to their accuracy and their computationa ..."
Abstract

Cited by 46 (0 self)
 Add to MetaCart
Several algorithms for efficiently evaluating trigonometric polynomials at irregularly spaced points are presented and analyzed. The algorithms can be viewed as approximate generalizations of the fast Fourier transform (FFT), and they are compared with regard to their accuracy and their computational efficiency.
Nonuniform fast Fourier transform
 Geophysics
, 1999
"... The nonuniform discrete Fourier transform (NDFT) can be computed with a fast algorithm, referred to as the nonuniform fast Fourier transform (NFFT). In L dimensions, the NFFT requires O(N(ln #) L + ( Q L #=1 M # ) P L #=1 log M # ) operations, where M # is the number of Fourier components ..."
Abstract

Cited by 43 (1 self)
 Add to MetaCart
The nonuniform discrete Fourier transform (NDFT) can be computed with a fast algorithm, referred to as the nonuniform fast Fourier transform (NFFT). In L dimensions, the NFFT requires O(N(ln #) L + ( Q L #=1 M # ) P L #=1 log M # ) operations, where M # is the number of Fourier components along dimension #, N is the number of irregularly spaced samples, and # is the required accuracy. This is a dramatic improvement over the O(N Q L #=1 M # ) operations required for the direct evaluation (NDFT). The performance of the NFFT depends on the lowpass filter used in the algorithm. A truncated Gauss pulse, proposed in the literature, is optimized. A newly proposed filter, a Gauss pulse tapered with a Hanning window, performs better than the truncated Gauss pulse and the Bspline, also proposed in the literature. For small filter length, a numerically optimized filter shows the best results. Numerical experiments for 1D and 2D implementations confirm the theoretically predicted ...
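The Gaussian-gridding NFFT that these filters refine can be sketched in a few lines: spread the samples onto an oversampled grid with a Gauss pulse, apply an FFT, and deconvolve. For clarity the Gaussian below is spread to every fine-grid point (no truncation); practical codes truncate the filter and tune its parameters for a target accuracy, which is exactly the trade-off the paper studies. Conventions are illustrative:

```python
import numpy as np

def nufft1(x, f, N, oversamp=2):
    """Type-1 NUFFT via Gaussian gridding (a sketch).

    Computes F(k) = sum_j f_j exp(-2i*pi*k*x_j), k = -N/2..N/2-1, for
    nodes x_j in [0, 1): spread onto a fine grid, FFT, deconvolve.
    """
    Mr = oversamp * N                 # fine (oversampled) grid size
    tau = 2.0 / (np.pi * N**2)        # Gaussian spreading parameter
    grid = np.arange(Mr) / Mr
    # Spread sources onto the fine grid with a periodized Gaussian.
    d = grid[None, :] - x[:, None]
    d -= np.round(d)                  # wrap distances into [-1/2, 1/2)
    b = (f[:, None] * np.exp(-d**2 / (4 * tau))).sum(axis=0)
    # The FFT of the gridded data gives F(k) * ghat(k); deconvolve,
    # where ghat(k) is the Fourier transform of exp(-x^2 / (4 tau)).
    k = np.arange(-N // 2, N // 2)
    B = np.fft.fftshift(np.fft.fft(b))[Mr // 2 - N // 2: Mr // 2 + N // 2] / Mr
    ghat = np.sqrt(4 * np.pi * tau) * np.exp(-4 * np.pi**2 * k**2 * tau)
    return B / ghat

rng = np.random.default_rng(3)
N = 32
x = rng.uniform(0, 1, size=100)
f = rng.standard_normal(100)
exact = np.array([np.sum(f * np.exp(-2j * np.pi * k * x))
                  for k in range(-N // 2, N // 2)])
assert np.allclose(nufft1(x, f, N), exact, atol=1e-9)
```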
Accelerating the nonuniform Fast Fourier Transform
 SIAM REVIEW
, 2004
"... The nonequispaced Fourier transform arises in a variety of application areas, from medical imaging to radio astronomy to the numerical solution of partial differential equations. In a typical problem, one is given an irregular sampling of N data in the frequency domain and one is interested in recon ..."
Abstract

Cited by 35 (2 self)
 Add to MetaCart
The nonequispaced Fourier transform arises in a variety of application areas, from medical imaging to radio astronomy to the numerical solution of partial differential equations. In a typical problem, one is given an irregular sampling of N data in the frequency domain and one is interested in reconstructing the corresponding function in the physical domain. When the sampling is uniform, the fast Fourier transform (FFT) allows this calculation to be computed in O(N log N) operations rather than O(N 2) operations. Unfortunately, when the sampling is nonuniform, the FFT does not apply. Over the last few years, a number of algorithms have been developed to overcome this limitation and are often referred to as nonuniform FFTs (NUFFTs). These rely on a mixture of interpolation and the judicious use of the FFT on an oversampled grid [A. Dutt and V. Rokhlin, SIAM J. Sci. Comput., 14 (1993), pp. 1368–1383]. In this paper, we observe that one of the standard interpolation or “gridding ” schemes, based on Gaussians, can be accelerated by a significant factor without precomputation and storage of the interpolation weights. This is of particular value in two and threedimensional settings, saving either 10dN in storage in d dimensions or a factor of about 5–10 in CPUtime (independent of dimension).
Random sampling of multivariate trigonometric polynomials
 SIAM J. Math. Anal
, 2004
"... We investigate when a trigonometric polynomial p of degree M in d variables is uniquely determined by its sampled values p(xj) on a random set of points xj in the unit cube (the “sampling problem for trigonometric polynomials”) and estimate the probability distribution of the condition number for th ..."
Abstract

Cited by 30 (3 self)
 Add to MetaCart
We investigate when a trigonometric polynomial p of degree M in d variables is uniquely determined by its sampled values p(xj) on a random set of points xj in the unit cube (the “sampling problem for trigonometric polynomials”) and estimate the probability distribution of the condition number for the associated Vandermondetype and Toeplitzlike matrices. The results provide a solid theoretical foundation for some efficient numerical algorithms that are already in use.
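The Vandermonde-type matrix studied here is easy to examine numerically. A one-dimensional illustration (the paper treats d variables and gives precise probabilistic estimates; the sample sizes below are just for demonstration):

```python
import numpy as np

# The sampling matrix in one dimension: A[j, l] = exp(2i*pi*k_l*x_j)
# for random nodes x_j and frequencies k_l = -M..M.  Recovering the
# polynomial's coefficients from samples means solving with A, so its
# condition number governs the stability of the reconstruction.
rng = np.random.default_rng(4)
M = 8                         # polynomial degree; 2M+1 = 17 coefficients
k = np.arange(-M, M + 1)

def cond_for(n_samples):
    x = rng.uniform(0, 1, size=n_samples)
    A = np.exp(2j * np.pi * np.outer(x, k))
    return np.linalg.cond(A)

# Oversampling improves conditioning: with many random nodes the matrix
# concentrates near an isometry, while a square system is typically
# much worse conditioned.
c_many, c_few = cond_for(10 * len(k)), cond_for(len(k))
assert c_many < 10 and c_many < c_few
```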
Regularization Techniques for Numerical Approximation of PDEs with Singularities
 J. of Sci. Comput
, 2002
"... The rate of convergence for numerical methods approximating dierential equations are often drastically reduced from lack of regularity in the solution. Typical examples are problems with singular source terms or discontinuous material coecients. We shall discuss the technique of local regulariza ..."
Abstract

Cited by 23 (2 self)
 Add to MetaCart
The rate of convergence for numerical methods approximating dierential equations are often drastically reduced from lack of regularity in the solution. Typical examples are problems with singular source terms or discontinuous material coecients. We shall discuss the technique of local regularization for handling these problems. New numerical methods are presented and analyzed and numerical examples are given. Some serious de ciencies in existing methods are also pointed out.
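The basic idea of local regularization, replacing a singular source by a smooth, compactly supported approximation with the correct moments, can be made concrete. The cosine mollifier below is one common illustrative choice, not necessarily the authors' construction:

```python
import numpy as np

def delta_cos(r, eps):
    """Cosine regularization of the Dirac delta with support width 2*eps:
    delta_eps(r) = (1 + cos(pi*r/eps)) / (2*eps) for |r| < eps, else 0.
    The choice of mollifier and its moment conditions determine the
    convergence rate of the regularized scheme."""
    return np.where(np.abs(r) < eps,
                    (1 + np.cos(np.pi * r / eps)) / (2 * eps), 0.0)

# A singular source delta(x - 0.3) on a uniform grid: the regularized
# version can be sampled pointwise yet retains unit mass (zeroth moment).
h = 1.0 / 200
x = np.arange(0, 1, h)
d = delta_cos(x - 0.3, eps=4 * h)
assert abs(np.sum(d) * h - 1.0) < 1e-3
```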
Efficient Algorithms for DiffusionGenerated Motion by Mean Curvature
 J. Comput. Phys
, 1996
"... We accept this thesis as conforming to the required standard ..."
Abstract

Cited by 21 (5 self)
 Add to MetaCart
We accept this thesis as conforming to the required standard