Results 1–10 of 109
Sampling—50 years after Shannon
 Proceedings of the IEEE
, 2000
Abstract

Cited by 207 (22 self)
This paper presents an account of the current state of sampling, 50 years after Shannon’s formulation of the sampling theorem. The emphasis is on regular sampling, where the grid is uniform. This topic has benefited from a strong research revival during the past few years, thanks in part to the mathematical connections that were made with wavelet theory. To introduce the reader to the modern, Hilbert-space formulation, we reinterpret Shannon’s sampling procedure as an orthogonal projection onto the subspace of bandlimited functions. We then extend the standard sampling paradigm for a representation of functions in the more general class of “shift-invariant” function spaces, including splines and wavelets. Practically, this allows for simpler—and possibly more realistic—interpolation models, which can be used in conjunction with a much wider class of (anti-aliasing) prefilters that are not necessarily ideal lowpass. We summarize and discuss the results available for the determination of the approximation error and of the sampling rate when the input of the system is essentially arbitrary, e.g., non-bandlimited. We also review variations of sampling that can be understood from the same unifying perspective. These include wavelets, multiwavelets, Papoulis generalized sampling, finite elements, and frames. Irregular sampling and radial basis functions are briefly mentioned. Keywords—Bandlimited functions, Hilbert spaces, interpolation, least squares approximation, projection operators, sampling,
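The classical reconstruction formula behind this abstract, samples f(nT) combined with shifted sinc kernels, can be sketched numerically. The function name and the finite truncation of the (ideally infinite) sum are our own illustrative choices, not the paper's:

```python
import numpy as np

def sinc_reconstruct(samples, T, t):
    """Shannon reconstruction: f(t) = sum_n f(nT) * sinc((t - nT) / T).

    A minimal sketch of ideal bandlimited interpolation from uniform
    samples; the infinite sum is truncated to the samples given.
    """
    n = np.arange(len(samples))
    # np.sinc(x) = sin(pi x) / (pi x), the ideal low-pass kernel
    return np.array([np.sum(samples * np.sinc((ti - n * T) / T))
                     for ti in np.atleast_1d(t)])
```

At a grid point t = nT the kernel collapses to a delta, so the sample is returned exactly; between grid points the truncated sum is accurate far from the edges of the sample window.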
Framelets: MRA-Based Constructions of Wavelet Frames
, 2001
Abstract

Cited by 129 (50 self)
We discuss wavelet frames constructed via multiresolution analysis (MRA), with emphasis on tight wavelet frames. In particular, we establish general principles and specific algorithms for constructing framelets and tight framelets, and we show how they can be used for systematic constructions of spline and pseudo-spline tight frames and symmetric biframes with short supports and high approximation orders. Several explicit examples are discussed. The connection of these frames with multiresolution analysis guarantees the existence of fast implementation algorithms, which we discuss briefly as well.
Wavelet transforms versus Fourier transforms
 Department of Mathematics, MIT, Cambridge MA
, 1993
Abstract

Cited by 71 (2 self)
Abstract. This note is a very basic introduction to wavelets. It starts with an orthogonal basis of piecewise constant functions, constructed by dilation and translation. The "wavelet transform" maps each f(x) to its coefficients with respect to this basis. The mathematics is simple and the transform is fast (faster than the Fast Fourier Transform, which we briefly explain), but approximation by piecewise constants is poor. To improve this first wavelet, we are led to dilation equations and their unusual solutions. Higher-order wavelets are constructed, and it is surprisingly quick to compute with them — always indirectly and recursively. We comment informally on the contest between these transforms in signal processing, especially for video and image compression (including high-definition television). So far the Fourier Transform — or its 8 by 8 windowed version, the Discrete Cosine Transform — is often chosen. But wavelets are already competitive, and they are ahead for fingerprints. We present a sample of this developing theory.

1. The Haar wavelet. To explain wavelets we start with an example. It has every property we hope for, except one. If that one defect is accepted, the construction is simple and the computations are fast. By trying to remove the defect, we are led to dilation equations and recursively defined functions and a small world of fascinating new problems — many still unsolved. A sensible person would stop after the first wavelet, but fortunately mathematics goes on. The basic example is easier to draw than to describe: W(x)
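In discrete form, the Haar construction described above reduces to pairwise averages (coarse part) and pairwise differences (detail part). A minimal sketch of one level of the transform and its inverse, with our own function names:

```python
import numpy as np

def haar_step(x):
    """One level of the Haar wavelet transform of an even-length signal.

    Pairwise averages give the coarse coefficients, pairwise differences
    the detail coefficients; the 1/sqrt(2) scaling makes it orthogonal.
    """
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # coarse (scaling) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail (wavelet) coefficients
    return a, d

def haar_inverse(a, d):
    """Invert one Haar step: interleave sums and differences back."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x
```

Because the transform is orthogonal, it preserves energy and is inverted exactly; applying `haar_step` recursively to the coarse part gives the full multilevel transform.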
Short Wavelets and Matrix Dilation Equations
, 1995
Abstract

Cited by 69 (10 self)
Scaling functions and orthogonal wavelets are created from the coefficients of a lowpass and highpass filter (in a two-band orthogonal filter bank). For "multifilters" those coefficients are matrices. This gives a new block structure for the filter bank, and leads to multiple scaling functions and wavelets. Geronimo, Hardin, and Massopust constructed two scaling functions that have extra properties not previously achieved. The functions $\Phi_1$ and $\Phi_2$ are symmetric (linear phase) and they have short support (two intervals or less), while their translates form an orthogonal family. For any single function $\Phi$, apart from Haar's piecewise constants, those extra properties are known to be impossible. The novelty is to introduce 2 by 2 matrix coefficients while retaining orthogonality. This note derives the properties of $\Phi_1$ and $\Phi_2$ from the matrix dilation equation that they satisfy. Then our main step is to construct associated wavelets: two wavelets for two scaling functions....
Approximation By Translates Of Refinable Functions
, 1996
Abstract

Cited by 69 (14 self)
The functions $f_1(x), \ldots, f_r(x)$ are refinable if they are combinations of the rescaled and translated functions $f_i(2x - k)$. This is very common in scientific computing on a regular mesh. The space $V_0$ of approximating functions with meshwidth $h = 1$ is a subspace of $V_1$ with meshwidth $h = 1/2$. These refinable spaces have refinable basis functions. The accuracy of the computations depends on $p$, the order of approximation, which is determined by the degree of polynomials $1, x, \ldots, x^{p-1}$ that lie in $V_0$. Most refinable functions (such as scaling functions in the theory of wavelets) have no simple formulas. The functions $f_i(x)$ are known only through the coefficients $c_k$ in the refinement equation—scalars in the traditional case, $r \times r$ matrices for multiwavelets. The scalar "sum rules" that determine $p$ are well known. We find the conditions on the matrices $c_k$ that yield approximation of order $p$ from $V_0$. These are equivalent to the Strang–Fix condition...
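The scalar "sum rules" mentioned above state that a mask with coefficients $c_k$ yields approximation order $p$ when $\sum_k (-1)^k k^m c_k = 0$ for $m = 0, \ldots, p-1$. A small checker for the scalar case only (function name ours):

```python
def sum_rule_order(c, tol=1e-12):
    """Largest p such that sum_k (-1)^k * k^m * c_k = 0 for m = 0..p-1.

    A sketch of the scalar sum rules that determine the approximation
    order of a refinable function with mask coefficients c (sum c_k = 2).
    """
    p = 0
    while p < len(c) and abs(
        sum((-1) ** k * k ** p * ck for k, ck in enumerate(c))
    ) < tol:
        p += 1
    return p
```

For example, the hat-function mask (1/2, 1, 1/2) reproduces linear polynomials (order 2), while the cubic B-spline mask (1/8)(1, 4, 6, 4, 1) reproduces cubics (order 4).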
Refinable Function Vectors
 SIAM J. Math. Anal
Abstract

Cited by 64 (7 self)
Refinable function vectors are usually given in the form of an infinite product of their refinement (matrix) masks in the frequency domain and approximated by a cascade algorithm in both time and frequency domains. We provide necessary and sufficient conditions for the convergence of the cascade algorithm. We also give necessary and sufficient conditions for the stability and orthonormality of refinable function vectors in terms of their refinement matrix masks. Regularity of function vectors gives smoothness orders in the time domain, and decay rates at infinity in the frequency domain. Regularity criteria are established in terms of the vanishing moment order of the matrix mask.
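The cascade algorithm mentioned above can be sketched in the time domain as repeated upsampling and convolution with the mask. This toy version handles scalar masks only (the paper treats matrix masks) and assumes the normalization $\sum_k c_k = 2$:

```python
import numpy as np

def cascade(c, iters):
    """Time-domain cascade algorithm: v <- (upsample-by-2 v) conv c.

    Starting from a delta, after n iterations v approximates the refinable
    function phi at the dyadic points k / 2^n. Scalar-mask sketch only.
    """
    v = np.array([1.0])
    for _ in range(iters):
        up = np.zeros(2 * len(v) - 1)
        up[::2] = v  # insert zeros between samples (upsample by 2)
        v = np.convolve(up, c)
    return v
```

For the hat-function mask (1/2, 1, 1/2) the iteration converges to the hat function itself: the Riemann sum of the iterate tends to its integral 1, and its peak value to 1.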
The application of multiwavelet filter banks to image processing
 IEEE Trans. Image Process
, 1999
Abstract

Cited by 59 (5 self)
Multiwavelets are a new addition to the body of wavelet theory. Realizable as matrix-valued filter banks leading to wavelet bases, multiwavelets offer simultaneous orthogonality, symmetry, and short support, which is not possible with scalar two-channel wavelet systems. After reviewing this recently developed theory, we examine the use of multiwavelets in a filter bank setting for discrete-time signal and image processing. Multiwavelets differ from scalar wavelet systems in requiring two or more input streams to the multiwavelet filter bank. We describe two methods (repeated row and approximation/deapproximation) for obtaining such a vector input stream from a one-dimensional signal. Algorithms for symmetric extension of signals at boundaries are then developed, and naturally integrated with approximation-based preprocessing. We describe an additional algorithm for multiwavelet processing of two-dimensional signals, two rows at a time, and develop a new family of multiwavelets (the constrained pairs) that is well-suited to this approach. This suite of novel techniques is then applied to two basic signal processing problems: denoising via wavelet shrinkage, and data compression. After developing the approach via model problems in one dimension, we applied multiwavelet processing to images, frequently obtaining performance superior to that of the comparable scalar wavelet transform.
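The "repeated row" preprocessing described above is the simpler of the two methods: the scalar signal is duplicated into the two input streams the multiwavelet filter bank expects. A sketch; the 1/sqrt(2) normalization is one common energy-preserving choice, assumed here rather than taken from the paper:

```python
import numpy as np

def repeated_row(x):
    """Repeated-row preprocessing: duplicate a 1-D signal into the two
    input streams of a 2-channel multiwavelet filter bank.

    The 1 / sqrt(2) scaling keeps the total energy of the vector input
    equal to that of the scalar input (an assumed normalization).
    """
    x = np.asarray(x, float)
    return np.vstack([x, x]) / np.sqrt(2)
```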
Matrix Refinement Equations: Existence and Uniqueness
 J. Fourier Anal. Appl
, 1996
Abstract

Cited by 51 (3 self)
Matrix refinement equations are functional equations of the form $f(x) = \sum_{k=0}^{N} c_k f(2x - k)$, where the coefficients $c_k$ are matrices and $f$ is a vector-valued function. Refinement equations play key roles in wavelet theory and approximation theory. Existence and uniqueness properties of scalar refinement equations (where the coefficients $c_k$ are scalars) are known. This paper considers analogous questions for matrix refinement equations. Conditions for existence and uniqueness of compactly supported distributional solutions are given in terms of the convergence properties of an infinite product of the matrix $\Delta = \frac{1}{2} \sum_k c_k$ with itself. Fundamental differences between solutions of matrix equations and scalar refinement equations are examined. In particular, it is shown that "constrained" solutions of the matrix refinement equation can exist even when the infinite product diverges. The existence of constrained solutions is related to the eigenvalue structure of $\Delta$; so...
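The spectral behavior of $\Delta$ can be probed numerically. This sketch checks only the simple sufficient condition that $\Delta^n$ converges when 1 is a simple eigenvalue and every other eigenvalue lies strictly inside the unit circle; it is not the paper's full characterization, and the helper names are ours:

```python
import numpy as np

def delta_matrix(cs):
    """Delta = (1/2) * sum_k c_k for a list of matrix mask coefficients."""
    return 0.5 * sum(np.asarray(c, float) for c in cs)

def power_converges(delta, tol=1e-9):
    """Heuristic check that the powers delta^n converge: eigenvalue 1 is
    simple and all other eigenvalues have modulus strictly below 1."""
    lam = np.linalg.eigvals(np.atleast_2d(delta))
    near_one = np.isclose(lam, 1.0, atol=tol)
    return bool(near_one.sum() == 1 and np.all(np.abs(lam[~near_one]) < 1))
```

In the scalar Haar case $c_0 = c_1 = 1$ gives $\Delta = 1$ and trivial convergence; doubling the mask gives $\Delta = 2$ and divergence, illustrating the eigenvalue condition.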
Compressed Sensing of Analog Signals in Shift-Invariant Spaces
, 2009
Abstract

Cited by 50 (33 self)
A traditional assumption underlying most data converters is that the signal should be sampled at a rate exceeding twice the highest frequency. This statement is based on a worst-case scenario in which the signal occupies the entire available bandwidth. In practice, many signals are sparse, so that only part of the bandwidth is used. In this paper, we develop methods for low-rate sampling of continuous-time sparse signals in shift-invariant (SI) spaces, generated by m kernels with period T. We model sparsity by treating the case in which only k out of the m generators are active; however, we do not know which k are chosen. We show how to sample such signals at a rate much lower than m/T, which is the minimal sampling rate without exploiting sparsity. Our approach combines ideas from analog sampling in a subspace with a recently developed block diagram that converts an infinite set of sparse equations to a finite counterpart. Using these two components, we formulate our problem within the framework of finite compressed sensing (CS) and then rely on algorithms developed in that context. The distinguishing feature of our results is that, in contrast to standard CS, which treats finite-length vectors, we consider sampling of analog signals for which no underlying finite-dimensional model exists. The proposed framework allows us to extend much of the recent literature on CS to the analog domain.
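The finite CS recovery step that the abstract reduces to can be illustrated with any standard sparse solver; orthogonal matching pursuit is our choice here for the sketch, not an algorithm the paper prescribes:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A x.

    Greedily picks the column of A most correlated with the residual,
    then re-fits the coefficients on the chosen support by least squares.
    A minimal sketch of one standard finite-CS recovery algorithm.
    """
    residual, support = y.astype(float), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

With well-conditioned measurement matrices and sparsity k far below the number of measurements, the support is identified exactly and the least-squares refit makes the recovery exact.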