Results 1–10 of 50
Splines: A Perfect Fit for Signal/Image Processing
IEEE Signal Processing Magazine, 1999
Interpolation revisited
IEEE Transactions on Medical Imaging, 2000
Cited by 120 (23 self)
Abstract: Based on the theory of approximation, this paper presents a unified analysis of interpolation and resampling techniques. An important issue is the choice of adequate basis functions. We show that, contrary to common belief, those that perform best are not interpolating. In contrast to traditional interpolation, we call their use generalized interpolation; correctly applied, it involves a prefiltering step. We explain why the approximation order inherent in any basis function is important to limit interpolation artifacts. The decomposition theorem states that any basis function endowed with approximation order can be expressed as the convolution of a B-spline of the same order with another function that has none. This motivates the use of splines and spline-based functions as a tunable way to keep artifacts in check without any significant cost penalty. We discuss implementation and performance issues, and we provide experimental evidence to support our claims.
Index Terms: Approximation constant, approximation order, B-splines, Fourier error kernel, maximal order and minimal support (MOMS), piecewise polynomials.
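The prefiltering step this abstract describes can be illustrated with SciPy's ndimage routines (a minimal sketch, not the authors' code): the samples are first filtered into B-spline coefficients, and only then is the spline expansion evaluated. Skipping the prefilter treats samples as coefficients and merely smooths the data.

```python
import numpy as np
from scipy.ndimage import spline_filter1d, map_coordinates

# Samples of a smooth test signal on an integer grid.
n = 64
x = np.cos(2 * np.pi * np.arange(n) / n)

# Generalized interpolation: prefilter the samples into cubic B-spline
# coefficients, then evaluate the spline expansion (prefilter=False).
coeffs = spline_filter1d(x, order=3)
grid = np.arange(n, dtype=float)
y = map_coordinates(coeffs, [grid], order=3, prefilter=False, mode='mirror')
print(np.max(np.abs(y - x)))      # essentially zero: the samples are interpolated

# Without the prefilter, the same expansion does NOT interpolate:
# each sample is replaced by (x[n-1] + 4*x[n] + x[n+1]) / 6.
y_bad = map_coordinates(x, [grid], order=3, prefilter=False, mode='mirror')
print(np.max(np.abs(y_bad - x)))  # noticeably larger error
```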
Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets Strang–Fix
IEEE Transactions on Signal Processing, 2007
Cited by 92 (28 self)
Abstract: Consider the problem of sampling signals which are not bandlimited but still have a finite number of degrees of freedom per unit of time, such as nonuniform splines or piecewise polynomials, and call the number of degrees of freedom per unit of time the rate of innovation. Classical sampling theory does not enable a perfect reconstruction of such signals since they are not bandlimited. Recently, it was shown that, by using an adequate sampling kernel and a sampling rate greater than or equal to the rate of innovation, it is possible to reconstruct such signals uniquely [34]. These sampling schemes, however, use kernels with infinite support, which leads to complex and potentially unstable reconstruction algorithms. In this paper, we show that many signals with a finite rate of innovation can be sampled and perfectly reconstructed using physically realizable kernels of compact support and a local reconstruction algorithm. The class of kernels that we can use is very rich and includes functions satisfying Strang–Fix conditions, exponential splines, and functions with rational Fourier transform. This last class of kernels is quite general and includes, for instance, any linear electric circuit. We thus show with an example how to estimate a signal of finite rate of innovation at the output of such a circuit. The case of noisy measurements is also analyzed, and we present a novel algorithm that reduces the effect of noise by oversampling.
Index Terms: Analog-to-digital conversion, annihilating filter method, multiresolution approximations, sampling methods, splines, wavelets.
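The annihilating filter method named in the index terms can be sketched on a toy example (my own setup, not the paper's): two weighted Diracs are recovered from four power-sum moments by solving a small Toeplitz system whose polynomial roots give the locations.

```python
import numpy as np

# Two weighted Diracs with amplitudes a_k at locations t_k, observed
# only through their power-sum moments tau[m] = sum_k a_k * t_k**m,
# m = 0..3 (2K moments suffice for K = 2 Diracs).
a_true = np.array([1.0, 2.0])
t_true = np.array([0.2, 0.5])
tau = np.array([np.sum(a_true * t_true**m) for m in range(4)])

# Annihilating filter h = [1, h1, h2] such that
# tau[m] + h1*tau[m-1] + h2*tau[m-2] = 0 for m = 2, 3.
A = np.array([[tau[1], tau[0]],
              [tau[2], tau[1]]])
h = np.linalg.solve(A, -tau[2:4])

# The roots of the filter polynomial are the Dirac locations.
t_est = np.sort(np.roots(np.concatenate(([1.0], h))))

# The amplitudes then follow from a linear (Vandermonde) system.
V = np.vander(t_est, N=2, increasing=True).T   # rows: t^0, t^1
a_est = np.linalg.solve(V, tau[:2])

print(t_est)  # → approximately [0.2, 0.5]
print(a_est)  # → approximately [1.0, 2.0]
```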
Quantitative Fourier Analysis of Approximation Techniques: Part II - Wavelets
IEEE Transactions on Signal Processing, 1999
Cited by 67 (28 self)
Abstract: In a previous paper, we proposed a general Fourier method that provides an accurate prediction of the approximation error, irrespective of the scaling properties of the approximating functions. Here, we apply our results when these functions satisfy the usual two-scale relation encountered in dyadic multiresolution analysis. As a consequence of this additional constraint, the quantities introduced in our previous paper can be computed explicitly as a function of the refinement filter. This is, in particular, true for the asymptotic expansion of the approximation error for biorthonormal wavelets as the scale tends to zero. One contribution of this paper is the computation of sharp, asymptotically optimal upper bounds for the least-squares approximation error. Another is the application of these results to B-splines and Daubechies scaling functions, which yields explicit asymptotic developments and upper bounds. Thanks to these explicit expressions, we can quantify ...
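For context, the asymptotic expansion of the approximation error referred to here has the standard form below (a sketch in the usual notation of this line of work, not quoted from the paper):

```latex
\| f - P_T f \|_{L_2} \;=\; C_\varphi \, T^{L} \, \big\| f^{(L)} \big\|_{L_2} \;+\; o(T^{L}),
\qquad T \to 0,
```

where $P_T$ denotes the least-squares projection onto the space spanned by the scaled shifts $\varphi(x/T - k)$, $L$ is the approximation order, and $C_\varphi$ is the approximation constant that, in the dyadic wavelet setting, can be computed explicitly from the refinement filter.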
A chronology of interpolation: From ancient astronomy to modern signal and image processing
Proceedings of the IEEE, 2002
Cited by 62 (0 self)
Abstract: This paper presents a chronological overview of the developments in interpolation theory, from the earliest times to the present date. It brings out the connections between the results obtained in different ages, thereby putting the techniques currently used in signal and image processing into historical perspective. A summary of the insights and recommendations that follow from relatively recent theoretical as well as experimental studies concludes the presentation.
Keywords: Approximation, convolution-based interpolation, history, image processing, polynomial interpolation, signal processing, splines.
"It is an extremely useful thing to have knowledge of the true origins of memorable discoveries, especially those that have been found not by accident but by dint of meditation. It is not so much that thereby history may attribute to each man his own discoveries and others should be encouraged to earn like commendation, as that the art of making discoveries should be extended by considering noteworthy examples of it."
Image Interpolation and Resampling
Handbook of Medical Imaging, Processing and Analysis, 2000
Cited by 53 (6 self)
Abstract: This chapter presents a survey of interpolation and resampling techniques in the context of exact, separable interpolation of regularly sampled data. In this context, the traditional view of interpolation is to represent an arbitrary continuous function as a discrete sum of weighted and shifted synthesis functions, in other words, a mixed convolution equation. An important issue is the choice of adequate synthesis functions that satisfy interpolation properties. Examples of finite-support ones are the square pulse (nearest-neighbor interpolation), the hat function (linear interpolation), the cubic Keys function, and various truncated or windowed versions of the sinc function. On the other hand, splines provide examples of infinite-support interpolation functions that can be realized exactly at a finite, surprisingly small computational cost. We discuss implementation issues and illustrate the performance of each synthesis function. We also highlight several artifacts that may arise when performing interpolation, such as ringing, aliasing, blocking, and blurring. We explain why the approximation order inherent in the synthesis function is important to limit these interpolation artifacts, which motivates the use of splines as a tunable way to keep them in check without any significant cost penalty.
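The cubic Keys kernel mentioned above is easy to write down and test (a sketch of the standard kernel, with the usual parameter a = -1/2, which gives approximation order 3 and hence exact reproduction of quadratics):

```python
import numpy as np

def keys_kernel(x, a=-0.5):
    """Cubic convolution kernel of Keys (a = -1/2 gives order 3)."""
    x = np.abs(x)
    out = np.zeros_like(x)
    near = x <= 1
    far = (x > 1) & (x < 2)
    out[near] = (a + 2) * x[near]**3 - (a + 3) * x[near]**2 + 1
    out[far] = a * (x[far]**3 - 5 * x[far]**2 + 8 * x[far] - 4)
    return out

def interp_keys(samples, t):
    """Interpolate integer-grid samples at positions t (away from edges)."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    k = np.arange(len(samples))
    # Mixed convolution: sum_k s[k] * u(t - k) with the Keys kernel u.
    return (samples[None, :] * keys_kernel(t[:, None] - k[None, :])).sum(axis=1)

# The a = -1/2 kernel reproduces quadratics exactly (approximation order 3):
s = np.arange(8, dtype=float) ** 2        # samples of f(t) = t^2
print(interp_keys(s, 2.5))                # → [6.25], i.e. 2.5**2 exactly
```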
MOMS: Maximal-Order Interpolation of Minimal Support
IEEE Transactions on Image Processing, 2001
Cited by 48 (17 self)
Abstract: We consider the problem of interpolating a signal using a linear combination of shifted versions of a compactly supported basis function. We first give the expression of the basis functions that have minimal support for a given accuracy (also known as "approximation order"). This class of functions, which we call maximal-order minimal-support functions (MOMS), is made of linear combinations of the B-spline of the same order and of its derivatives.
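As a concrete instance of the "B-spline plus derivatives" construction, the cubic member of the optimized family (o-Moms of degree 3) is usually reported as the cubic B-spline plus 1/42 of its second derivative; the sketch below takes that weight as given and checks two basic properties:

```python
import numpy as np

def bspline3(x):
    """Centered cubic B-spline."""
    x = np.abs(x)
    out = np.zeros_like(x)
    near = x < 1
    far = (x >= 1) & (x < 2)
    out[near] = 2/3 - x[near]**2 + x[near]**3 / 2
    out[far] = (2 - x[far])**3 / 6
    return out

def bspline3_dd(x):
    """Second derivative of the centered cubic B-spline."""
    x = np.abs(x)
    out = np.zeros_like(x)
    near = x < 1
    far = (x >= 1) & (x < 2)
    out[near] = 3 * x[near] - 2
    out[far] = 2 - x[far]
    return out

def omoms3(x):
    """Cubic o-Moms: B-spline plus a tuned multiple of its 2nd derivative."""
    return bspline3(x) + bspline3_dd(x) / 42

# Same compact support as the cubic B-spline, and the derivative term
# does not disturb the partition of unity: 13/21 + 2 * (4/21) = 1.
vals = omoms3(np.array([0.0, 1.0]))
print(vals)  # → [13/21, 4/21] ≈ [0.6190, 0.1905]
```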
A Generalized Sampling Theory without bandlimiting constraints
IEEE Transactions on Circuits and Systems II
Nonideal sampling and interpolation from noisy observations in shift-invariant spaces
IEEE Transactions on Signal Processing, 2006
Cited by 25 (15 self)
Abstract: Digital analysis and processing of signals inherently relies on the existence of methods for reconstructing a continuous-time signal from a sequence of corrupted discrete-time samples. In this paper, a general formulation of this problem is developed that treats, in a unified way, the interpolation problem from ideal, noisy samples and the deconvolution problem in which the signal is filtered prior to sampling. The signal reconstruction is performed in a shift-invariant subspace spanned by the integer shifts of a generating function, where the expansion coefficients are obtained by processing the noisy samples with a digital correction filter. Several alternative approaches to designing the correction filter are suggested, which differ in their assumptions on the signal and noise. The classical deconvolution solutions (least-squares, Tikhonov, and Wiener) are adapted to our particular situation, and new methods that are optimal in a minimax sense are also proposed. The solutions often have a similar structure and can be computed simply and efficiently by digital filtering. Some concrete examples of reconstruction filters are presented, as well as simple guidelines for selecting the free parameters (e.g., regularization) of the various algorithms.
Index Terms: Deconvolution, interpolation, minimax reconstruction, sampling.
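The "digital correction filter" idea reduces, in the noise-free ideal-sampling case, to inverting the sampled generating function; a minimal sketch for a periodic cubic B-spline model (my own toy setup, not one of the paper's estimators) implements that inverse in the DFT domain:

```python
import numpy as np

# Ideal, noise-free samples s[n] of a signal modeled in the
# shift-invariant space spanned by integer shifts of the cubic B-spline.
n = 32
s = np.sin(2 * np.pi * np.arange(n) / n) + 0.3 * np.cos(6 * np.pi * np.arange(n) / n)

# Sampled cubic B-spline: b[0] = 2/3, b[±1] = 1/6 (periodic).
b = np.zeros(n)
b[0], b[1], b[-1] = 2/3, 1/6, 1/6

# Correction filter = 1 / B(e^{jw}), applied by digital filtering
# (here via the DFT; B never vanishes, so the inverse is stable).
coeffs = np.real(np.fft.ifft(np.fft.fft(s) / np.fft.fft(b)))

# Resampling the model at the integers reproduces the samples exactly:
# sum_k c[k] * beta3(n - k) = (c[n-1] + 4*c[n] + c[n+1]) / 6.
resampled = (np.roll(coeffs, 1) + 4 * coeffs + np.roll(coeffs, -1)) / 6
print(np.max(np.abs(resampled - s)))  # → essentially zero (machine precision)
```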
Mathematical properties of the JPEG2000 wavelet filters
2003
Cited by 21 (2 self)
Abstract: These filters rose to special prominence because they were selected for inclusion in the JPEG2000 standard. Here, we determine their key mathematical features: Riesz bounds, order of approximation, and regularity (Hölder and Sobolev). We give approximation-theoretic quantities such as the asymptotic constant for the L2 error and the angle between the analysis and synthesis spaces, which characterizes the loss of performance with respect to an orthogonal projection. We also derive new asymptotic error formulæ that exhibit bound constants proportional to the magnitude of the first nonvanishing moment of the wavelet. The Daubechies 9/7 stands out because it is very close to orthonormal, but this turns out to be slightly detrimental to its asymptotic performance when compared to other wavelets with four vanishing moments.