Results 11–20 of 132
On linear independence of integer translates of a finite number of functions
 Proc. Edinburgh Math. Soc.
, 1992
"... We investigate linear independence of integer translates of a finite number of compactly supported functions in two cases. In the first case there are no restrictions on the coefficients that may occur in dependence relations. In the second case the coefficient sequences are restricted to be in som ..."
Abstract

Cited by 77 (32 self)
We investigate linear independence of integer translates of a finite number of compactly supported functions in two cases. In the first case there are no restrictions on the coefficients that may occur in dependence relations. In the second case the coefficient sequences are restricted to lie in some ℓp space (1 ≤ p ≤ ∞), and we are interested in bounding their ℓp-norms in terms of the Lp-norm of the linear combination of integer translates of the basis functions that uses these coefficients. In both cases we give necessary and sufficient conditions for linear independence of integer translates of the basis functions. Our characterization is based on a study of certain systems of linear partial difference and differential equations, which are of independent interest.
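The second case above can be illustrated numerically for the simplest compactly supported example, the hat function (linear B-spline), whose integer translates are stably independent in the sup-norm. A minimal sketch, with all names and data illustrative rather than taken from the paper:

```python
# Sketch: two-sided sup-norm bound between coefficients c_k and
# f(x) = sum_k c_k * B1(x - k), where B1 is the hat function on [0, 2].

def hat(x):
    """Linear B-spline (hat function) supported on [0, 2], peak value 1 at x = 1."""
    if 0.0 <= x <= 1.0:
        return x
    if 1.0 < x <= 2.0:
        return 2.0 - x
    return 0.0

def combo(c, x):
    """Evaluate f(x) = sum_k c[k] * hat(x - k)."""
    return sum(ck * hat(x - k) for k, ck in enumerate(c))

c = [1.0, -2.0, 0.5, 3.0, -1.0]

# Sample f densely on the support of the combination.
xs = [i / 100.0 for i in range(0, 701)]
f_sup = max(abs(combo(c, x)) for x in xs)
c_sup = max(abs(ck) for ck in c)

# The hat function attains f(k + 1) = c[k] at its peaks, and between peaks
# f is a convex combination of adjacent coefficients, so the two sup-norms
# coincide: the translates are linearly independent with sharp bounds.
print(f_sup, c_sup)  # both 3.0
```

For p < ∞ the bounds are no longer equalities, but the same stability (Riesz-type inequalities) holds, which is the content the abstract refers to.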
B-spline snakes: a flexible tool for parametric contour detection
 IEEE Transactions on Image Processing
"... Abstract—We present a novel formulation for Bspline snakes that can be used as a tool for fast and intuitive contour outlining. We start with a theoretical argument in favor of splines in the traditional formulation by showing that the optimal, curvatureconstrained snake is a cubic spline, irrespe ..."
Abstract

Cited by 61 (13 self)
Abstract—We present a novel formulation for B-spline snakes that can be used as a tool for fast and intuitive contour outlining. We start with a theoretical argument in favor of splines in the traditional formulation by showing that the optimal, curvature-constrained snake is a cubic spline, irrespective of the form of the external energy field. Unfortunately, such regularized snakes suffer from slow convergence because of their large number of control points, as well as from difficulties in determining the weight factors associated with the internal energies of the curve. We therefore propose an alternative formulation in which the intrinsic scale of the spline model is adjusted a priori; this leads to a reduction of the number of parameters to be optimized and eliminates the need for internal energies (i.e., the regularization term). In other words, we now control the elasticity of the spline implicitly, and rather intuitively, by varying the spacing between the spline knots. The theory is embedded into a multiresolution formulation that demonstrates improved stability in noisy image environments. Validation results are presented comparing the traditional snake using internal energies with the proposed approach without internal energies, showing similar performance of the latter. Several biomedical examples of applications are included to illustrate the versatility of the method.
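The curve model underlying such snakes can be sketched in a few lines. The helper names below are illustrative, and the contour here is a plain closed uniform cubic B-spline with no image-energy term attached; the knot spacing (i.e., the number of control points) is what implicitly controls the curve's elasticity:

```python
# Minimal sketch of a closed uniform cubic B-spline contour, the curve model
# behind B-spline snakes. Fewer control points (larger knot spacing) give a
# stiffer curve, so no explicit internal-energy term is needed.

def bspline_point(p0, p1, p2, p3, t):
    """Point on a uniform cubic B-spline segment, t in [0, 1)."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t**3 / 6.0
    return (b0 * p0[0] + b1 * p1[0] + b2 * p2[0] + b3 * p3[0],
            b0 * p0[1] + b1 * p1[1] + b2 * p2[1] + b3 * p3[1])

def sample_closed_contour(ctrl, samples_per_span=10):
    """Sample a closed contour defined by control points `ctrl` (periodic)."""
    n = len(ctrl)
    pts = []
    for i in range(n):
        quad = [ctrl[(i + j) % n] for j in range(4)]
        for s in range(samples_per_span):
            pts.append(bspline_point(*quad, s / samples_per_span))
    return pts

# A square of control points yields a smooth, rounded closed contour.
square = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
contour = sample_closed_contour(square)
print(len(contour))  # 40 sample points
```

A snake optimizer would then move only the few control points to minimize an external image energy evaluated along `contour`.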
A chronology of interpolation: From ancient astronomy to modern signal and image processing
 Proceedings of the IEEE
, 2002
"... This paper presents a chronological overview of the developments in interpolation theory, from the earliest times to the present date. It brings out the connections between the results obtained in different ages, thereby putting the techniques currently used in signal and image processing into histo ..."
Abstract

Cited by 61 (0 self)
This paper presents a chronological overview of the developments in interpolation theory, from the earliest times to the present date. It brings out the connections between the results obtained in different ages, thereby putting the techniques currently used in signal and image processing into historical perspective. A summary of the insights and recommendations that follow from relatively recent theoretical as well as experimental studies concludes the presentation. Keywords—Approximation, convolution-based interpolation, history, image processing, polynomial interpolation, signal processing, splines. "It is an extremely useful thing to have knowledge of the true origins of memorable discoveries, especially those that have been found not by accident but by dint of meditation. It is not so much that thereby history may attribute to each man his own discoveries and others should be encouraged to earn like commendation, as that the art of making discoveries should be extended by considering noteworthy examples of it."
Approximation Order Provided by Refinable Function Vectors
 CONSTR. APPROX.
, 1995
"... In this paper, we consider Lp{approximation byinteger translates of a finite set of functions ( =0�:::�r; 1) which are not necessarily compactly supported, but have a suitable decay rate. Assuming that the function vector = ( ) r;1 =0 is refinable, necessary and sufficient conditions for the refinem ..."
Abstract

Cited by 55 (6 self)
In this paper, we consider Lp-approximation by integer translates of a finite set of functions φν (ν = 0, ..., r − 1) which are not necessarily compactly supported, but have a suitable decay rate. Assuming that the function vector Φ = (φν)ν=0,...,r−1 is refinable, necessary and sufficient conditions for the refinement mask are derived. In particular, if algebraic polynomials can be exactly reproduced by integer translates of the φν, then a factorization of the refinement mask of Φ can be given. This result is a natural generalization of the result for a single function, where the refinement mask…
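For orientation, the single-function (r = 1) case can be checked numerically: the hat function satisfies a two-scale relation with mask (0.5, 1, 0.5), the kind of refinement mask whose conditions and factorization the paper generalizes to function vectors. A minimal sketch, with illustrative names:

```python
# Sketch: refinability in the single-function case. The hat function B1
# satisfies B1(x) = 0.5*B1(2x) + B1(2x - 1) + 0.5*B1(2x - 2); the mask
# (0.5, 1, 0.5) is the object the paper's factorization results generalize.

def hat(x):
    """Linear B-spline supported on [0, 2]."""
    if 0.0 <= x <= 1.0:
        return x
    if 1.0 < x <= 2.0:
        return 2.0 - x
    return 0.0

mask = [0.5, 1.0, 0.5]

def refined(x):
    """Right-hand side of the two-scale relation."""
    return sum(h * hat(2 * x - k) for k, h in enumerate(mask))

# The relation holds pointwise on a dense dyadic grid.
max_err = max(abs(hat(x) - refined(x)) for x in [i / 64.0 for i in range(-32, 192)])
print(max_err)  # 0.0
```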
Image Interpolation and Resampling
 Handbook of Medical Imaging, Processing and Analysis
, 2000
"... Abstract—This chapter presents a survey of interpolation and resampling techniques in the context of exact, separable interpolation of regularly sampled data. In this context, the traditional view of interpolation is to represent an arbitrary continuous function as a discrete sum of weighted and shi ..."
Abstract

Cited by 53 (6 self)
Abstract—This chapter presents a survey of interpolation and resampling techniques in the context of exact, separable interpolation of regularly sampled data. In this context, the traditional view of interpolation is to represent an arbitrary continuous function as a discrete sum of weighted and shifted synthesis functions—in other words, a mixed convolution equation. An important issue is the choice of adequate synthesis functions that satisfy interpolation properties. Examples of finite-support ones are the square pulse (nearest-neighbor interpolation), the hat function (linear interpolation), the cubic Keys' function, and various truncated or windowed versions of the sinc function. On the other hand, splines provide examples of infinite-support interpolation functions that can be realized exactly at a finite, surprisingly small computational cost. We discuss implementation issues and illustrate the performance of each synthesis function. We also highlight several artifacts that may arise when performing interpolation, such as ringing, aliasing, blocking and blurring. We explain why the approximation order inherent in the synthesis function is important to limit these interpolation artifacts, which motivates the use of splines as a tunable way to keep them in check without any significant cost penalty.
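A minimal sketch of this convolution-based view, using two of the finite-support synthesis functions named above; the helper names and sample data are illustrative, not from the chapter:

```python
# Sketch of convolution-based 1-D interpolation: f(x) = sum_k s_k * phi(x - k),
# with the square pulse (nearest neighbor) and the hat function (linear).

def nearest_kernel(x):
    """Square pulse: nearest-neighbor interpolation."""
    return 1.0 if -0.5 <= x < 0.5 else 0.0

def linear_kernel(x):
    """Hat function: linear interpolation."""
    return max(0.0, 1.0 - abs(x))

def interpolate(samples, x, kernel):
    """Evaluate the mixed convolution equation at x."""
    return sum(s * kernel(x - k) for k, s in enumerate(samples))

samples = [0.0, 1.0, 4.0, 9.0, 16.0]  # f(k) = k^2 at the integers

# Both kernels satisfy the interpolation property phi(0) = 1, phi(k) = 0,
# so the samples are reproduced exactly at the integers:
print(interpolate(samples, 3.0, linear_kernel))   # 9.0
print(interpolate(samples, 2.4, nearest_kernel))  # 4.0 (piecewise constant)
print(interpolate(samples, 2.5, linear_kernel))   # 6.5 (average of 4 and 9)
```

Higher-order kernels such as the cubic Keys' function or B-spline interpolants follow the same equation with a different (wider) `kernel`.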
Curves and Surfaces for CAGD
, 1993
"... This article provides a historical account of the major developments in the area of curves and surfaces as they entered the area of CAGD – Computer Aided Geometric Design – until the middle 1980s. We adopt the definition that CAGD deals with the construction and representation of freeform curves, s ..."
Abstract

Cited by 51 (0 self)
This article provides a historical account of the major developments in the area of curves and surfaces as they entered the area of CAGD – Computer Aided Geometric Design – until the mid-1980s. We adopt the definition that CAGD deals with the construction and representation of free-form curves, surfaces, or volumes.
A unified framework for Regularization Networks and Support Vector Machines
, 1999
"... This report describers research done at the Center for Biological & Computational Learning and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. This research was sponsored by theN ational Science Foundation under contractN o. IIS9800032, the O#ce ofN aval Researc ..."
Abstract

Cited by 50 (13 self)
This report describes research done at the Center for Biological & Computational Learning and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. This research was sponsored by the National Science Foundation under contract No. IIS-9800032, the Office of Naval Research under contract No. N00014-93-1-0385 and contract No. N00014-95-1-0600. Partial support was also provided by Daimler-Benz AG, Eastman Kodak, Siemens Corporate Research, Inc., ATR and AT&T. Contents: 1. Introduction; 2. Overview of statistical learning theory (uniform convergence and the Vapnik–Chervonenkis bound; the method of Structural Risk Minimization); 3. Reproducing Kernel Hilbert Spaces: a brief overview; 4. Regularization Networks (radial basis functions; regularization, generalized splines and kernel smoothers; dual representation of Regularization Networks; from regression to classification); 5. Support Vector Machines (SVM in RKHS; from regression to classification); 6. SRM for RNs and SVMs (SRM for SVM classification; distribution-dependent bounds); 7. A Bayesian interpretation of regularization and SRM? (maximum a posteriori interpretation; Bayesian interpretation of the stabilizer and of the data term in the Regularization and SVM functionals; why a MAP interpretation may be misleading); connections between SVMs and sparse ap…
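One of the report's central objects, a regularization network f(x) = Σᵢ cᵢ K(x, xᵢ) with coefficients obtained from the regularized linear system (K + λI)c = y, can be sketched as follows; the kernel choice, data, and helper names are illustrative, not taken from the report:

```python
# Minimal regularization-network sketch: a Gaussian-kernel expansion
# f(x) = sum_i c_i K(x, x_i), with c solving (K + lam*I) c = y.
import math

def K(a, b, sigma=1.0):
    """Gaussian (RBF) kernel."""
    return math.exp(-(a - b) ** 2 / (2 * sigma ** 2))

def solve(A, y):
    """Tiny Gauss-Jordan elimination with partial pivoting for a dense system."""
    n = len(y)
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0, 0.0]
lam = 1e-6  # small regularization: near-interpolation

A = [[K(xi, xj) + (lam if i == j else 0.0) for j, xj in enumerate(xs)]
     for i, xi in enumerate(xs)]
c = solve(A, ys)

def f(x):
    return sum(ci * K(x, xi) for ci, xi in zip(c, xs))

print(round(f(1.0), 3))  # close to 1.0: the network nearly interpolates the data
```

An SVM replaces the squared-error data term behind this system with a different loss, which is the unification the report develops.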
Approximation error for quasi-interpolators and (multi)wavelet expansions
 APPL. COMPUT. HARMON. ANAL
, 1999
"... We investigate the approximation properties of general polynomial preserving operators that approximate a function into some scaled subspace of L² via an appropriate sequence of inner products. In particular, we consider integer shiftinvariant approximations such as those provided by splines and wa ..."
Abstract

Cited by 48 (19 self)
We investigate the approximation properties of general polynomial-preserving operators that approximate a function into some scaled subspace of L² via an appropriate sequence of inner products. In particular, we consider integer shift-invariant approximations such as those provided by splines and wavelets, as well as finite elements and multiwavelets which use multiple generators. We estimate the approximation error as a function of the scale parameter T when the function to approximate is sufficiently regular. We then present a generalized sampling theorem, a result that is rich enough to provide tight bounds as well as asymptotic expansions of the approximation error as a function of the sampling step T. Another more theoretical consequence is the proof of a conjecture by Strang and Fix, which states the equivalence between the order of a multiwavelet space and the order of a particular subspace generated by a single function. Finally, we consider refinable generating functions and use the two-scale relation to obtain explicit formulae for the coefficients of the asymptotic development of the error. The leading constants are easily computable and can be the basis for the comparison of the approximation power of wavelet and multiwavelet expansions of a given order L.
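The scaling behaviour quantified here can be observed numerically in the simplest case: piecewise-linear interpolation has approximation order 2, so halving the sampling step T should divide the error by roughly four. A rough sketch under these assumptions (helper names illustrative):

```python
# Numerical sketch of error-vs-scale behaviour: for piecewise-linear
# interpolation (approximation order 2), the error decays like O(T^2).
import math

def linear_interp_error(f, T, a=0.0, b=math.pi):
    """Max error of piecewise-linear interpolation of f at step T on [a, b]."""
    err = 0.0
    x = a
    while x + T <= b + 1e-12:
        for j in range(51):           # dense probe inside each interval
            s = j / 50.0
            xs = x + s * T
            approx = (1 - s) * f(x) + s * f(x + T)
            err = max(err, abs(f(xs) - approx))
        x += T
    return err

e1 = linear_interp_error(math.sin, 0.1)
e2 = linear_interp_error(math.sin, 0.05)
print(round(e1 / e2, 1))  # close to 4: halving T quarters the error
```

Higher-order synthesis functions (cubic splines, order-L multiwavelets) would show ratios near 2^L in the same experiment.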
Subdivision schemes in Lp spaces
 Adv. Comput. Math
, 1995
"... Subdivision schemes play an important role in computer graphics and wavelet analysis. In this paper we are mainly concerned with convergence of subdivision schemes in Lp spaces (1 ≤ p ≤ ∞). We characterize the Lpconvergence of a subdivision scheme in terms of the pnorm joint spectral radius of two ..."
Abstract

Cited by 47 (21 self)
Subdivision schemes play an important role in computer graphics and wavelet analysis. In this paper we are mainly concerned with the convergence of subdivision schemes in Lp spaces (1 ≤ p ≤ ∞). We characterize the Lp-convergence of a subdivision scheme in terms of the p-norm joint spectral radius of two matrices associated with the corresponding mask. We also discuss various properties of the limit function of a subdivision scheme, such as stability, linear independence, and smoothness.
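A minimal sketch of such a scheme, with illustrative names: the mask below is that of the linear B-spline, and iterating subdivision on a delta sequence produces values converging to the scheme's limit (basic) function, here the hat function:

```python
# Sketch of a convergent subdivision scheme with mask (0.5, 1.0, 0.5):
# new_data[2k + m] += mask[m] * data[k], i.e. c'_j = sum_k a_{j-2k} c_k.

def subdivide(c, mask=(0.5, 1.0, 0.5)):
    """One subdivision step applied to the control sequence c."""
    out = [0.0] * (2 * len(c) + len(mask) - 2)
    for k, ck in enumerate(c):
        for m, a in enumerate(mask):
            out[2 * k + m] += a * ck
    return out

data = [1.0]            # delta sequence
for _ in range(3):
    data = subdivide(data)

# After n steps the values sample the hat function at spacing 2^-n,
# rising linearly to 1.0 in the middle and back down.
print(data)
```

Convergence (and the smoothness of the limit) for a general mask is exactly what the joint-spectral-radius criterion in the paper decides.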
Wavelet theory demystified
 IEEE Trans. Signal Process
, 2003
"... Abstract—In this paper, we revisit wavelet theory starting from the representation of a scaling function as the convolution of a Bspline (the regular part of it) and a distribution (the irregular or residual part). This formulation leads to some new insights on wavelets and makes it possible to red ..."
Abstract

Cited by 45 (22 self)
Abstract—In this paper, we revisit wavelet theory starting from the representation of a scaling function as the convolution of a B-spline (the regular part of it) and a distribution (the irregular or residual part). This formulation leads to some new insights on wavelets and makes it possible to rederive the main results of the classical theory—including some new extensions for fractional orders—in a self-contained, accessible fashion. In particular, we prove that the B-spline component is entirely responsible for five key wavelet properties: order of approximation, reproduction of polynomials, vanishing moments, multiscale differentiation property, and smoothness (regularity) of the basis functions. We also investigate the interaction of wavelets with differential operators, giving explicit time-domain formulas for the fractional derivatives of the basis functions. This allows us to specify a corresponding dual wavelet basis and helps us understand why the wavelet transform provides a stable characterization of the derivatives of a signal. Additional results include a new peeling theory of smoothness, leading to the extended notion of wavelet differentiability in the Lp sense, and a sharper theorem stating that smoothness implies order. Index Terms—Approximation order, Besov spaces, Hölder smoothness, multiscale differentiation, splines, vanishing moments, wavelets.
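The link between the B-spline factor and vanishing moments can be illustrated directly: a filter containing the factor ((1 − z)/2)^N acts as an N-th finite difference, so it annihilates sampled polynomials of degree below N. A small sketch under this reading (names illustrative, N = 3):

```python
# Sketch: the ((1 - z))^N part of a wavelet filter is an N-th finite
# difference, so it kills sampled polynomials of degree < N (here N = 3).

def finite_difference(seq, order):
    """Apply the N-th order finite difference to a sequence."""
    for _ in range(order):
        seq = [b - a for a, b in zip(seq, seq[1:])]
    return seq

poly = [2.0 + 3 * k - 0.5 * k * k for k in range(10)]   # degree-2 polynomial
print(finite_difference(poly, 3))   # all zeros: three vanishing moments
```

This is the discrete shadow of the statement above that the B-spline component alone accounts for polynomial reproduction and vanishing moments.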