Results 1–10 of 13
Error Estimates for Multilevel Approximation Using Polyharmonic Splines
, 2001
Cited by 4 (0 self)
Abstract:
Polyharmonic splines are used to interpolate data in a stationary multilevel iterative refinement scheme. By using such functions the necessary tools are provided to obtain simple pointwise error bounds on the approximation.
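The stationary multilevel refinement idea summarized above can be sketched in a few lines. The following is a hedged one-dimensional illustration under my own assumptions, not the paper's method: it uses a Gaussian kernel in place of polyharmonic splines (which would additionally require a polynomial part), and the level sizes and test function are invented for the example. At each level the current residual is interpolated on a finer center set with a proportionally scaled ("stationary") kernel.

```python
import numpy as np

def kernel(X, Y, width):
    # Gaussian kernel, chosen here only for simplicity; the paper analyses
    # polyharmonic splines, which need an added polynomial part.
    return np.exp(-((X[:, None] - Y[None, :]) / width) ** 2)

def multilevel_interpolate(f, levels, width0=1.0):
    """Stationary multilevel refinement on [0, 1]: at each level, interpolate
    the current residual on a finer grid, scaling the kernel with the grid."""
    stages = []
    def s(x):
        return sum(kernel(x, c, w) @ a for (c, a, w) in stages)
    for n in levels:
        centers = np.linspace(0.0, 1.0, n)
        width = width0 / n                 # stationary: width ~ fill distance
        resid = f(centers) - s(centers)    # interpolate what is still missing
        alpha = np.linalg.solve(kernel(centers, centers, width), resid)
        stages.append((centers, alpha, width))
    return s

f = lambda x: np.sin(2 * np.pi * x)
s = multilevel_interpolate(f, levels=[5, 9, 17, 33])
xs = np.linspace(0.0, 1.0, 200)
err = np.max(np.abs(f(xs) - s(xs)))
```

By construction the accumulated approximant interpolates the data exactly on the finest center set, while the pointwise error between centers is what the paper's bounds control.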
Generalized regularized least-squares learning with predefined features in a Hilbert space
 In B. Schölkopf, J. Platt and T. Hoffman (Eds.), Advances in
, 2007
Cited by 3 (2 self)
Abstract:
Kernel-based regularized learning seeks a model in a hypothesis space by minimizing the empirical error together with the model's complexity. By the representer theorem, the solution consists of a linear combination of translates of a kernel. This paper investigates a generalized form of the representer theorem for kernel-based learning. After mapping predefined features and translates of a kernel simultaneously onto a hypothesis space through a specific construction of kernels, we propose a new algorithm that uses a generalized regularizer leaving part of the space unregularized. Using a squared-loss function for the empirical error, a simple convex solution is obtained which combines the predefined features with translates of the kernel. Empirical evaluations have confirmed the effectiveness of the algorithm on supervised learning tasks.
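The squared-loss case described above admits a short sketch. The following is a hedged illustration under my own assumptions, not the paper's implementation: the Gaussian kernel and the affine "predefined features" are both invented for the example. Minimizing ||Kc + Fd - y||^2 + lam * c'Kc, with the kernel part penalized and the feature coefficients d left unregularized, yields the block linear system solved below.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix; a stand-in for the paper's generic kernel."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_semiparametric(X, y, features, lam=1e-2, gamma=1.0):
    """Squared-loss regularized learning with unpenalized predefined features.
    Stationarity of ||Kc + Fd - y||^2 + lam * c'Kc gives:
        (K + lam*I) c + F d = y   and   F' c = 0."""
    K = gaussian_kernel(X, X, gamma)    # translates of the kernel at the data
    F = features(X)                     # predefined (unregularized) features
    n, m = K.shape[0], F.shape[1]
    A = np.block([[K + lam * np.eye(n), F],
                  [F.T, np.zeros((m, m))]])
    sol = np.linalg.solve(A, np.concatenate([y, np.zeros(m)]))
    return sol[:n], sol[n:]             # kernel coefficients c, feature weights d

def predict(Xnew, X, c, d, features, gamma=1.0):
    return gaussian_kernel(Xnew, X, gamma) @ c + features(Xnew) @ d

# toy data: an affine trend (captured by d) plus a smooth bump (captured by c)
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(40, 1))
y = 3.0 * X[:, 0] + 1.0 + np.exp(-X[:, 0] ** 2)
affine = lambda Z: np.hstack([np.ones((len(Z), 1)), Z])   # features [1, x]
c, d = fit_semiparametric(X, y, affine)
yhat = predict(X, X, c, d, affine)
```

The side condition F'c = 0 is what distinguishes this from ordinary kernel ridge regression: the affine part of the fit is carried entirely by the unpenalized coefficients d.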
The L2-approximation order of surface spline interpolation
 MATH. COMP
, 2000
Cited by 3 (2 self)
Abstract:
We show that if the open, bounded domain Ω ⊂ R^d has a sufficiently smooth boundary and if the data function f is sufficiently smooth, then the L_p(Ω)-norm of the error between f and its surface spline interpolant is O(δ^{γ_p + 1/2}) (1 ≤ p ≤ ∞), where γ_p := min{m, m − d/2 + d/p} and m is an integer parameter specifying the surface spline. In case p = 2, this lower bound on the approximation order agrees with a previously obtained upper bound, and so we conclude that the L_2-approximation order of surface spline interpolation is m + 1/2.
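For the case p = 2 highlighted above, the exponent can be checked in one line from the definitions given in the abstract:

```latex
\gamma_2 = \min\{m,\; m - d/2 + d/2\} = m,
\qquad\text{so}\qquad
\|f - s\|_{L_2(\Omega)} = O\!\left(\delta^{\gamma_2 + 1/2}\right)
                        = O\!\left(\delta^{m + 1/2}\right).
```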
Distance function wavelets – Part III: "Exotic" transforms and series
, 2002
Cited by 2 (2 self)
Abstract:
This paper also briefly discusses and conjectures the DFW correspondences of a variety of coordinate variable transforms and series. Of practical importance, anisotropic and inhomogeneous DFWs are developed by using the geodesic distance variable. The DFW and the related basis functions are also used to construct kernel distance sigmoidal functions, which are potentially useful in artificial neural networks and machine learning. As with, or even more than, the preceding two reports, this study sacrifices mathematical rigor and in turn unfetters imagination. Most results are obtained intuitively, without rigorous analysis. Follow-up research is still under way. The paper is intended to inspire more research into this promising area.
Direct Forms for Seminorms Arising in the Theory of Interpolation by Translates of a Basis Function
 Adv. Comput. Math
, 1999
Cited by 2 (1 self)
Abstract:
In the error analysis of the process of interpolation by translates of a single basis function, certain spaces of functions arise naturally. These spaces are defined with respect to a seminorm which is given in terms of the Fourier transform of the function. We call this an indirect seminorm. In certain well-understood cases, the seminorm can be rewritten trivially in terms of the function itself, rather than its Fourier transform. We call this a direct seminorm. The direct form allows better error estimates to be obtained. In this paper, we show how to rewrite most of the commonly arising indirect-form seminorms in direct form, and begin a little of the analysis required to obtain the improved error estimates.

1 Introduction. In this paper we want to consider various seminorms which arise naturally in some very important interpolation problems, particularly those involving interpolation by radial basis functions. The easiest example to consider is the seminorm |f| = (Σ_{|α|=k} c_α ∫_R...
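The indirect/direct distinction above can be made concrete. The display below is my own hedged illustration for the surface-spline case, not a formula recovered from the paper: the indirect seminorm weights the Fourier transform by the reciprocal of the basis function's generalized Fourier transform, and when that transform is proportional to ‖ξ‖^{−2m}, Parseval's identity and the multinomial theorem produce the direct (Beppo Levi) form, up to normalization constants depending on the Fourier convention:

```latex
|f|^2 \;=\; \int_{\mathbb{R}^d} \frac{|\hat f(\xi)|^2}{\hat\phi(\xi)}\,d\xi
\;\propto\; \int_{\mathbb{R}^d} \|\xi\|^{2m}\,|\hat f(\xi)|^2 \, d\xi
\;=\; C \sum_{|\alpha| = m} \frac{m!}{\alpha!}
      \int_{\mathbb{R}^d} |D^\alpha f(x)|^2 \, dx .
```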
The Uniform Convergence of Multivariate Natural Splines
, 1997
Cited by 1 (0 self)
Abstract:
Let f be a function from R^d to R that has square integrable q-th order partial derivatives, where q > d/2.
Generalizing the Bias Term of Support Vector Machines
 In: Proceedings of the International Conference on Artificial Intelligence. 2007
Cited by 1 (0 self)
Abstract:
Based on the study of a generalized form of the representer theorem and a specific trick for constructing kernels, a generic learning model is proposed and applied to support vector machines. An algorithm is obtained which naturally generalizes the bias term of the SVM. Unlike the solution of a standard SVM, which consists of a linear expansion of kernel functions plus a bias term, the generalized algorithm also maps predefined features onto a Hilbert space and takes them into special consideration by leaving part of the space unregularized when seeking a solution in that space. Empirical evaluations have confirmed the effectiveness of the generalization in classification tasks.
Recursive Kernels
, 2007
Cited by 1 (1 self)
Abstract:
This paper is an extension of earlier papers [8, 9] on the “native” Hilbert spaces of functions on some domain Ω ⊂ R^d in which conditionally positive definite kernels are reproducing kernels. Here, the focus is on subspaces of native spaces which are induced via subsets of Ω, and we shall derive a recursive subspace structure of these, leading to recursively defined reproducing kernels. As an application, we get a recursive Neville–Aitken-type interpolation process and a recursively defined orthogonal basis for interpolation by translates of kernels.
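A recursion of the kind described above can be sketched in a few lines. This is a hedged toy version under my own assumptions, not the paper's construction: it uses a positive definite Gaussian kernel on the line (the paper also treats conditionally positive definite kernels, which carry polynomial parts), absorbing one data site per step by subtracting that site's rank-one contribution to the kernel.

```python
import numpy as np

def recursive_kernel(K, points):
    """Peel off one site at a time: each step subtracts the rank-one part of
    the current kernel carried by the site x_j, leaving the reproducing
    kernel of the subspace of functions vanishing at x_1, ..., x_j."""
    def Kn(x, y, j=len(points)):
        if j == 0:
            return K(x, y)
        p = points[j - 1]
        return (Kn(x, y, j - 1)
                - Kn(x, p, j - 1) * Kn(p, y, j - 1) / Kn(p, p, j - 1))
    return Kn

gauss = lambda x, y: np.exp(-(x - y) ** 2)
pts = [0.0, 0.7]
K2 = recursive_kernel(gauss, pts)
# the recursed kernel vanishes at the absorbed sites ...
vals = [K2(p, 0.3) for p in pts]
# ... but stays positive on its diagonal away from them
diag = K2(0.3, 0.3)
```

The subtracted term is a Schur complement, which is what keeps each recursed kernel (conditionally) positive definite on the remaining sites.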
Convergence of Euclidean Radial Basis Approximation on Spheres
Abstract:
In this paper we investigate the convergence of radial basis interpolation when all of the data lie on a sphere. Here, we use strictly conditionally positive definite radial functions defined in the ambient Euclidean space. These are strictly conditionally positive definite of the same order when restricted to the sphere. The analysis of v. Golitschek and Light [9] can then be used to give an error estimate for radial approximation in Euclidean space when the data is restricted to the sphere.

1 Introduction. Let S^d be the unit sphere in R^{d+1}, let Π_n denote the polynomials of degree n in R^{d+1}, and let P_n denote the polynomials on S^d. We will think of P_n as the restrictions of polynomials from Π_n to the sphere. Let juxtaposition x x_i be used to denote the usual inner product of vectors in R^{d+1}. A univariate function φ : R_+ → R is said to be strictly conditionally positive definite of order m on R^{d+1} if, for all N ∈ N and points x_1, x_2, …, x_N ∈ R...
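The quoted definition breaks off mid-sentence. For orientation, the standard definition of strict conditional positive definiteness of order m (stated here from general knowledge, not recovered from the truncated text) reads:

```latex
\sum_{i=1}^{N}\sum_{j=1}^{N} c_i\, c_j\, \phi\bigl(\|x_i - x_j\|\bigr) > 0
\quad \text{for all distinct } x_1,\dots,x_N \text{ and all }
c \in \mathbb{R}^N \setminus \{0\}
\text{ with } \sum_{i=1}^{N} c_i\, p(x_i) = 0 \ \ \forall\, p \in \Pi_{m-1}.
```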
Reproducing Kernels of Generalized Sobolev Spaces via a Green Function Approach with Distributional Operators
Abstract:
In this paper we extend the definition of generalized Sobolev spaces and the subsequent theoretical results established recently for positive definite kernels and differential operators in the article [21]. In the present paper the semi-inner product of the generalized Sobolev space is set up by a vector distributional operator P consisting of finitely or countably many distributional operators P_n, which are defined on the dual space of the Schwartz space. The types of operators we now consider include not only differential operators, but also more general distributional operators such as pseudodifferential operators. We deduce that a certain appropriate full-space Green function G with respect to L := P^{*T} P becomes a conditionally positive definite function. In order to support this claim we ensure that the distributional adjoint operator P^* of P is well-defined in the distributional sense. Under sufficient conditions, the native space (reproducing-kernel Hilbert space) associated with the Green function G can be embedded into, or even be equivalent to, a generalized Sobolev space. As an application, we take linear combinations of translates of the Green function, with possibly added polynomial terms, and construct a multivariate minimum-norm interpolant s_{f,X} to data values sampled from an unknown generalized Sobolev function f at data sites located in some set X ⊂ R^d. We provide several examples, such as Matérn kernels and Gaussian kernels, illustrating how the reproducing-kernel Hilbert spaces of many well-known reproducing kernels are equivalent to generalized Sobolev spaces.
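The Green-function viewpoint can be sanity-checked numerically in one dimension. The sketch below is a hedged illustration under my own choice of operator, not taken from the paper: for P = (I, d/dx)^T, the composite operator is L = P^{*T} P = I − d²/dx², whose standard full-space Green function is G(x) = e^{−|x|}/2, a Matérn-type kernel. Since L G = δ, applying L to G away from the origin should give zero, which a finite-difference check confirms.

```python
import numpy as np

# Green function of L = I - d^2/dx^2 on the real line (Matern, smoothness 1/2)
G = lambda t: 0.5 * np.exp(-np.abs(t))

# check L G = 0 away from the kink at the origin, via central differences
h = 1e-3
x = np.arange(0.5, 2.0, h)                 # grid strictly to the right of 0
Gpp = (G(x[2:]) - 2 * G(x[1:-1]) + G(x[:-2])) / h**2   # approximate G''
LG = G(x[1:-1]) - Gpp                      # (I - d^2/dx^2) G on interior nodes
resid = np.max(np.abs(LG))                 # should be ~ finite-difference error
```

The delta at the origin shows up as the jump in G' across 0; everywhere else G solves the homogeneous equation, which is what makes translates of G natural interpolation kernels for the associated Sobolev-type space.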