Results 1–10 of 35
Reproducing Kernels of Generalized Sobolev Spaces via a Green Function Approach with Distributional Operators
Abstract

Cited by 15 (9 self)
In this paper we extend the definition of the generalized Sobolev space and the subsequent theoretical results established recently for positive definite kernels and differential operators in the article [21]. In the present paper the semi-inner product of the generalized Sobolev space is set up by a vector distributional operator P consisting of finitely or countably many distributional operators P_n, which are defined on the dual space of the Schwartz space. The types of operators we now consider include not only differential operators, but also more general distributional operators such as pseudo-differential operators. We deduce that a certain appropriate full-space Green function G with respect to L := P*^T P now becomes a conditionally positive function. In order to support this claim we ensure that the distributional adjoint operator P* of P is well-defined in the distributional sense. Under sufficient conditions, the native space (reproducing-kernel Hilbert space) associated with the Green function G can be embedded into, or even be equivalent to, a generalized Sobolev space. As an application, we take linear combinations of translates of the Green function with possibly added polynomial terms and construct a multivariate minimum-norm interpolant s_{f,X} to data values sampled from an unknown generalized Sobolev function f at data sites located in some set X ⊂ R^d. We provide several examples, such as Matérn kernels or Gaussian kernels, that illustrate how many reproducing-kernel Hilbert spaces of well-known reproducing kernels are equivalent to a generalized Sobolev space.
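The final step of this abstract, a minimum-norm interpolant built from translates of a kernel, can be sketched in a few lines of Python. This is only an illustration under assumed settings: the Matérn (ν = 3/2) kernel, the data sites, the length scale, and the function names are all illustrative and not taken from the paper.

```python
import numpy as np

def matern32(r, ell=1.0):
    # Matern kernel with smoothness nu = 3/2 (a positive definite kernel); ell is illustrative
    s = np.sqrt(3.0) * r / ell
    return (1.0 + s) * np.exp(-s)

# data sites X in R^1 and sampled values (toy data)
X = np.linspace(0.0, 1.0, 7)
f = np.sin(2 * np.pi * X)

# kernel (Gram) matrix K_ij = k(|x_i - x_j|); solve K c = f for the coefficients
K = matern32(np.abs(X[:, None] - X[None, :]))
c = np.linalg.solve(K, f)

def s_fX(x):
    # minimum-norm interpolant: a linear combination of kernel translates
    return matern32(np.abs(np.atleast_1d(x)[:, None] - X[None, :])) @ c

# interpolation property: s_fX reproduces the data values at the data sites
assert np.allclose(s_fX(X), f)
```

Because the Matérn kernel is positive definite, the Gram matrix is invertible and no added polynomial terms are needed in this simple case.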
Unsymmetric Meshless Methods for Operator Equations
 NUMERISCHE MATHEMATIK
Abstract

Cited by 12 (9 self)
A general framework for proving error bounds and convergence of a large class of unsymmetric meshless numerical methods for solving well-posed linear operator equations is presented. The results provide optimal convergence rates if the test and trial spaces satisfy a stability condition. Operators need not be elliptic, and the problems can be posed in weak or strong form without changing the theory. Non-stationary kernel-based trial and test spaces are shown to fit into the framework, regardless of the operator equation. As a special case, unsymmetric meshless kernel-based methods solving weakly posed problems with distributional data are treated in some detail. This provides a foundation for certain variations of the “Meshless Local Petrov-Galerkin” (MLPG) technique of S. N. Atluri and collaborators.
Reconstructing signals with finite rate of innovation from noisy samples
 Acta Appl. Math
Abstract

Cited by 9 (8 self)
Abstract. A signal is said to have finite rate of innovation if it has a finite number of degrees of freedom per unit of time. Reconstructing signals with finite rate of innovation from their exact average samples has been studied in SIAM J. Math. Anal., 38 (2006), 1389–1422. In this paper, we consider the problem of reconstructing signals with finite rate of innovation from their average samples in the presence of deterministic and random noise. We develop an adaptive Tikhonov regularization approach to this reconstruction problem. Our simulation results demonstrate that our adaptive approach is robust against noise, is almost consistent in various sampling processes, and is also locally implementable.
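The paper's adaptive scheme is not specified in this abstract, but the classical (non-adaptive) Tikhonov-regularized least-squares step it builds on can be sketched as follows. The sampling matrix, noise level, regularization parameter, and all names are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy sampling model: noisy samples b = A x_true + noise (all quantities illustrative)
n, m = 40, 15
A = rng.standard_normal((n, m))
x_true = rng.standard_normal(m)
b = A @ x_true + 0.01 * rng.standard_normal(n)

def tikhonov(A, b, lam):
    # classical Tikhonov-regularized least squares: solve (A^T A + lam I) x = A^T b
    k = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ b)

x_hat = tikhonov(A, b, lam=1e-3)
# at this small noise level the regularized estimate stays close to x_true
assert np.linalg.norm(x_hat - x_true) < 0.5
```

An adaptive variant would choose `lam` from the data (e.g., from an estimate of the noise level), which is the direction the paper pursues.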
Sampling inequalities for infinitely smooth functions, with applications to interpolation and machine learning
, 2006
Abstract

Cited by 9 (2 self)
Sampling inequalities give a precise formulation of the fact that a differentiable function cannot attain large values if its derivatives are bounded and if it is small on a sufficiently dense discrete set. Sampling inequalities can be applied to the difference of a function and its reconstruction in order to obtain (sometimes optimal) convergence orders for very general, possibly regularized, recovery processes. So far, there are only sampling inequalities for finitely smooth functions, which lead to algebraic convergence orders. In this paper the case of infinitely smooth functions is investigated in order to derive error estimates that lead to exponential convergence orders.
Recovery of functions from weak data using unsymmetric meshless kernel-based methods
, 2006
Abstract

Cited by 8 (7 self)
Recent engineering applications successfully introduced unsymmetric meshless local Petrov-Galerkin (MLPG) schemes. As a step towards their mathematical analysis, this paper investigates non-stationary unsymmetric Petrov-Galerkin-type meshless kernel-based methods for the recovery of L2 functions from finitely many weak data. The results cover solvability conditions and error bounds in negative Sobolev norms with optimal rates. These rates are mainly determined by the approximation properties of the trial space, while choosing sufficiently many test functions ensures stability. Numerical examples are provided, supporting the theoretical results and leading to new questions for future research.
The Missing Wendland Functions
Abstract

Cited by 7 (2 self)
The Wendland radial basis functions [14,15] are piecewise polynomial, compactly supported reproducing kernels in Hilbert spaces which are norm-equivalent to Sobolev spaces. But they only cover the Sobolev spaces H^{d/2+k+1/2}(R^d), k ∈ N, and leave out the integer-order spaces in even dimensions. We derive the missing Wendland functions working for half-integer k and even dimensions, reproducing integer-order Sobolev spaces in even dimensions, but they turn out to have two additional non-polynomial terms: a logarithm and a square root. To give these functions a solid mathematical foundation, a generalized version of the “dimension walk” is applied. While the classical dimension walk proceeds in steps of two space dimensions taking single derivatives, the new one proceeds in steps of single dimensions and uses “halved” derivatives of fractional calculus.
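For concreteness, one of the classical (non-missing) Wendland functions covered by the family above, φ_{3,1}(r) = (1 − r)^4 (4r + 1) on [0, 1], can be evaluated with a short Python sketch; the function name is illustrative. With d = 3 and k = 1, the exponent formula gives the Sobolev space H^{3}(R^3).

```python
import numpy as np

def wendland_3_1(r):
    # classical Wendland function phi_{3,1}(r) = (1-r)^4 (4r+1), supported on [0, 1];
    # for d = 3, k = 1 the formula H^{d/2+k+1/2}(R^d) gives H^3(R^3)
    r = np.asarray(r, dtype=float)
    return np.where(r < 1.0, (1.0 - np.clip(r, 0.0, 1.0))**4 * (4.0 * r + 1.0), 0.0)

assert wendland_3_1(0.0) == 1.0                       # normalized at the origin
assert wendland_3_1(1.0) == 0.0                       # compact support: vanishes at r = 1
assert np.all(wendland_3_1(np.array([1.5, 2.0])) == 0.0)
```

The missing functions derived in the paper add logarithmic and square-root terms, so they are no longer piecewise polynomial like this one.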
Sampling and Stability
Abstract

Cited by 6 (3 self)
Abstract. In Numerical Analysis one often has to conclude that an error function is small everywhere if it is small on a large discrete point set and if there is a bound on a derivative. Sampling inequalities put this on a solid mathematical basis. A stability inequality is similar, but holds only on a finite-dimensional space of trial functions. It allows one to bound a trial function by a norm on a sufficiently fine data sample, without any bound on a high derivative. This survey first describes these two types of inequalities in general and shows how to derive a stability inequality from a sampling inequality plus an inverse inequality on a finite-dimensional trial space. Then the state of the art in sampling inequalities is reviewed, and new extensions involving functions of infinite smoothness and sampling operators using weak data are presented. Finally, typical applications of sampling and stability inequalities for ...
ON DIMENSION-INDEPENDENT RATES OF CONVERGENCE FOR FUNCTION APPROXIMATION WITH GAUSSIAN KERNELS
, 2012
Abstract

Cited by 5 (4 self)
This article studies the problem of approximating functions belonging to a Hilbert space H_d with an isotropic or anisotropic translation-invariant (or stationary) reproducing kernel, with special attention given to the Gaussian kernel K_d(x, t) = exp(−∑_{ℓ=1}^d γ_ℓ² (x_ℓ − t_ℓ)²) for all x, t ∈ R^d. The isotropic (or radial) case corresponds to using the same shape parameter for all coordinates, i.e., γ_ℓ = γ > 0 for all ℓ, whereas the anisotropic case corresponds to varying γ_ℓ. The approximation error of the optimal approximation algorithm, called a meshfree or kriging method, is known to decay faster than any polynomial in n^{−1}, for fixed d, where n is the number of data points. We are especially interested in moderate to large d, which in particular arise in the construction of surrogates for computer experiments. This article presents dimension-independent error bounds, i.e., the error is bounded by C n^{−p}, where C and p are independent of both d and n. This is equivalent to strong polynomial tractability. The pertinent error criterion is the worst case of such an algorithm over the unit ball in H_d, with the error for a single function given by the L2 norm whose weight is also a Gaussian, which is used to “localize” R^d. We consider two classes of algorithms: (i) using data generated by finitely many arbitrary linear functionals, and (ii) using only finitely many function ...
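The Gaussian kernel defined in this abstract can be written directly as a small Python function covering both the isotropic and the anisotropic case; the dimension, shape parameters, and names below are illustrative, not the paper's experimental settings.

```python
import numpy as np

def gaussian_kernel(x, t, gamma):
    # anisotropic Gaussian kernel K_d(x, t) = exp(-sum_l gamma_l^2 (x_l - t_l)^2);
    # the isotropic (radial) case takes gamma_l = gamma > 0 for every coordinate l
    x, t, gamma = map(np.asarray, (x, t, gamma))
    return np.exp(-np.sum(gamma**2 * (x - t)**2))

d = 4
gamma = np.full(d, 0.5)          # isotropic choice: one shared shape parameter
x = np.zeros(d)

assert gaussian_kernel(x, x, gamma) == 1.0            # K_d(x, x) = 1
assert 0.0 < gaussian_kernel(x, np.ones(d), gamma) < 1.0
```

Varying the entries of `gamma` per coordinate gives the anisotropic kernel studied in the article.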
Dirichlet-integral point-source harmonic interpolation over R^3 spherical interiors: DIDACKS
Abstract

Cited by 4 (4 self)
This article addresses the interpolation of harmonic functions over the interior of an R^3 unit sphere by linear combinations of fundamental-solution point-source basis functions, where all the sources are assumed to be outside the sphere. While it is natural to formulate approaches to harmonic approximation, interpolation and/or boundary value problems for spherical interiors in terms of minimizing some standard Dirichlet integral, there is no established approach along these lines that yields interpolating solutions for R^3 point-source basis functions. Here it is shown that by introducing a simple weighting function, exact closed-form inner products result for the expressions needed; hence, minimizing the appropriate weighted Dirichlet integral yields exact closed-form linear equation sets for the source strengths, and the fits resulting from these equation sets match all of the prescribed values at the interpolation points. Further, the formalism can be extended in a natural fashion to handle interpolation fits using higher-order point-multipole basis functions (such as point dipoles and point quadrupoles), so that interpolations for higher-order partials of harmonic functions can be easily implemented. Since the source and field points are in different domains, the fundamental-solution basis functions are bounded and can be regarded as defining a new type of kernel space that is related to, but distinct from, a reproducing ...
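The plain point-collocation step underlying this approach, matching prescribed harmonic values at interior field points by solving for strengths of 1/|p − s| point-source basis functions with sources outside the sphere, can be sketched in Python. The geometry, the reference field, and all names are illustrative assumptions; the paper's weighted Dirichlet-integral formulation is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(1)

# six field points inside the unit sphere and six point sources outside it (toy geometry)
P = rng.standard_normal((6, 3)); P *= 0.5 / np.linalg.norm(P, axis=1, keepdims=True)
S = rng.standard_normal((6, 3)); S *= 2.0 / np.linalg.norm(S, axis=1, keepdims=True)

def fundamental(p, s):
    # fundamental solution of Laplace's equation in R^3: 1 / |p - s|
    return 1.0 / np.linalg.norm(p[:, None, :] - s[None, :, :], axis=2)

# prescribed harmonic values at the field points (from a reference source at (0, 0, 3))
g = 1.0 / np.linalg.norm(P - np.array([0.0, 0.0, 3.0]), axis=1)

# collocation: choose source strengths so the fit matches every prescribed value
A = fundamental(P, S)
strengths = np.linalg.solve(A, g)
assert np.allclose(A @ strengths, g)   # interpolation conditions hold
```

Because source and field points lie in different domains, every entry of the collocation matrix is finite, which is the boundedness property the abstract points out.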
A Dirichlet-integral based dual-access collocation-kernel approach to point-source gravity-field modeling
, 2008
Abstract

Cited by 4 (4 self)
Problems in R^3 are addressed where the scalar potential of an associated vector field satisfies Laplace's equation in some unbounded external region and is to be approximated by unknown (point) sources contained in the complementary subregion. Two specific field geometries are considered: R^3 half-space and the exterior of an R^3 sphere, which are the two standard settings for geophysical and geo-exploration gravitational problems. For these geometries it is shown that a new type of kernel space exists, which is labeled a Dirichlet-integral dual-access collocation-kernel space (DIDACKS), and that it is well suited for many applications. The DIDACKS examples studied are related to reproducing kernel Hilbert spaces, and they have a replicating kernel (as opposed to a reproducing kernel) that has the ubiquitous form of the inverse of the distance between a field point and a corresponding source point. Underpinning this approach are three basic mathematical relationships of general interest. Two of these relationships, corresponding to the two geometries, yield exact closed-form inner products and thus exact linear equation sets for the corresponding point-source strengths of various types (i.e., point mass, point dipole and/or point quadrupole sets) at specified source locations. The given field is reconstructed not only in a point collocation sense, but also in a (weighted) field-energy error-minimization sense. Key words: Laplace's equation, inverse problem, Dirichlet form, point collocation, reproducing kernels, fundamental solutions, point sources, multipole, potential theory