Results 1–10 of 176
Regularization Theory and Neural Networks Architectures
Neural Computation, 1995
Cited by 309 (31 self)

Abstract:
We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, som...
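The regularization-network scheme described above can be illustrated with a minimal sketch (assuming NumPy; the Gaussian basis and ridge parameter are illustrative choices, not the paper's specific smoothness functionals): approximate f(x) = Σᵢ cᵢ G(x − xᵢ) with one basis function per data point, solving the regularized linear system (G + λI)c = y.

```python
import numpy as np

def fit_regularization_network(X, y, width=1.0, lam=1e-3):
    """Fit f(x) = sum_i c_i * G(x - x_i) with a Gaussian basis G by
    solving the regularized linear system (G + lam*I) c = y."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    G = np.exp(-d2 / (2.0 * width ** 2))
    c = np.linalg.solve(G + lam * np.eye(len(X)), y)
    return c

def predict(X_train, c, X_new, width=1.0):
    d2 = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2)) @ c

# One "hidden layer" of basis functions, one weight per example.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.sin(X[:, 0])
c = fit_regularization_network(X, y)
```

With a small λ the network nearly interpolates the data; increasing λ trades fidelity for smoothness, which is the regularization view of the bias imposed by the smoothness functional.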
Interpolation of Scattered Data: Distance Matrices and Conditionally Positive Definite Functions
Constructive Approximation, 1986
Cited by 278 (3 self)

Abstract:
Among other things, we prove that multiquadric surface interpolation is always solvable, thereby settling a conjecture of R. Franke.
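The solvability result can be demonstrated numerically (a NumPy sketch; the shape parameter c is an illustrative choice): for distinct data sites, the multiquadric matrix Aᵢⱼ = √(‖xᵢ − xⱼ‖² + c²) is nonsingular, so the interpolation system A w = y always has a unique solution.

```python
import numpy as np

def multiquadric_interpolate(X, y, c=1.0):
    """Solve A w = y with A_ij = sqrt(|x_i - x_j|^2 + c^2).
    The matrix is nonsingular for distinct data sites, so scattered-data
    multiquadric interpolation is always solvable."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    A = np.sqrt(d2 + c ** 2)
    return np.linalg.solve(A, y)

def mq_eval(X_train, w, X_new, c=1.0):
    d2 = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    return np.sqrt(d2 + c ** 2) @ w

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 2.0])
w = multiquadric_interpolate(X, y)
```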
A Theory of Networks for Approximation and Learning
Laboratory, Massachusetts Institute of Technology, 1989
Cited by 194 (24 self)

Abstract:
Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multidimensional function, that is, solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions, and to several neural network algorithms, such as Kanerva's associative memory, backpropagation, and Kohonen's topology-preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data.
An Image-Based Approach to Three-Dimensional Computer Graphics
1997
Cited by 167 (4 self)

Abstract:
The conventional approach to three-dimensional computer graphics produces images from geometric scene descriptions by simulating the interaction of light with matter. My research explores an alternative approach that replaces the geometric scene description with perspective images and replaces the simulation process with data interpolation. I derive an image-warping equation that maps the visible points in a reference image to their correct positions in any desired view. This mapping from reference image to desired image is determined by the center-of-projection and pinhole-camera model of the two images and by a generalized disparity value associated with each point in the reference image. This generalized disparity value, which represents the structure of the scene, can be determined from point correspondences between multiple reference images. The image-warping equation alone is insufficient to synthesize desired images because multiple reference-image points may map to a single point. I derive a new visibility algorithm that determines a drawing order for the image warp. This algorithm results in correct visibility for the desired image independent of the reference image's contents. The utility of the image-based approach can be enhanced with a more general pinhole-camera ...
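The drawing-order idea can be sketched in a drastically simplified 1-D, rectified setting (my own illustration, not the dissertation's algorithm; the sentinel value and `direction` parameter are assumptions): forward-warp each reference pixel by its disparity, traversing pixels in an order chosen so that when several sources land on the same target, the nearer one (larger disparity) is drawn last and wins, with no per-pixel depth comparison.

```python
import numpy as np

def forward_warp_1d(colors, disparity, direction=1):
    """Forward-warp a 1-D scanline to a view shifted by per-pixel
    disparity. Traversal order is chosen from the camera motion so that
    nearer points (larger disparity) are written later and correctly
    overwrite farther ones: painter's order, no z-buffer."""
    n = len(colors)
    out = np.full(n, -1.0)  # -1 marks holes (no source pixel mapped here)
    order = range(n - 1, -1, -1) if direction > 0 else range(n)
    for x in order:
        xt = int(round(x + direction * disparity[x]))
        if 0 <= xt < n:
            out[xt] = colors[x]
    return out
```

For a shift to the right (`direction=1`), scanning right-to-left guarantees that of two sources colliding on one target, the left one, which must have the larger disparity, is drawn last.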
Bayesian Landmark Learning for Mobile Robot Localization
1998
Cited by 112 (16 self)

Abstract:
To operate successfully in indoor environments, mobile robots must be able to localize themselves. Most current localization algorithms lack flexibility, autonomy, and often optimality, since they rely on a human to determine what aspects of the sensor data to use in localization (e.g., what landmarks to use). This paper describes a learning algorithm, called BaLL, that enables mobile robots to learn what features/landmarks are best suited for localization, and also to train artificial neural networks for extracting them from the sensor data. A rigorous Bayesian analysis of probabilistic localization is presented, which produces a rational argument for evaluating features, for selecting them optimally, and for training the networks that approximate the optimal solution. In a systematic experimental study, BaLL outperforms two other recent approaches to mobile robot localization.
Keywords: artificial neural networks, Bayesian analysis, feature extraction, landmarks, localization, mobi...
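The Bayesian update at the core of such probabilistic localization can be sketched as a generic discrete Bayes filter (not BaLL itself; the pose grid and likelihood values below are illustrative): the posterior over robot poses is the prior reweighted by the measurement likelihood and renormalized.

```python
import numpy as np

def bayes_update(belief, likelihood):
    """One measurement update of a discrete Bayes filter over poses:
    posterior ∝ likelihood * prior, renormalized to sum to 1."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Uniform prior over 5 discrete poses; a learned landmark detector
# reports a feature far more likely to be observed from pose 2.
belief = np.ones(5) / 5.0
likelihood = np.array([0.1, 0.1, 0.9, 0.1, 0.1])
belief = bayes_update(belief, likelihood)
```

Features can then be scored by how sharply their likelihoods concentrate this posterior, which is the kind of criterion the paper's analysis makes rigorous.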
Efficient Memory-based Learning for Robot Control
1990
Cited by 108 (2 self)

Abstract:
This dissertation is about the application of machine learning to robot control. A system which has no initial model of the robot/world dynamics should be able to construct such a model using data received through its sensors, an approach which is formalized here as the SAB (State-Action-Behaviour) control cycle. A method of learning is presented in which all the experiences in the lifetime of the robot are explicitly remembered. The experiences are stored in a manner which permits fast recall of the closest previous experience to any new situation, thus permitting very quick predictions of the effects of proposed actions and, given a goal behaviour, permitting fast generation of a candidate action. The learning can take place in high-dimensional nonlinear control spaces with real-valued ranges of variables. Furthermore, the method avoids a number of shortcomings of earlier learning methods in which the controller can become trapped in inadequate performance which does not improve. Also considered is how the system is made resistant to noisy inputs and how it adapts to environmental changes. A well-founded mechanism for choosing actions is introduced which solves the experiment/perform dilemma for this domain with adequate computational efficiency, and with fast convergence to the goal behaviour. The dissertation explains in detail how the SAB control cycle can be integrated into both low and high complexity tasks. The methods and algorithms are evaluated with numerous experiments using both real and simulated robot domains. The final experiment also illustrates how a compound learning task can be structured into a hierarchy of simple learning tasks.
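The store-everything-and-recall-the-nearest idea can be sketched as follows (a minimal illustration assuming NumPy; the class name and linear scan are my own, and a real system would use a spatial index such as a kd-tree for fast recall):

```python
import numpy as np

class MemoryBasedLearner:
    """Remember every (state, action) -> outcome experience; predict the
    outcome of a new situation by recalling the closest stored one.
    Linear scan keeps the sketch short; fast recall needs a kd-tree."""
    def __init__(self):
        self.inputs, self.outcomes = [], []

    def remember(self, state_action, outcome):
        self.inputs.append(np.asarray(state_action, dtype=float))
        self.outcomes.append(outcome)

    def predict(self, state_action):
        q = np.asarray(state_action, dtype=float)
        dists = [np.linalg.norm(q - x) for x in self.inputs]
        return self.outcomes[int(np.argmin(dists))]

m = MemoryBasedLearner()
m.remember([0.0, 0.0], "stay")
m.remember([1.0, 0.5], "turn-left")
```

Because every experience is kept, predictions improve monotonically with experience and the learner cannot "forget" its way into the trapped-performance failure mode the abstract mentions.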
The approximation power of moving least-squares
Math. Comp., 1998
Cited by 108 (6 self)

Abstract:
A general method for near-best approximations to functionals on R^d, using scattered-data information, is discussed. The method is actually the moving least-squares method, presented by the Backus-Gilbert approach. It is shown that the method works very well for interpolation, smoothing, and derivatives' approximations. For the interpolation problem this approach gives McLain's method. The method is near-best in the sense that the local error is bounded in terms of the error of a local best polynomial approximation. The interpolation approximation in R^d is shown to be a C^∞ function, and an approximation order result is proven for quasi-uniform sets of data points.
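The moving least-squares idea can be sketched in one dimension (assuming NumPy; the Gaussian weight and degree-1 basis are illustrative choices): at each evaluation point x, fit a local polynomial by least squares weighted toward x, then evaluate that polynomial at x.

```python
import numpy as np

def mls_eval(x, X, y, h=1.0):
    """Moving least-squares value at x: fit a local linear polynomial
    by weighted least squares with a Gaussian weight centered at x,
    then evaluate it at x. (1-D, degree-1 sketch.)"""
    w = np.exp(-((X - x) ** 2) / h ** 2)
    B = np.stack([np.ones_like(X), X - x], axis=1)  # basis: 1, (t - x)
    A = B.T @ (w[:, None] * B)                      # weighted normal eqns
    a = np.linalg.solve(A, B.T @ (w * y))
    return a[0]  # the polynomial at t = x is just the constant term

X = np.linspace(0.0, 4.0, 9)
y = 2.0 * X + 1.0   # data on a line
```

A degree-m basis reproduces polynomials of degree ≤ m exactly, which is the reproduction property behind the local near-best error bound.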
Scattered Data Interpolation with Multilevel Splines
IEEE Transactions on Visualization and Computer Graphics, 1997
Cited by 106 (9 self)

Abstract:
This paper describes a fast algorithm for scattered data interpolation and approximation. Multilevel B-splines are introduced to compute a C²-continuous surface through a set of irregularly spaced points. The algorithm makes use of a coarse-to-fine hierarchy of control lattices to generate a sequence of bicubic B-spline functions whose sum approaches the desired interpolation function. Large performance gains are realized by using B-spline refinement to reduce the sum of these functions into one equivalent B-spline function. Experimental results demonstrate that high-fidelity reconstruction is possible from a selected set of sparse and irregular samples.
Discovering Structure in Multiple Learning Tasks: The TC Algorithm
In International Conference on Machine Learning, 1996
Cited by 89 (3 self)

Abstract:
Recently, there has been an increased interest in "lifelong" machine learning methods that transfer knowledge across multiple learning tasks. Such methods have repeatedly been found to outperform conventional, single-task learning algorithms when the learning tasks are appropriately related. To increase the robustness of such approaches, methods are desirable that can reason about the relatedness of individual learning tasks, in order to avoid the danger arising from tasks that are unrelated and thus potentially misleading. This paper describes the task-clustering (TC) algorithm. TC clusters learning tasks into classes of mutually related tasks. When facing a new learning task, TC first determines the most related task cluster, then exploits information selectively from this task cluster only. An empirical study carried out in a mobile robot domain shows that TC outperforms its non-selective counterpart in situations where only a small number of tasks is relevant.
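The clustering step can be sketched with a stand-in relatedness measure (my own simplification, not the TC algorithm itself; in TC, relatedness would be estimated from cross-task transfer experiments, and the threshold and greedy scheme here are assumptions):

```python
import numpy as np

def cluster_tasks(relatedness, threshold=0.5):
    """Greedy task clustering: task j joins task i's cluster when their
    mutual relatedness exceeds `threshold`. Transfer is then restricted
    to tasks within the same cluster."""
    n = len(relatedness)
    cluster = [-1] * n
    next_id = 0
    for i in range(n):
        if cluster[i] == -1:
            cluster[i] = next_id
            next_id += 1
        for j in range(i + 1, n):
            if cluster[j] == -1 and relatedness[i][j] > threshold:
                cluster[j] = cluster[i]
    return cluster

# Illustrative relatedness matrix: tasks 0,1 related; tasks 2,3 related.
R = np.array([[1.0, 0.9, 0.1, 0.2],
              [0.9, 1.0, 0.2, 0.1],
              [0.1, 0.2, 1.0, 0.8],
              [0.2, 0.1, 0.8, 1.0]])
```

A new task would be matched against each cluster and would borrow knowledge only from the best-matching one, which is what protects the learner from misleading, unrelated tasks.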
Multistep scattered data interpolation using compactly supported radial basis functions
J. Comp. Appl. Math., 1996
Cited by 64 (12 self)

Abstract:
A hierarchical scheme is presented for smoothly interpolating scattered data with radial basis functions of compact support. A nested sequence of subsets of the data is computed efficiently using successive Delaunay triangulations. The scale of the basis function at each level is determined from the current density of the points using information from the triangulation. The method is rotationally invariant and has good reproduction properties. Moreover, the solution can be calculated and evaluated in acceptable computing time. During the last two decades radial basis functions have become a well-established tool for multivariate interpolation of both scattered and gridded data; see [2,7,8,22,25] for some surveys. The major part ...
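The multistep idea can be sketched in one dimension (assuming NumPy; I use Wendland's compactly supported C² function as an example basis, hand-picked subsets and scales in place of the Delaunay-based selection, so everything but the residual-correction structure is an illustrative assumption): each level interpolates the residual left by the coarser levels, with support shrunk to match that level's point density.

```python
import numpy as np

def wendland(r):
    """Wendland's compactly supported C^2 function (support radius 1)."""
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def fit_level(Xc, y, scale):
    d = np.abs(Xc[:, None] - Xc[None, :]) / scale
    return np.linalg.solve(wendland(d), y)

def eval_level(Xc, w, X, scale):
    d = np.abs(X[:, None] - Xc[None, :]) / scale
    return wendland(d) @ w

def multistep_interpolate(levels, X, y):
    """Coarse-to-fine hierarchy: each level interpolates the residual of
    the coarser levels with a support scaled to its point density.
    `levels` lists (subset indices, support scale), coarse first."""
    residual = y.copy()
    fitted = []
    for idx, scale in levels:
        w = fit_level(X[idx], residual[idx], scale)
        fitted.append((X[idx], w, scale))
        residual = residual - eval_level(X[idx], w, X, scale)
    return fitted

X = np.linspace(0.0, 1.0, 5)
y = X ** 2
levels = [(np.array([0, 2, 4]), 1.0), (np.arange(5), 0.4)]
fitted = multistep_interpolate(levels, X, y)
```

Because each level's matrix is sparse (the basis has compact support) and the finest level interpolates whatever residual remains, the summed levels match the data exactly while each linear solve stays cheap.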