Results 1 - 9 of 9
On a Kernel-based Method for Pattern Recognition, Regression, Approximation, and Operator Inversion
, 1997
"... We present a Kernelbased framework for Pattern Recognition, Regression Estimation, Function Approximation and multiple Operator Inversion. Previous approaches such as ridgeregression, Support Vector methods and regression by Smoothing Kernels are included as special cases. We will show connection ..."
Abstract

Cited by 77 (25 self)
We present a Kernel-based framework for Pattern Recognition, Regression Estimation, Function Approximation and multiple Operator Inversion. Previous approaches such as ridge regression, Support Vector methods and regression by Smoothing Kernels are included as special cases. We will show connections between the cost function and some properties up to now believed to apply to Support Vector Machines only. The optimal solution of all the problems described above can be found by solving a simple quadratic programming problem. The paper closes with a proof of the equivalence between Support Vector kernels and Green's functions of regularization operators.
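To illustrate the quadratic-programming remark above in its simplest special case: for squared loss the framework reduces to kernel ridge regression, whose optimum has a closed form, alpha = (K + lam*I)^{-1} y. The RBF kernel, the regularization constant `lam`, and the toy data below are assumptions made for this sketch, not the paper's own experiment.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, lam=1e-2, gamma=1.0):
    # Closed-form solution of the regularized quadratic problem.
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_test, gamma=1.0):
    return rbf_kernel(X_test, X_train, gamma) @ alpha

# Fit a noisy sine (hypothetical data) and inspect the training residual.
rng = np.random.default_rng(0)
X = np.linspace(0, 2 * np.pi, 50)[:, None]
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(50)
alpha = kernel_ridge_fit(X, y)
y_hat = kernel_ridge_predict(X, alpha, X)
print(np.abs(y_hat - y).max())
```

For non-quadratic losses (e.g. the epsilon-insensitive loss of Support Vector regression) the same problem is instead posed as a genuine quadratic program over the dual variables.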
Neural Networks in System Identification
, 1994
"... . Neural Networks are nonlinear blackbox model structures, to be used with conventional parameter estimation methods. They have good general approximation capabilities for reasonable nonlinear systems. When estimating the parameters in these structures, there is also good adaptability to conce ..."
Abstract

Cited by 10 (3 self)
Neural Networks are nonlinear black-box model structures, to be used with conventional parameter estimation methods. They have good general approximation capabilities for reasonable nonlinear systems. When estimating the parameters in these structures, there is also good adaptability to concentrate on those parameters that matter most for the particular data set. Key Words: Neural Networks, Parameter Estimation, Model Structures, Non-Linear Systems. 1. EXECUTIVE SUMMARY. 1.1. Purpose. The purpose of this tutorial is to explain how Artificial Neural Networks (NN) can be used to solve problems in System Identification, to focus on some key problems and algorithmic questions, and to point out the relationships with more traditional estimation techniques. We also try to remove some of the "mystique" that has sometimes accompanied the Neural Network approach. 1.2. What's the problem? The identification problem is to infer relationships between past inp...
Training Neural Networks with Noisy Data as an Ill-Posed Problem
 Adv. Comp. Math
, 2000
"... This paper is devoted to the analysis of network approximation in the framework of approximation and regularization theory. It is shown that training neural networks and similar network approximation techniques are equivalent to leastsquares collocation for a corresponding integral equation with mo ..."
Abstract

Cited by 5 (5 self)
This paper is devoted to the analysis of network approximation in the framework of approximation and regularization theory. It is shown that training neural networks and similar network approximation techniques are equivalent to least-squares collocation for a corresponding integral equation with mollified data. Results about convergence and convergence rates for exact data are derived based upon well-known convergence results about least-squares collocation. Finally, the stability properties with respect to errors in the data are examined and stability bounds are obtained, which yield rules for the choice of the number of network elements. Keywords: ill-posed problems, least-squares collocation, neural networks, network training, regularization. AMS Subject Classification: 41A15, 41A30, 45L10, 65J20, 92B20. Short Title: Training Neural Networks with Noisy Data.
Uniform Approximation of Functions with Random Bases
"... Abstract — Random networks of nonlinear functions have a long history of empirical success in function fitting but few theoretical guarantees. In this paper, using techniques from probability on Banach Spaces, we analyze a specific architecture of random nonlinearities, provide L ∞ and L2 error boun ..."
Abstract

Cited by 4 (0 self)
Random networks of nonlinear functions have a long history of empirical success in function fitting but few theoretical guarantees. In this paper, using techniques from probability on Banach Spaces, we analyze a specific architecture of random nonlinearities, provide L∞ and L2 error bounds for approximating functions in Reproducing Kernel Hilbert Spaces, and discuss scenarios when these expansions are dense in the continuous functions. We discuss connections between these random nonlinear networks and popular machine learning algorithms and show experimentally that these networks provide competitive performance at far lower computational cost on large-scale pattern recognition tasks.
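A minimal sketch of such a random nonlinear network, assuming cosine nonlinearities with frozen random weights (a random-Fourier-feature style architecture); only the linear output layer is fitted. The feature count, frequency scale, and target function are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_features(X, W, b):
    """Random cosine nonlinearities; W and b are drawn once, then frozen."""
    return np.cos(X @ W + b)

n_features = 200
X = np.linspace(-3, 3, 80)[:, None]
y = np.tanh(2 * X[:, 0])                     # target function to approximate

# Random inner weights are never trained.
W = rng.normal(scale=2.0, size=(1, n_features))
b = rng.uniform(0, 2 * np.pi, n_features)

# Only the output weights are fitted, by linear least squares.
Phi = random_features(X, W, b)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ w
print(np.abs(y_hat - y).max())
```

Because the inner weights are fixed, training collapses to a convex least-squares problem, which is the computational advantage such networks trade against the loss of adaptive nonlinearities.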
Analysis of Tikhonov Regularization for Function Approximation by Neural Networks
 Neural Networks
, 2001
"... . This paper is devoted to the convergence and stability analysis of Tikhonov regularization for function approximation by a class of feedforward neural networks with one hidden layer and linear output layer. We investigate two frequently used approaches, namely regularization by output smoothing a ..."
Abstract

Cited by 3 (1 self)
This paper is devoted to the convergence and stability analysis of Tikhonov regularization for function approximation by a class of feedforward neural networks with one hidden layer and a linear output layer. We investigate two frequently used approaches, namely regularization by output smoothing and regularization by weight decay, as well as a combination of both methods to combine their advantages. We show that in all cases stable approximations are obtained which converge to the approximated function in a desired Sobolev space as the noise in the data tends to zero (in the weaker L2 norm), provided the regularization parameter and the number of units in the network are chosen appropriately. Under additional smoothness assumptions we are able to show convergence rate results in terms of the noise level and the number of units in the network. In addition, we show how the theoretical results can be applied to the important classes of perceptrons with one hidden layer and to translation networks. Finally, the performance of the different approaches is compared in some numerical examples. Key Words: Ill-posed problems, neural networks, Tikhonov regularization, output smoothing, weight decay, function approximation. AMS Subject Classifications: 65J20, 92B20, 41A30.
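For the linear output layer of such a one-hidden-layer network, weight decay is exactly Tikhonov (ridge) regularization: minimize ||H w - y||^2 + lam ||w||^2. The sketch below, with hypothetical random hidden weights and toy data, only illustrates the stabilizing effect (larger decay gives a smaller-norm weight vector); it is not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

def hidden(X, A, c):
    """Fixed sigmoidal hidden layer with hypothetical random weights A, c."""
    return 1.0 / (1.0 + np.exp(-(X @ A + c)))

# Noisy samples of a smooth target (illustrative data).
X = np.linspace(-1, 1, 60)[:, None]
y_noisy = X[:, 0] ** 2 + 0.1 * rng.standard_normal(60)

A = rng.normal(size=(1, 30))
c = rng.normal(size=30)
H = hidden(X, A, c)

def fit(lam):
    # Tikhonov-regularized normal equations for the output weights.
    return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ y_noisy)

w_small, w_large = fit(1e-6), fit(1e-1)
print(np.linalg.norm(w_small), np.linalg.norm(w_large))
```

The ridge solution's norm decreases monotonically in the regularization parameter, which is the elementary mechanism behind the stability bounds the abstract refers to.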
LQ performance bounds for adaptive output feedback controllers for functionally uncertain nonlinear systems
, 2002
"... ..."
Constructive Function Approximation: Theory and Practice
 In Intelligent Methods in Signal Processing and Communications
, 1997
"... In this paper we study the theoretical limits of finite constructive convex approximations of a given function in a Hilbert space using elements taken from a reduced subset. We also investigate the tradeo# between the global error and the partial error during the iterations of the solution. The ..."
Abstract

Cited by 1 (1 self)
In this paper we study the theoretical limits of finite constructive convex approximations of a given function in a Hilbert space using elements taken from a reduced subset. We also investigate the tradeoff between the global error and the partial error during the iterations of the solution. These results are then specialized to constructive function approximation using sigmoidal neural networks. The emphasis then shifts to the implementation issues associated with the problem of achieving given approximation errors when using a finite number of nodes and a finite data set for training.
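The flavor of constructive approximation can be sketched with a greedy scheme: add one sigmoidal element at a time from a finite dictionary and refit the output weights at each step, so the residual shrinks as nodes are added. This orthogonal-matching-pursuit-style refit is a stand-in illustration, not the paper's own convex-combination scheme, and the dictionary parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.linspace(-2, 2, 100)[:, None]
y = np.sin(2 * X[:, 0])                      # target to approximate

# Finite dictionary of candidate sigmoidal units (hypothetical parameters).
dict_a = rng.normal(scale=3.0, size=40)
dict_b = rng.normal(scale=2.0, size=40)
D = 1.0 / (1.0 + np.exp(-(X * dict_a + dict_b)))   # shape (100, 40)

chosen, residual, errors = [], y.copy(), []
for _ in range(8):
    # Pick the dictionary column most correlated with the current residual.
    scores = np.abs(D.T @ residual)
    scores[chosen] = -np.inf                 # do not reuse chosen columns
    chosen.append(int(np.argmax(scores)))
    # Refit all output weights over the chosen columns.
    B = D[:, chosen]
    w, *_ = np.linalg.lstsq(B, y, rcond=None)
    residual = y - B @ w
    errors.append(np.linalg.norm(residual))
print(errors)
```

Tracking `errors` against the iteration count is exactly the global-versus-partial-error tradeoff the abstract mentions: each added node can only reduce the global residual, at the cost of another estimation step on finite data.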
Function Approximation by Three-Layered Networks and Its Error Bounds: An Integral Representation Theorem
, 1994
"... Neural Networks are widely noticed to provide a nonlinear function approximation method. In order to make its approximation ability clear, a new theorem on an integral transform of ridge functions is presented. By using this theorem, an approximation bound, which clarifies the quantitative relations ..."
Abstract

Cited by 1 (1 self)
Neural Networks are widely recognized as providing a nonlinear function approximation method. In order to make this approximation ability clear, a new theorem on an integral transform of ridge functions is presented. Using this theorem, an approximation bound can be obtained which clarifies the quantitative relationship between the approximation accuracy and the number of elements in the hidden layer. This result shows that the approximation accuracy depends on the smoothness of the target functions. It also shows that approximation methods which use ridge functions are free from the "curse of dimensionality". 1 Overview In the middle of the 1980s, computational research on neural networks was activated by the work of the Parallel Distributed Processing (PDP) group, and multilayered networks with sigmoidal output functions together with backpropagation learning played important roles in this movement. The many kinds of examples provided by the PDP group attracted the interest of other rese...
Approximation Properties of Local Bases Assembled From Neural Network Transfer Functions
, 1997
"... The adaptive datadriven emulation and control of mechanical systems are popular applications of artificial neural networks in engineering. However, multilayer perceptron training is an illposed nonlinear optimization problem. This paper explores a method to constrain network parameters so that co ..."
Abstract
The adaptive data-driven emulation and control of mechanical systems are popular applications of artificial neural networks in engineering. However, multilayer perceptron training is an ill-posed nonlinear optimization problem. This paper explores a method to constrain network parameters so that conventional computational techniques for function approximation can be used during training. This is accomplished by forming local basis functions which provide accurate approximation and stable evaluation of the network parameters. It is noted that this approach is quite general and does not violate the principles of network architecture. By employing the concept of shift-invariant subspaces, this approach yields a new and more robust error condition for feedforward artificial neural networks and allows one to both characterize and control the accuracy of the local bases formed. The two methods used are: 1) adding bases while altering their shape and keeping their spacing constant and 2) ad...