## Enhancing Fault Tolerance of Radial Basis Functions

Citations: 1 (0 self)

### BibTeX

@MISC{Eickhoff_enhancingfault,
  author = {Ralf Eickhoff and Ulrich Rückert},
  title = {Enhancing Fault Tolerance of Radial Basis Functions},
  year = {}
}

### Abstract

The challenge of future nanoelectronic applications, e.g. in quantum computing or in molecular computing, is to assure reliable computation in the face of a growing number of malfunctioning and failing computational units. Modeled on biology, artificial neural networks are intended to be a preferred architecture for these applications because their structure allows distributed information processing and therefore promises tolerance of malfunctioning neurons and robustness to noise. In this work, methods to enhance the fault tolerance of Radial Basis Function networks against permanently failing neurons are investigated for function approximation applications. To this end, a relevance measure is introduced which can be used to enhance fault tolerance or, conversely, to control the network complexity when used for pruning.

### Citations

3629 | Neural Networks: A Comprehensive Foundation (2nd ed.)
- Haykin
- 1999
Citation Context ...lerance or, on the contrary, to control the network complexity if it is used for pruning. I. INTRODUCTION Neural networks are used for function approximation purposes of any continuous functions [1], [2]. Especially, Radial Basis Function (RBF) networks are employed to approximate an unknown function because these networks have mathematically desired transfer functions. Moreover, the main reason why ...
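
The RBF superposition described in this context, a weighted sum of Gaussian basis functions with centers c_k, widths σ_k, and output weights α_k, can be sketched in Python; all names below are illustrative, not from the paper:

```python
import numpy as np

def rbf_output(x, centers, widths, weights):
    """Output of an RBF network: the weighted superposition of m Gaussian
    basis functions with centers c_k, widths sigma_k, output weights alpha_k."""
    d2 = np.sum((centers - x) ** 2, axis=1)   # squared distance to every center
    phi = np.exp(-d2 / (2.0 * widths ** 2))   # Gaussian activation of each neuron
    return float(weights @ phi)               # weighted sum -> network output
```

With `x` placed exactly on a center, that neuron's activation is 1 and it contributes its full weight to the output.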

311 | Regularization theory and neural-network architectures, Neural Computation
- Girosi, Jones, et al.
- 1995
Citation Context ...which have different centers c_i are superposed and each is denoted by a weight α_i to produce the output. RBF networks can be used for local function approximation [9]. Based on regularization theory [10], the quadratic error is minimized with respect to a stabilizing term. Based on this stabilizer the interpolation and approximation quality is controlled in order to achieve smooth approximations. By us...

180 | The Handbook of Brain Theory and Neural Networks (2nd rev. ed.)
- Arbib
Citation Context ...architectures are capable of adapting to changing environments. From biology it can be observed that biological neural networks are fault tolerant to the loss of several neurons and robust to noise [2], [3]. Imprecise inputs, such as noisy or incomplete ones, can be processed by biological neural networks, and reliable computation of these systems can be further assured although system behavior has been changed...

142 | Netlab: Algorithms for Pattern Recognition
- Nabney
- 2001
Citation Context ...ons are used [16], [17]. The training set consists of 1000 randomly generated input vectors based on the density function. The Basis Function network is trained by using the NETLAB toolbox for MATLAB [18] in order to achieve a mean squared error of 0.1 on the training data. The training algorithm determines the centers of the network by fitting a Gaussian mixture model with circular covariances using the...
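
The training procedure quoted here, centers from a Gaussian mixture with circular covariances followed by solving for the output weights, can be approximated in Python with scikit-learn in place of NETLAB; the target function, data range, and network size below are placeholders, not the paper's benchmark:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(1000, 2))   # 1000 random input vectors (placeholder range)
y = np.sin(X[:, 0]) + X[:, 1] ** 2           # placeholder target function

m = 20  # number of hidden neurons (placeholder)

# Step 1: place the centers by fitting a Gaussian mixture model with
# circular (spherical) covariances, mirroring the quoted procedure.
gmm = GaussianMixture(n_components=m, covariance_type="spherical",
                      random_state=0).fit(X)
centers = gmm.means_                # (m, 2) centers
widths = np.sqrt(gmm.covariances_)  # (m,) per-neuron sigma

# Step 2: with centers and widths fixed, the output weights are a linear
# least-squares problem on the hidden-layer design matrix Phi.
d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
Phi = np.exp(-d2 / (2.0 * widths ** 2))
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
mse = float(np.mean((Phi @ w - y) ** 2))
```

Splitting training this way (nonlinear center placement, then linear weight fitting) is what makes RBF training cheap compared with full gradient descent over all parameters.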

132 | Fundamentals of Error-Correcting Codes
- Huffman, Pless
- 2003
Citation Context ...djusting the output weights of the added neurons their output values can be used to determine the value which is in the majority. Consequently, this procedure is similar to the use of a repetition code [15]. Therefore, the value outnumbering the other outputs finally determines the output value of the original neuron. For continuous values confidence intervals can be used to determine the output value c...
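
One minimal way to realise the repetition-code idea for continuous neuron outputs is a median-based vote; this is a sketch of the general technique, not the authors' exact confidence-interval scheme, and the tolerance value is an assumption:

```python
import numpy as np

def majority_output(outputs, tol=0.1):
    """Combine the outputs of replicated copies of one neuron.

    For continuous values an exact majority vote is replaced by a
    confidence interval around the median: copies within tol of the
    median form the 'majority' and their mean is returned, so a single
    faulty copy among three is simply outvoted.
    """
    outputs = np.asarray(outputs, dtype=float)
    med = np.median(outputs)
    healthy = outputs[np.abs(outputs - med) <= tol]
    return float(healthy.mean())

# Two healthy copies near 1.0 outvote a copy stuck at 0.0:
combined = majority_output([1.02, 0.0, 0.98])  # -> 1.0
```

As with a repetition code, this masks one permanent fault per triplicated neuron at the cost of a threefold hardware overhead.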

129 | An automatic method for finding the greatest or least value of a function
- Rosenbrock
- 1960
Citation Context ...robability. C. Computational Results Here, some computational results are shown for function approximation purpose of RBF networks. For the approximated function several test functions are used [16], [17]. The training set consists of 1000 randomly generated input vectors based on the density function. The Basis Function network is trained by using the NETLAB toolbox for MATLAB [18] in order to achiev...

96 | Networks and the best approximation property
- Girosi, Poggio
- 1990
Citation Context ...At all, m different Basis Functions which have different centers c_i are superposed and each is denoted by a weight α_i to produce the output. RBF networks can be used for local function approximation [9]. Based on regularization theory [10], the quadratic error is minimized in respect to a stabilizing term. Based on this stabilizer the interpolation and approximation quality is controlled in order to...

91 | Evaluating the CMA evolution strategy on multimodal test functions
- Hansen, Kern
Citation Context ... the probability. C. Computational Results Here, some computational results are shown for function approximation purpose of RBF networks. For the approximated function several test functions are used [16], [17]. The training set consists of 1000 randomly generated input vectors based on the density function. The Basis Function network is trained by using the NETLAB toolbox for MATLAB [18] in order to ...

32 | Convex Analysis and Minimization Algorithms. I: Fundamentals, Grundlehren der
- Hiriart-Urruty, Lemaréchal
- 1993
Citation Context ... same hardware, but due to its parallel architecture neural networks seem to be fault tolerant and, therefore, suitable in those (nano-)technologies. ...leads to a minimization problem with constraints [14]

$$\min_{\vec w \in \mathbb{R}^{(2+n)\cdot m}} \; \frac{1}{M}\sum_{i=1}^{M} \left( f_m(\vec x_i, \vec w) - y_i \right)^2 \qquad (14)$$

$$s_k(\vec w) - b_k \le 0 \quad \text{for } k = 1,\dots,m \qquad (15)$$

where $\vec w$ denotes the parameter vector including all variances, output weights and centers, $b_k$ the maxim...
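
A toy instance of the constrained minimization (14)-(15) can be prototyped with SciPy's SLSQP solver; the one-neuron model `f` and the relevance stand-in below are hypothetical, chosen only to show how the inequality constraints are passed:

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance of (14)-(15): minimize a mean squared error over the
# parameter vector w, subject to a relevance bound for the (single) neuron.
X = np.linspace(-1.0, 1.0, 50)
y = X ** 2

def f(w, x):
    # single Gaussian neuron: output weight w[0], width w[1]
    return w[0] * np.exp(-x ** 2 / (2.0 * w[1] ** 2))

def mse(w):
    return float(np.mean((f(w, X) - y) ** 2))

b = 2.0  # bound b_k on the (stand-in) relevance s_k(w) = |w[0]|
# SciPy expresses inequality constraints as g(w) >= 0, so the paper's
# s_k(w) - b_k <= 0 becomes b_k - s_k(w) >= 0.
cons = [{"type": "ineq", "fun": lambda w: b - abs(w[0])}]

res = minimize(mse, x0=np.array([1.0, 1.0]), constraints=cons)  # SLSQP
```

The feasible starting point keeps the solver inside the relevance bound; `res.x` then holds the constrained parameter vector.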

24 | A defect- and fault-tolerant architecture for nanocomputers, Nanotechnology
- Han, Jonker
- 2003
Citation Context ...c technology, e.g. in quantum or molecular computing [4], one major challenge and problem will be to assure computational processes under more unreliable processing units than in today’s technology [5]. Artificial networks are supposed to model their biological counterparts and, therefore, one can assume that desired characteristics like fault tolerance and robustness are adopted [3]. As was shown ...

17 | A constructive method for multivariate function approximation by multilayer perceptrons
- Geva, Sitte
- 1992
Citation Context ...lt tolerance or, on the contrary, to control the network complexity if it is used for pruning. I. INTRODUCTION Neural networks are used for function approximation purposes of any continuous functions [1], [2]. Especially, Radial Basis Function (RBF) networks are employed to approximate an unknown function because these networks have mathematically desired transfer functions. Moreover, the main reason...

17 | A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation
- Huang, Saratchandran, et al.
- 2005
Citation Context ...ned as

$$s_k = E\{d^q(y, \hat y)\} = \int_X |\alpha_k|^q \, e^{-\frac{q\|\vec x - \vec c_k\|^2}{2\sigma_k^2}} \, p_X(\vec x)\, d\vec x \qquad (6)$$

(6) specifies the statistical contribution of the k-th neuron to the output of an RBF network and is similarly defined in [11]. The parameter q > 0 determines the used Lq norm as distance measurement or can be seen as higher order moments of the random variable. The significance of each neuron depends on the density function...
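
The relevance s_k of eq. (6) is an expectation over the input density p_X, so it can be estimated by Monte Carlo sampling; a sketch (function and parameter names are mine, not the paper's):

```python
import numpy as np

def relevance(alpha_k, c_k, sigma_k, samples, q=2.0):
    """Monte Carlo estimate of the relevance s_k of eq. (6): the expectation,
    over inputs x drawn from the density p_X, of
    |alpha_k|^q * exp(-q * ||x - c_k||^2 / (2 * sigma_k^2))."""
    d2 = np.sum((samples - c_k) ** 2, axis=1)
    return float(np.mean(np.abs(alpha_k) ** q
                         * np.exp(-q * d2 / (2.0 * sigma_k ** 2))))
```

Neurons whose centers fall where p_X carries little mass receive a small s_k, marking them as pruning candidates; conversely, large-s_k neurons are the ones worth protecting against faults.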

15 | Fault Tolerance in Artificial Neural Networks
- Bolt
- 1992
Citation Context ... networks one has to distinguish between two phases of neural network processing: training and operation. The effects of faults occurring in these two periods normally differ within a neural network’s lifecycle [13]. Moreover, within these two distinct periods two different types of fault locations can also be identified. On the one hand, there exist stable elements whose functionality does not change at any time ...

11 | Overview of Nanoelectronic Devices
- Goldhaber-Gordon, Montemerlo, et al.
- 1997
Citation Context ... The redundancy in its structure allows the networks to lose neurons without severely impairing the computational process. In future nanoelectronic technology, e.g. in quantum or molecular computing [4], one major challenge and problem will be to assure computational processes under more unreliable processing units than in today’s technology [5]. Artificial networks are supposed to model their bio...

8 | Feedforward sigmoidal networks: Equicontinuity and fault-tolerance properties
- Chandra, Singh
Citation Context ...rtificial networks are supposed to model their biological counterparts and, therefore, one can assume that desired characteristics like fault tolerance and robustness are adopted [3]. As was shown in [6], [7] artificial neural networks are only immune to noisy inputs and fault tolerant to malfunctioning Ralf Eickhoff is with the Heinz Nixdorf Institute, System and Circuit Technology, University of Pa...

3 | Robustness of Radial Basis Functions
- Eickhoff, Rückert
- 2005
Citation Context ...cial networks are supposed to model their biological counterparts and, therefore, one can assume that desired characteristics like fault tolerance and robustness are adopted [3]. As was shown in [6], [7] artificial neural networks are only immune to noisy inputs and fault tolerant to malfunctioning Ralf Eickhoff is with the Heinz Nixdorf Institute, System and Circuit Technology, University of Paderbo...

2 | Handbook of Mathematics (3rd ed.)
- Bronstein, Semendyayev
- 1997
Citation Context ... analysis can be performed for additional density distributions as well. A. Uniform Distribution If the test vectors are uniformly drawn from the range X the multivariate density function is given by [12]

$$p_X(\vec x) = \prod_{i=1}^{n} p_{X_i}(x_i), \qquad p_{X_i}(x_i) = \begin{cases} \frac{1}{b_i} & d_i - \frac{b_i}{2} \le x_i \le d_i + \frac{b_i}{2} \\ 0 & \text{else} \end{cases} \qquad (7)$$

where $d_i$ is the mean of the uniform distribution and $b_i$ is the width of the distribution. Because $\sigma_k \in \mathbb{R} \setminus \{0\}$ is fu...
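
The product-of-uniforms density in eq. (7) is straightforward to evaluate; a small sketch (names are mine, not the paper's):

```python
import numpy as np

def uniform_density(x, d, b):
    """Multivariate uniform density of eq. (7): a product of independent 1-D
    uniforms, each of height 1/b_i on the interval [d_i - b_i/2, d_i + b_i/2]."""
    x, d, b = (np.asarray(v, dtype=float) for v in (x, d, b))
    inside = np.abs(x - d) <= b / 2.0
    return float(np.prod(np.where(inside, 1.0 / b, 0.0)))
```

Because the density factorizes per dimension, the relevance integral (6) under this distribution also splits into a product of one-dimensional integrals, which is what makes the uniform case analytically tractable.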