
## Growing radial basis neural networks: Merging supervised and unsupervised learning with network growth techniques (1997)

Venue: IEEE Transactions on Neural Networks

Citations: 58 (3 self)

### Citations

3696 |
Learning internal representations by error propagation
- Rumelhart, Hinton, et al.
- 1986
Citation Context: ...ing GRBF networks in these experiments. Two feedforward neural networks with five and ten hidden units were also trained to perform the same classification task by the error backpropagation algorithm [25]. Table III shows the number and percentage of classification errors produced by the trained networks tested and the central processing unit (CPU) time required for their training. All networks tested...

2068 |
Pattern Recognition with Fuzzy Objective Function Algorithms
- Bezdek
- 1981
Citation Context: ...s the prototypes of the training vectors essentially involve clustering or learning vector quantization (LVQ). The development of clustering algorithms is frequently based on alternating optimization [3]. In contrast, LVQ algorithms map the feature vectors to the prototypes by minimizing a loss function using gradient descent [12], [13]. Clustering is based on the assignment of the feature vectors in...
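The LVQ side of the contrast drawn in this excerpt can be made concrete with a minimal winner-take-all prototype update driven by gradient descent on the local quantization error. The data, initialization, and learning rate below are illustrative assumptions, not the algorithms of [12], [13].

```python
import numpy as np

def lvq_step(prototypes, x, lr=0.05):
    """Move the winning prototype toward x: one gradient-descent step on
    the local quantization loss ||x - v_winner||^2 / 2."""
    winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
    prototypes[winner] += lr * (x - prototypes[winner])
    return prototypes

# Two synthetic Gaussian clusters around (0, 0) and (1, 1).
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                  rng.normal(1.0, 0.1, (50, 2))])

protos = np.array([[0.25, 0.25], [0.75, 0.75]])  # deliberately split init
for _ in range(20):                              # a few passes over the data
    for x in data:
        protos = lvq_step(protos, x)
```

After a few passes the prototypes settle near the two cluster means. A clustering algorithm in the alternating-optimization family would instead recompute all prototypes at once from a full partition of the data, which is the distinction the excerpt is making.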

1248 | Approximation by superpositions of a sigmoidal function
- Cybenko
- 1989
Citation Context: ...unctions, RBF networks are capable of approximating arbitrarily well any function. Similar proofs also exist in the literature for conventional feedforward neural models with sigmoidal nonlinearities [5]. The performance of an RBF network depends on the number and positions of the radial basis functions, ...
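To make the approximation claim concrete, here is a minimal Gaussian RBF network with fixed centers and a least-squares linear output layer. The Gaussian basis, the width, and the toy regression target are illustrative choices, not the construction used in the cited paper.

```python
import numpy as np

def rbf_hidden(X, centers, width):
    """Hidden-layer responses phi(||x - c_j||) with a Gaussian basis."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# Toy 1-D target (illustrative): y = sin(2*pi*x) on [0, 1].
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (200, 1))
y = np.sin(2.0 * np.pi * X[:, 0])

centers = np.linspace(0.0, 1.0, 10)[:, None]  # 10 fixed centers
H = rbf_hidden(X, centers, width=0.1)
w, *_ = np.linalg.lstsq(H, y, rcond=None)     # linear output weights
y_hat = H @ w
```

With enough well-placed basis functions the fit can be made arbitrarily good, which is the practical content of the universal-approximation results cited in this context.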

801 | The Cascade-Correlation Learning Architecture
- Fahlman, Lebiere
- 1990
Citation Context: ...owing important drawback: the number of radial basis functions is determined a priori. This leads to similar problems as the determination of the number of hidden units for multilayer neural networks [6], [9], [10], [15]. Several researchers attempted to overcome this problem by determining the number and locations of the radial basis function centers using constructive and pruning methods. Fritzke ...

612 |
Fast learning in networks of locally-tuned processing units
- Moody, Darken
- 1989
Citation Context: ...d number of radial basis function centers selected randomly from the training data [4]; 2) RBF networks employing unsupervised procedures for selecting a fixed number of radial basis function centers [18]; and 3) RBF networks employing supervised procedures for selecting a fixed number of radial basis function centers [11], [24]. Although some of the RBF neural networks mentioned above are reported to...

600 |
Multivariable functional interpolation and adaptive networks
- Broomhead, Lowe
- 1988
Citation Context: ...on is therefore synonymous with interpolation between the data points along the constrained surface generated by the fitting procedure as the optimum approximation to this mapping. Broomhead and Lowe [4] were the first to exploit the use of radial basis functions in the design of neural networks and to show how RBF networks model nonlinear relationships and implement interpolation. Micchelli [16] sho...

358 | Interpolation of scattered data: Distance matrices and conditionally positive definite functions
- Micchelli
- 1986
Citation Context: ...Lowe [4] were the first to exploit the use of radial basis functions in the design of neural networks and to show how RBF networks model nonlinear relationships and implement interpolation. Micchelli [16] showed that RBF neural networks can produce an interpolating surface which exactly passes through all the pairs of the training set. However, in applications the exact fit is neither useful nor desir...
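The exact-interpolation result described here corresponds to placing one basis function on every training point and solving the linear system Φw = y, whose matrix Micchelli showed to be nonsingular for distinct points under suitable φ. A small sketch with a Gaussian φ (an illustrative choice of basis and width):

```python
import numpy as np

# One basis function per training pair: the fitted surface passes
# exactly through every (x_i, y_i).
X = np.array([0.0, 0.3, 0.6, 1.0])
y = np.array([0.0, 1.0, 0.5, -1.0])

def phi(r, width=0.3):
    return np.exp(-(r / width) ** 2)

Phi = phi(np.abs(X[:, None] - X[None, :]))  # entries phi(|x_i - x_j|)
w = np.linalg.solve(Phi, y)                 # nonsingular for distinct points
recon = Phi @ w                             # evaluation at the training points
```

As the surrounding text warns, this exact fit can produce anomalous surfaces between the data points, which is why practical RBF networks use far fewer basis functions than training pairs.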

337 |
Regularization algorithms for learning that are equivalent to multilayer networks
- Poggio, Girosi
- 1990
Citation Context: ...actly passes through all the pairs of the training set. However, in applications the exact fit is neither useful nor desirable since it may produce anomalous interpolation surfaces. Poggio and Girosi [24] viewed the learning process in an RBF network as an ill-posed problem, in the sense that the information in the training data is not sufficient to reconstruct uniquely the mapping in regions where da...

181 |
Universal approximation using radial-basis-function networks
- Park, Sandberg
- 1991
Citation Context: ...y the mapping in regions where data are not available. Thus learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. Park and Sandberg [22], [23] proved that RBF networks with one hidden layer are capable of universal approximation. Under certain mild conditions on the radial basis functions, RBF networks are capable of approximating arb...

165 |
The irises of the Gaspe Peninsula,
- Anderson
- 1935
Citation Context: ...considers each cluster as a fuzzy set, while each feature vector may be assigned to multiple clusters with some degree of certainty measured by the membership function taking values from the interval [0, 1]. The FCM algorithm can be summarized as follows [3]. 1) Select and ; set ; generate the initial set of prototypes: . 2) Set (increment iteration counter). 3) Calculate 4) Calculate . 5) If then go to...
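The FCM steps quoted above lost their symbols in extraction; the standard alternating-optimization form of [3] can be sketched as follows, with the fuzziness exponent m = 2, a fixed iteration budget, and a toy data set as illustrative assumptions.

```python
import numpy as np

def fcm(X, m=2.0, iters=50, eps=1e-12):
    """Fuzzy c-means (c = 2 here): alternate the membership update and the
    prototype update until the iteration budget is spent."""
    V = np.array([[0.25, 0.25], [0.75, 0.75]])  # simple deterministic init
    for _ in range(iters):
        # Squared distances of every vector to every prototype.
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2) + eps
        # Membership update: u_ij proportional to d_ij^(-2/(m-1)).
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)  # memberships in [0, 1]
        # Prototype update: membership-weighted means of the data.
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return V, U

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.1, (40, 2)),
               rng.normal(1.0, 0.1, (40, 2))])
V, U = fcm(X)
```

Each row of U sums to one, so every feature vector belongs to all clusters with degrees in [0, 1], which is exactly the fuzzy-partition view described in this excerpt.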

91 |
On training of radial basis function classifiers
- Musavi, Ahmed, et al.
- 1992
Citation Context: ...d and Diamond [2] introduced the dynamic decay adjustment (DDA) algorithm, which works along with a constructive method to adjust the decay factor (width) of each radial basis function. Musavi et al. [19] eliminated the redundant prototypes by merging two prototypes at each adaptation cycle. This paper presents an alternative approach for constructing and training growing radial basis function (GRBF) ...

91 |
Generalized clustering network and Kohonen’s self-organizing schemes
- Pal, Bezdek, et al.
- 1993
Citation Context: ... maximally fuzzy partition if . 3) Fuzzy Algorithms for LVQ: Consider the set of samples and let be the probability density function of . LVQ is frequently based on the minimization of the functional [21] which represents the expectation of the loss function In the above definitions, are membership functions which regulate the competition between the prototypes for the input . The specific form of the...

86 |
Pattern classification using neural networks.
- Lippmann
- 1989
Citation Context: ...performance of GRBF neural networks was evaluated using a set of two-dimensional (2-D) vowel data formed by computing the first two formants F1 and F2 from samples of ten vowels spoken by 67 speakers [17], [20]. This data set has been extensively used to compare different pattern classification approaches because there is significant overlapping between the vectors corresponding to different vowels in...

77 |
Growing cell structures - a self-organizing network for unsupervised and supervised learning
- Fritzke
- 1994
Citation Context: ...], [9], [10], [15]. Several researchers attempted to overcome this problem by determining the number and locations of the radial basis function centers using constructive and pruning methods. Fritzke [7] attempted to solve this problem by the growing cell structure (GCS), a constructive method that allows RBF neural networks to grow by inserting new prototypes into positions in the feature space wher...

42 | Boosting the performance of RBF networks with dynamic decay adjustment
- Berthold, Diamond
- 1995
Citation Context: ...he mapping needs more details. Whitehead and Choate [26] proposed a similar approach which evolves space-filling curves to distribute radial basis functions over the input space. Berthold and Diamond [2] introduced the dynamic decay adjustment (DDA) algorithm, which works along with a constructive method to adjust the decay factor (width) of each radial basis function. Musavi et al. [19] eliminated t...

35 |
Fuzzy Algorithms for Learning Vector Quantization.
- Karayiannis
- 1996
Citation Context: ...tering algorithms is frequently based on alternating optimization [3]. In contrast, LVQ algorithms map the feature vectors to the prototypes by minimizing a loss function using gradient descent [12], [13]. Clustering is based on the assignment of the feature vectors into clusters, which are represented by the prototypes . The certainty of the assignment of the feature vector to the th cluster is measu...

33 |
Practical characteristics of neural network and conventional pattern classifiers
- Ng, Lippmann
- 1991
Citation Context: ...mance of GRBF neural networks was evaluated using a set of two-dimensional (2-D) vowel data formed by computing the first two formants F1 and F2 from samples of ten vowels spoken by 67 speakers [17], [20]. This data set has been extensively used to compare different pattern classification approaches because there is significant overlapping between the vectors corresponding to different vowels in the F...

25 |
Evolving Space-Filling Curves to Distribute Radial Basis Functions Over an Input Space
- Whitehead, Choate
- 1994
Citation Context: ...ructure (GCS), a constructive method that allows RBF neural networks to grow by inserting new prototypes into positions in the feature space where the mapping needs more details. Whitehead and Choate [26] proposed a similar approach which evolves space-filling curves to distribute radial basis functions over the input space. Berthold and Diamond [2] introduced the dynamic decay adjustment (DDA) algori...

21 |
The cascade-correlation learning: a projection pursuit learning perspective
- Hwang, You, et al.
Citation Context: ... important drawback: the number of radial basis functions is determined a priori. This leads to similar problems as the determination of the number of hidden units for multilayer neural networks [6], [9], [10], [15]. Several researchers attempted to overcome this problem by determining the number and locations of the radial basis function centers using constructive and pruning methods. Fritzke [7] at...

6 |
A methodology for constructing fuzzy algorithms for learning vector quantization
- 1997
Citation Context: ...f clustering algorithms is frequently based on alternating optimization [3]. In contrast, LVQ algorithms map the feature vectors to the prototypes by minimizing a loss function using gradient descent [12], [13]. Clustering is based on the assignment of the feature vectors into clusters, which are represented by the prototypes . The certainty of the assignment of the feature vector to the th cluster is...

6 | Efficient Learning Algorithms for Neural Networks - Karayiannis, Venetsanopoulos - 1993

2 |
ALADIN: Algorithms for learning and architecture determination
- Karayiannis
- 1994
Citation Context: ...rtant drawback: the number of radial basis functions is determined a priori. This leads to similar problems as the determination of the number of hidden units for multilayer neural networks [6], [9], [10], [15]. Several researchers attempted to overcome this problem by determining the number and locations of the radial basis function centers using constructive and pruning methods. Fritzke [7] attempte...

2 |
Gradient descent learning of radial basis neural networks
- 1997
Citation Context: ...rvised procedures for selecting a fixed number of radial basis function centers [18]; and 3) RBF networks employing supervised procedures for selecting a fixed number of radial basis function centers [11], [24]. Although some of the RBF neural networks mentioned above are reported to be computationally efficient compared with feedforward neural networks, they have the following important drawback: the...

1 |
Artificial Neural Networks: Learning Algorithms, Performance Evaluation, and Applications
- 1993
Citation Context: ...drawback: the number of radial basis functions is determined a priori. This leads to similar problems as the determination of the number of hidden units for multilayer neural networks [6], [9], [10], [15]. Several researchers attempted to overcome this problem by determining the number and locations of the radial basis function centers using constructive and pruning methods. Fritzke [7] attempted to s...

1 |
Approximation and radial basis function networks
- Park, Sandberg
- 1993
Citation Context: ...mapping in regions where data are not available. Thus learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. Park and Sandberg [22], [23] proved that RBF networks with one hidden layer are capable of universal approximation. Under certain mild conditions on the radial basis functions, RBF networks are capable of approximating arbitrari...