Results 11 - 20 of 185
A Self-Organizing Network That Can Follow Non-Stationary Distributions
, 1997
Cited by 33 (0 self)
Abstract:
A new on-line criterion for identifying "useless" neurons of a self-organizing network is proposed. When this criterion is used in the context of the (previously developed) growing neural gas model to guide deletions of units, the resulting method is able to closely track non-stationary distributions. Slow changes of the distribution are handled by adaptation of existing units. Rapid changes are handled by removal of "useless" neurons and subsequent insertions of new units in other places.

1 Non-stationary data is difficult to handle ...

Non-stationary data distributions can be found in many technical, biological or economic processes. Self-organizing neural networks have rarely been considered for tracking those distributions since many of the models, e.g. the self-organizing map [6], neural gas [8], or the hypercubical map [1], use decaying adaptation parameters. Once the adaptation strength has decayed, the network is "frozen" and thus unable to react to subsequent changes i...
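The utility criterion described in this abstract can be sketched in a few lines. The following is a hedged illustration, not the paper's actual algorithm: it tracks, for each unit, an accumulated error and an accumulated utility (how much error the unit saves compared to its second-nearest competitor) and relocates the least useful unit when its utility is negligible relative to the worst local error. The parameter values (K, DECAY, learning rate) are invented for the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy codebook: a few units in 2-D.
units = rng.random((8, 2))
error = np.zeros(len(units))         # accumulated local quantization error
utility = np.full(len(units), 1e-6)  # accumulated "usefulness" per unit

K = 1000.0     # removal threshold (hypothetical value)
DECAY = 0.995  # per-step decay of error and utility

def step(x):
    """One on-line step: adapt the winner and update its error/utility."""
    global error, utility
    d = np.sum((units - x) ** 2, axis=1)
    s1, s2 = np.argsort(d)[:2]            # nearest and second-nearest unit
    error[s1] += d[s1]
    # Utility of the winner: extra error the input would incur without it.
    utility[s1] += d[s2] - d[s1]
    units[s1] += 0.05 * (x - units[s1])   # move winner toward the input
    error *= DECAY
    utility *= DECAY
    # Relocate the least useful unit if it is far too useless relative
    # to the worst accumulated local error (stands in for deletion plus
    # re-insertion as described in the abstract).
    u_min = np.argmin(utility)
    if error.max() / utility[u_min] > K:
        units[u_min] = x                  # re-insert near current data
        error[u_min] = 0.0
        utility[u_min] = utility.mean()

for _ in range(2000):
    step(rng.random(2))
```

This keeps every unit either useful or freshly relocated, which is the mechanism that lets the network follow a drifting distribution.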
Alternative Ways for Cluster Visualization in Self-Organizing Maps
 In Proc. of the Workshop on Self-Organizing Maps (WSOM'97)
, 1997
Cited by 33 (17 self)
Abstract:
We present two enhanced visualization techniques for the self-organizing map allowing the intuitive representation of input data similarity. The general idea of both approaches is to visualize the relationship of nodes to facilitate the detection of cluster boundaries without modifying the architecture or the basic training process of the SOM. One approach mirrors the movement of weight vectors during the training process within a two-dimensional (virtual) output space, whereas the second results in a grid of connected nodes where the intensity of each connection mirrors the similarity of the underlying data items. Both approaches can be combined to allow improved analysis of the inherent structure of high-dimensional input data and an intuitive recognition of cluster boundaries without the necessity of substantial prior knowledge concerning the input patterns.

1 Introduction

The self-organizing map allows the mapping of high-dimensional input data onto a two-dimensional output space while...
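The second approach described above (connection intensities between neighbouring nodes) can be sketched in the style of a U-matrix. This is a minimal illustration under the assumption of an already trained codebook `W`; the grid size and weight dimension are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 6x6 "trained" SOM codebook: 3-D weight vectors on a 2-D grid.
W = rng.random((6, 6, 3))

# Intensity of each connection between grid neighbours = distance between
# their weight vectors; large values mark likely cluster boundaries.
h_edges = np.linalg.norm(W[:, 1:] - W[:, :-1], axis=-1)  # horizontal links
v_edges = np.linalg.norm(W[1:, :] - W[:-1, :], axis=-1)  # vertical links
```

Plotting the grid with edges shaded by `h_edges`/`v_edges` gives the boundary visualization without touching the SOM training process itself.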
A review of dimension reduction techniques
, 1997
Cited by 31 (4 self)
Abstract:
The problem of dimension reduction is introduced as a way to overcome the curse of dimensionality when dealing with vector data in high-dimensional spaces and as a modelling tool for such data. It is defined as the search for a low-dimensional manifold that embeds the high-dimensional data. A classification of dimension reduction problems is proposed. A survey of several techniques for dimension reduction is given, including principal component analysis, projection pursuit and projection pursuit regression, principal curves and methods based on topologically continuous maps, such as Kohonen’s maps or the generalised topographic mapping. Neural network implementations of several of these techniques are also reviewed, such as the projection pursuit learning network and the BCM neuron with an objective function. Several appendices complement the mathematical treatment of the main text.
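Of the techniques surveyed here, principal component analysis is the simplest concrete instance of "searching for a low-dimensional manifold": a linear subspace. A minimal sketch on synthetic data (the data set and dimensions are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 points near a 1-D line embedded in 3-D, plus small isotropic noise.
t = rng.normal(size=(200, 1))
X = t @ np.array([[2.0, 1.0, -1.0]]) + 0.05 * rng.normal(size=(200, 3))

# PCA: project onto the top-k right singular vectors of the centred data.
Xc = X - X.mean(axis=0)
_, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 1
Z = Xc @ Vt[:k].T                     # low-dimensional coordinates
X_hat = Z @ Vt[:k] + X.mean(axis=0)   # reconstruction in the original space

# Fraction of variance captured by the first principal component.
explained = S[0] ** 2 / np.sum(S ** 2)
```

For data near a linear manifold `explained` is close to 1; the nonlinear methods in the survey (principal curves, topographic maps) generalize this idea to curved manifolds.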
Learning Controllers for Industrial Robots
, 1996
Cited by 27 (14 self)
Abstract:
One of the most significant cost factors in robotics applications is the design and development of real-time robot control software. Control theory helps when linear controllers have to be developed, but it doesn't sufficiently support the generation of non-linear controllers, although in many cases (such as in compliance control), non-linear control is essential for achieving high performance. This paper discusses how Machine Learning has been applied to the design of (non-)linear controllers. Several alternative function approximators, including Multilayer Perceptrons (MLP), Radial Basis Function Networks (RBFNs), and Fuzzy Controllers, are analyzed and compared, leading to the definition of two major families: Open Field Function Approximators and Locally Receptive Field Function Approximators. It is shown that RBFNs and Fuzzy Controllers bear strong similarities, and that both have a symbolic interpretation. This characteristic allows for applying both symbolic and statis...
Growing Self-Organizing Networks - Why?
 In ESANN’96: European Symposium on Artificial Neural Networks
, 1996
Cited by 26 (0 self)
Abstract:
The reasons to use growing self-organizing networks are investigated. First, an overview of several models of this kind is given and they are related to other approaches. Then two examples are presented to illustrate the specific properties and advantages of incremental networks. In each case a non-incremental model is used for comparison purposes. The first example is pattern classification and compares the supervised growing neural gas model to a conventional radial basis function approach. The second example is data visualization and contrasts the growing grid model and the self-organizing feature map.

1. Introduction

Growing (or incremental) network models have no predefined structure. Rather, they are generated by successive addition (and possibly occasional deletion) of elements. At first sight this makes them a lot more complicated than networks with static structure such as normal multilayer perceptrons or self-organizing maps, the topology of which is chosen a priori and do...
Kohonen Feature Maps and Growing Cell Structures - a Performance Comparison
 Advances in Neural Information Processing Systems 5
, 1993
Cited by 26 (3 self)
Abstract:
A performance comparison of two self-organizing networks, the Kohonen Feature Map and the recently proposed Growing Cell Structures, is made. For this purpose several performance criteria for self-organizing networks are proposed and motivated. The models are tested with three example problems of increasing difficulty. The Kohonen Feature Map demonstrates slightly superior results only for the simplest problem. For the other, more difficult and also more realistic problems the Growing Cell Structures exhibit significantly better performance by every criterion. Additional advantages of the new model are that all parameters are constant over time and that size as well as structure of the network are determined automatically.

1 INTRODUCTION

Self-organizing networks are able to generate interesting low-dimensional representations of high-dimensional input data. The most well-known of these models is the Kohonen Feature Map (Kohonen [1982]). So far it has been applied to a large variety of ...
Constructive Feedforward Neural Networks for Regression Problems: A Survey
, 1995
Cited by 21 (0 self)
Abstract:
In this paper, we review procedures for constructing feedforward neural networks in regression problems. While standard backpropagation performs gradient descent only in the weight space of a network with fixed topology, constructive procedures start with a small network and then grow additional hidden units and weights until a satisfactory solution is found. The constructive procedures are categorized according to the resultant network architecture and the learning algorithm for the network weights.

1 Introduction

In recent years, many neural network models have been proposed for pattern classification, function approximation and regression problems. Among them, the class of multilayer feedforward networks is perhaps the most popular. Standard backpropagation performs gradient descent only in the weight space of a network with fixed topology; this approach is analogous to ...
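The constructive idea summarized in this abstract (start small, grow hidden units until the fit is satisfactory) can be illustrated with a toy sketch. This is not any specific procedure from the survey: it greedily adds one Gaussian hidden unit at the point of largest residual and refits the output weights by least squares; the target function, width, and stopping threshold are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)[:, None]
y = np.sin(2 * x).ravel()            # toy regression target

def design(x, centers, width=0.5):
    # Gaussian hidden-unit activations plus a bias column.
    phi = np.exp(-((x - centers.T) ** 2) / (2 * width ** 2))
    return np.hstack([phi, np.ones((len(x), 1))])

centers = x[rng.choice(len(x), 1)]   # start with a single hidden unit
for _ in range(20):
    w, *_ = np.linalg.lstsq(design(x, centers), y, rcond=None)
    resid = y - design(x, centers) @ w
    rmse = np.sqrt(np.mean(resid ** 2))
    if rmse < 0.05:                  # "satisfactory solution": stop growing
        break
    # Grow: place a new hidden unit where the residual is currently largest.
    centers = np.vstack([centers, x[np.argmax(np.abs(resid))]])
```

A fixed-topology network would instead have to guess the number of hidden units in advance; here the architecture and the fit are determined together.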
The LBG-U method for vector quantization - an improvement over LBG inspired from neural networks
, 1997
Cited by 21 (1 self)
Abstract:
A new vector quantization method, denoted LBG-U, is presented which is closely related to a particular class of neural network models (growing self-organizing networks). LBG-U consists mainly of repeated runs of the well-known LBG algorithm. Each time LBG has converged, however, a novel measure of utility is assigned to each codebook vector. Thereafter, the vector with minimum utility is moved to a new location, LBG is run on the resulting modified codebook until convergence, another vector is moved, and so on. Since a strictly monotonic improvement of the LBG-generated codebooks is enforced, it can be proved that LBG-U terminates in a finite number of steps. Experiments with artificial data demonstrate significant improvements in terms of RMSE over LBG, combined with only modestly higher computational costs.
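The outer loop described here is simple enough to sketch. The following is an illustrative reading of the abstract, not the paper's exact algorithm: a utility is computed for each codebook vector as the extra error its removal would cause (its points falling back to their second-nearest vector), the least useful vector is moved next to the highest-error data point, LBG is rerun, and the result is kept only if it strictly improves the total error:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((500, 2))                      # artificial data
code = X[rng.choice(len(X), 8, replace=False)].copy()

def quantize(code, X):
    d = ((X[:, None, :] - code[None, :, :]) ** 2).sum(-1)
    return d, d.argmin(1)

def lbg(code, X, iters=30):
    """Plain LBG (batch k-means): alternate assignment and centroid steps."""
    for _ in range(iters):
        _, near = quantize(code, X)
        for j in range(len(code)):
            pts = X[near == j]
            if len(pts):
                code[j] = pts.mean(0)
    d, _ = quantize(code, X)
    return code, d.min(1).sum()

code, err = lbg(code, X)
for _ in range(5):                            # a few LBG-U outer iterations
    d, near = quantize(code, X)
    d2 = np.sort(d, axis=1)
    # Utility of vector j: extra error incurred if j were removed.
    util = np.array([(d2[near == j, 1] - d2[near == j, 0]).sum()
                     for j in range(len(code))])
    # Move the least useful vector next to the highest-error data point.
    trial = code.copy()
    trial[util.argmin()] = X[d.min(1).argmax()] + 1e-3
    trial, new_err = lbg(trial, X)
    if new_err < err:                         # keep strict improvements only
        code, err = trial, new_err
```

Because the codebook is only ever replaced on a strict improvement, the total error decreases monotonically, which is what underlies the finite-termination argument mentioned in the abstract.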
Exploration of Document Collections with Self-Organizing Maps: A Novel Approach to Similarity Representation
 In Proceedings of the European Symposium on Principles of Data Mining and Knowledge Discovery (PKDD'97)
, 1997
Cited by 20 (9 self)
Abstract:
Classification is one of the central issues in any system dealing with text data. The need for effective approaches is dramatically increased nowadays due to the advent of massive digital libraries containing free-form documents. What we are looking for are powerful methods for the exploration of such libraries, where the detection of similarities between the various text documents is the overall goal. In other words, methods that may be used to gain insight into the inherent structure of the various items contained in a text archive are needed. In this paper we demonstrate the applicability of self-organizing maps, a neural network model adhering to the unsupervised learning paradigm, to the task of text document clustering. In order to improve the representation of the result we present an extension to the basic learning rule that captures the movement of the various weight vectors in a two-dimensional output space for convenient visual inspection. The result of the extended traini...
Neural Maps and Topographic Vector Quantization
, 1999
Cited by 20 (4 self)
Abstract:
Neural maps combine the representation of data by codebook vectors, like a vector quantizer, with the property of topography, like a continuous function. While the quantization error is simple to compute and to compare between different maps, the topography of a map is difficult to define and to quantify. Yet topography of a neural map is an advantageous property, e.g. in the presence of noise in a transmission channel, in data visualization, and in numerous other applications. In this paper we review some conceptual aspects of definitions of topography, and some recently proposed measures to quantify topography. We apply the measures first to neural maps trained on synthetic data sets, and check the measures for properties like reproducibility, scalability, systematic dependence of the value of the measure on the topology of the map, etc. We then test the measures on maps generated for four real-world data sets: a chaotic time series, speech data, and two sets of image data. The measures ...