Results 1 - 4 of 4
Growing Cell Structures - A Self-Organizing Network for Unsupervised and Supervised Learning
Neural Networks, 1993
Abstract

Cited by 295 (11 self)
We present a new self-organizing neural network model having two variants. The first variant performs unsupervised learning and can be used for data visualization, clustering, and vector quantization. The main advantage over existing approaches, e.g., the Kohonen feature map, is the ability of the model to automatically find a suitable network structure and size. This is achieved through a controlled growth process which also includes occasional removal of units. The second variant of the model is a supervised learning method which results from the combination of the above-mentioned self-organizing network with the radial basis function (RBF) approach. In this model it is possible, in contrast to earlier approaches, to perform the positioning of the RBF units and the supervised training of the weights in parallel. Therefore, the current classification error can be used to determine where to insert new RBF units. This leads to small networks which generalize very well. Results on the t...
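The controlled growth process this abstract describes (adapt the best-matching unit toward each input, accumulate its quantization error, and periodically insert a new unit near the unit with the largest accumulated error) can be sketched as follows. This is an illustrative simplification, not the paper's method: the topological neighborhood and the occasional removal of units are omitted, and the function name `grow_cells` and all its parameters are hypothetical.

```python
import numpy as np

def grow_cells(data, n_units=20, insert_every=100, lr=0.05, seed=0):
    """Toy growth-by-error loop: move the best-matching unit toward each
    sample, accumulate its squared quantization error, and periodically
    insert a new unit next to the unit with the largest accumulated error."""
    rng = np.random.default_rng(seed)
    units = [data[rng.integers(len(data))].astype(float)]
    errors = [0.0]
    step = 0
    while len(units) < n_units:
        x = data[rng.integers(len(data))]
        dists = np.linalg.norm(np.asarray(units) - x, axis=1)
        best = int(np.argmin(dists))
        errors[best] += float(dists[best]) ** 2   # local error counter
        units[best] += lr * (x - units[best])     # adapt the winner
        step += 1
        if step % insert_every == 0:              # growth step
            worst = int(np.argmax(errors))
            units.append(units[worst] + rng.normal(0.0, 0.01, size=x.shape))
            errors[worst] /= 2.0                  # split the error budget
            errors.append(errors[worst])
    return np.asarray(units)
```

Because insertion is driven by accumulated error rather than a fixed grid, units end up concentrated where the data need them, which is the property the abstract contrasts with the fixed-size Kohonen map.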
A Growing And Splitting Elastic Network For Vector Quantization
Proc. IEEE NNSP, 1993
Abstract

Cited by 2 (0 self)
A new vector quantization method is proposed which incrementally generates a suitable codebook. During the generation process new vectors are inserted in areas of the input vector space where the quantization error is especially high. A one-dimensional topological neighborhood makes it possible to interpolate new vectors from existing ones. Vectors not contributing to error minimization are removed. After the desired number of vectors is reached, a stochastic approximation phase fine-tunes the codebook. The final quality of the codebooks is exceptional. A comparison with two well-known methods for vector quantization was performed by solving an image compression problem. The results indicate that the new method is clearly superior to both other approaches.

INTRODUCTION The huge amount of data in many technical applications (e.g. HDTV) makes it necessary to compress the original data before sending them over limited-bandwidth channels. Usually correlations and structure in the data are...
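As a rough illustration of the incremental scheme this abstract describes (grow the codebook by interpolating a new code vector between the highest-error vector and one of its neighbors in the one-dimensional chain), a minimal sketch follows. The name `grow_codebook` and its parameters are hypothetical, and the removal and stochastic-approximation phases are omitted.

```python
import numpy as np

def grow_codebook(data, size=16, seed=0):
    """Toy incremental codebook: after each full pass over the data,
    interpolate a new code vector between the vector with the highest
    accumulated quantization error and its neighbor in the 1-D chain."""
    rng = np.random.default_rng(seed)
    book = [row.astype(float) for row in
            data[rng.choice(len(data), size=2, replace=False)]]
    while len(book) < size:
        err = np.zeros(len(book))
        for x in data:                         # one pass: assign and score
            d = np.linalg.norm(np.asarray(book) - x, axis=1)
            err[int(np.argmin(d))] += float(d.min()) ** 2
        w = int(np.argmax(err))                # highest-error vector
        nb = w - 1 if w > 0 else w + 1         # its chain neighbor
        book.insert(max(w, nb), 0.5 * (book[w] + book[nb]))  # interpolate
    return np.asarray(book)
```

Interpolating between chain neighbors, rather than placing new vectors at random, is what the one-dimensional topological neighborhood in the abstract makes possible.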
Outlier management in intelligent data analysis
, 2000
Abstract

Cited by 2 (0 self)
In spite of many statistical methods for outlier detection and for robust analysis, there is little work on further analysis of outliers themselves to determine their origins. For example, there are “good” outliers that provide useful information that can lead to the discovery of new knowledge, or “bad” outliers that include noisy data points. Successfully distinguishing between different types of outliers is an important issue in many applications, including fraud detection, medical tests, process analysis and scientific discovery. It requires not only an understanding of the mathematical properties of data but also relevant knowledge in the domain context in which the outliers occur. This thesis presents a novel attempt at automating the use of domain knowledge to help distinguish between different types of outliers. Two complementary knowledge-based outlier analysis strategies are proposed: one using knowledge regarding how “normal data” should be distributed in a domain of interest in order to identify “good” outliers, and the other using the understanding of “bad” outliers. This kind of knowledge-based outlier analysis is a useful extension to existing work in both the statistical and computing communities on outlier detection.
String Tightening as a Self-Organizing Phenomenon

Abstract

The phenomenon of self-organization has been of special interest to the neural network community throughout the last couple of decades. In this paper, we study a variant of the Self-Organizing Map (SOM) that models the self-organization of the particles forming a string when the string is tightened from one or both of its ends. The proposed variant, called the String Tightening Self-Organizing Neural Network (STON), can be used to solve certain practical problems, such as computation of shortest homotopic paths, smoothing paths to avoid sharp turns, and computation of convex hulls. These problems are of considerable interest in computational geometry, robotics path-planning, AI (diagrammatic reasoning), VLSI routing, and geographical information systems. Given a set of obstacles and a string with two fixed terminal points in a two-dimensional space, the STON model continuously tightens the given string until the unique shortest configuration in terms of the Euclidean metric is reached. STON minimizes the total length of the string on convergence by dynamically creating and selecting feature vectors in a competitive manner. A proof of correctness of this anytime algorithm and experimental results obtained by its deployment are presented in the paper.
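In the obstacle-free special case, the tightening dynamics this abstract describes reduce to repeatedly pulling each interior particle toward the midpoint of its chain neighbors while the two terminal points stay fixed; the polyline then converges to the straight segment, i.e. the unique shortest Euclidean configuration. A minimal sketch of that special case follows; the function names `tighten` and `length` are hypothetical, and obstacle handling and the competitive creation of feature vectors are omitted.

```python
import numpy as np

def tighten(points, iters=500, step=0.5):
    """Obstacle-free string tightening: each interior particle moves a
    fraction `step` of the way toward the midpoint of its two chain
    neighbors; the endpoints never move."""
    pts = np.asarray(points, dtype=float)
    for _ in range(iters):
        mid = 0.5 * (pts[:-2] + pts[2:])       # neighbor midpoints
        pts[1:-1] += step * (mid - pts[1:-1])  # pull interior points inward
    return pts

def length(pts):
    """Total polyline length under the Euclidean metric."""
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())
```

For example, tightening the zig-zag `[[0, 0], [1, 1], [2, -1], [3, 1], [4, 0]]` pulls the interior particles onto the x-axis, and `length` shrinks from about 5.66 toward 4, the straight-line distance between the fixed terminals.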