Results 1–10 of 79
A Survey of Fuzzy Clustering Algorithms for Pattern Recognition, Part II
Abstract

Cited by 67 (2 self)
the concepts of fuzzy clustering and soft competitive learning in clustering algorithms is proposed on the basis of the existing literature. Moreover, a set of functional attributes is selected for use as dictionary entries in the comparison of clustering algorithms. In this paper, five clustering algorithms taken from the literature are reviewed, assessed and compared on the basis of the selected properties of interest. These clustering models are 1) self-organizing map (SOM); 2) fuzzy learning vector quantization (FLVQ); 3) fuzzy adaptive resonance theory (fuzzy ART); 4) growing neural gas (GNG); 5) fully self-organizing simplified adaptive resonance theory (FOSART). Although our theoretical comparison is fairly simple, it yields observations that may appear paradoxical. First, only FLVQ, fuzzy ART, and FOSART exploit concepts derived from fuzzy set theory (e.g., relative and/or absolute fuzzy membership functions). Secondly, only SOM, FLVQ, GNG, and FOSART employ soft competitive learning mechanisms, which are affected by asymptotic misbehaviors in the case of FLVQ, i.e., only SOM, GNG, and FOSART are considered effective fuzzy clustering algorithms. Index Terms—Ecological net, fuzzy clustering, modular architecture, relative and absolute membership function, soft and hard competitive learning, topologically correct mapping. I.
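The relative fuzzy membership functions named in the index terms can be illustrated with a minimal fuzzy-c-means-style computation (illustrative only, not code from the survey; the fuzzifier m and the example prototypes are arbitrary choices):

```python
import numpy as np

def relative_memberships(x, prototypes, m=2.0):
    """FCM-style relative memberships of sample x to each prototype.

    They are 'relative' because they are normalized to sum to 1 across
    prototypes; m > 1 is the fuzzifier controlling how soft they are."""
    d = np.linalg.norm(prototypes - x, axis=1) + 1e-12  # avoid div by zero
    u = d ** (-2.0 / (m - 1.0))
    return u / u.sum()

protos = np.array([[0.0, 0.0], [1.0, 1.0]])   # hypothetical prototypes
u = relative_memberships(np.array([0.1, 0.1]), protos)
```

An absolute membership, by contrast, would depend only on the distance to one prototype and need not sum to 1 over the codebook.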
Vector Quantization with Complexity Costs
, 1993
Abstract

Cited by 59 (19 self)
Vector quantization is a data compression method where a set of data points is encoded by a reduced set of reference vectors, the codebook. We discuss a vector quantization strategy which jointly optimizes distortion errors and the codebook complexity, thereby determining the size of the codebook. A maximum entropy estimation of the cost function yields an optimal number of reference vectors, their positions and their assignment probabilities. The dependence of the codebook density on the data density for different complexity functions is investigated in the limit of asymptotic quantization levels. How different complexity measures influence the efficiency of vector quantizers is studied for the task of image compression, i.e., we quantize the wavelet coefficients of gray level images and measure the reconstruction error. Our approach establishes a unifying framework for different quantization methods like K-means clustering and its fuzzy version, entropy constrained vector quantizati...
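The maximum-entropy assignment probabilities mentioned above can be sketched as one soft codebook update (a generic deterministic-annealing-style step, not the paper's exact cost function; beta and the toy data are assumptions):

```python
import numpy as np

def soft_vq_step(X, codebook, beta=10.0):
    """One maximum-entropy vector quantization step: assignment
    probabilities p(j|x) proportional to exp(-beta * ||x - w_j||^2),
    followed by a weighted-centroid codebook update."""
    d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (n, k)
    logits = -beta * d2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)          # soft assignments per sample
    new_codebook = (p.T @ X) / p.sum(axis=0)[:, None]
    return new_codebook, p

X = np.vstack([np.zeros((5, 2)), np.ones((5, 2))])
cb, p = soft_vq_step(X, np.array([[0.1, 0.1], [0.9, 0.9]]))
```

As beta grows the assignments harden and the step approaches a plain K-means update; a complexity cost on codebook usage would be added to the exponent.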
Energy Functions for Self-Organizing Maps
, 1999
Abstract

Cited by 54 (1 self)
This paper is about the last issue. After people started to realize that there is no energy function for the Kohonen learning rule (in the continuous case), many attempts have been made to change the algorithm such that an energy can be defined, without drastically changing its properties. Here we will review a simple suggestion, which has been proposed and generalized in several different contexts. The advantage over some other attempts is its simplicity: we only need to redefine the determination of the winning ("best matching") unit. The energy function and corresponding learning algorithm are introduced in Section 2. We give two proofs that there is indeed a proper energy function. The first one, in Section 3, is based on explicit computation of derivatives. The second one, in Section 4, follows from a limiting case of a more general (free) energy function derived in a probabilistic setting. The energy formalism allows for a direct interpretation of disordered configurations in terms of local minima, two examples of which are treated in Section 5.
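The redefined winner the abstract refers to is commonly presented as minimizing a neighborhood-weighted sum of squared distances rather than the plain nearest-unit distance; a minimal sketch (the neighborhood matrix h is an assumption, not taken from the paper):

```python
import numpy as np

def energy_winner(x, weights, h):
    """Winner selection compatible with an energy function: pick the unit
    whose neighborhood-weighted local error sum_j h[i, j] * ||x - w_j||^2
    is smallest, instead of the unit nearest to x.

    weights: (k, d) codebook; h: (k, k) neighborhood couplings."""
    d2 = ((weights - x) ** 2).sum(axis=1)   # squared distance to each unit
    local_error = h @ d2                    # weighted error per candidate
    return int(np.argmin(local_error))

w = np.array([[0.0], [1.0], [2.0]])
```

With h equal to the identity this reduces to the ordinary best-matching-unit rule, which is why the modification changes the algorithm's properties only mildly.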
Controlling the Magnification Factor of Self-Organizing Feature Maps
, 1995
Abstract

Cited by 52 (7 self)
The magnification exponents μ occurring in adaptive map formation algorithms like Kohonen's self-organizing feature map deviate from the information-theoretically optimal value μ = 1 as well as from the values which optimize, e.g., the mean square distortion error (μ = 1/3 for one-dimensional maps). At the same time, models for categorical perception such as the "perceptual magnet" effect, which are based on topographic maps, require negative magnification exponents μ < 0. We present an extension of the self-organizing feature map algorithm which utilizes adaptive local learning step sizes to actually control the magnification properties of the map. By changing a single parameter, maps with optimal information transfer, with various minimal reconstruction errors, or with an inverted magnification can be generated. Analytic results on this new algorithm are complemented by numerical simulations. 1. Introduction The representation of information in topographic maps is a common property of...
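The adaptive local learning step sizes can be sketched as per-unit rates scaled by a power of an estimated density, which is the control mechanism the abstract describes (a 1-D sketch; eps0, sigma, and the per-unit density estimate p_hat are assumptions, not the paper's parameters):

```python
import numpy as np

def adaptive_rate_update(x, weights, p_hat, m, eps0=0.1, sigma=1.0):
    """One SOM update with local learning rates eps_i = eps0 * p_hat[i]**m.
    The exponent m is the single control parameter: varying it steers the
    map's magnification behavior (p_hat is an assumed density estimate
    at each reference vector)."""
    k = weights.shape[0]
    win = int(np.argmin(((weights - x) ** 2).sum(axis=1)))   # winner unit
    grid = np.arange(k)                                      # 1-D topology
    h = np.exp(-((grid - win) ** 2) / (2 * sigma ** 2))      # neighborhood
    eps = eps0 * p_hat ** m                                  # local steps
    return weights + (eps * h)[:, None] * (x - weights)

w0 = np.zeros((5, 2))
p_hat = np.full(5, 0.2)                 # hypothetical uniform density
w1 = adaptive_rate_update(np.array([1.0, 1.0]), w0, p_hat, m=1.0)
```

With m = 0 the rule is the standard SOM; nonzero m rescales updates in dense versus sparse regions and thereby shifts the effective magnification exponent.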
Dynamic self-organizing maps with controlled growth for knowledge discovery
 IEEE Transactions on Neural Networks
, 2000
Abstract

Cited by 47 (2 self)
Abstract—The growing self-organizing map (GSOM) has been presented as an extended version of the self-organizing map (SOM), which has significant advantages for knowledge discovery applications. In this paper, the GSOM algorithm is presented in detail and the effect of a spread factor, which can be used to measure and control the spread of the GSOM, is investigated. The spread factor is independent of the dimensionality of the data and as such can be used as a controlling measure for generating maps with different dimensionality, which can then be compared and analyzed with better accuracy. The spread factor is also presented as a method of achieving hierarchical clustering of a data set with the GSOM. Such hierarchical clustering allows the data analyst to identify significant and interesting clusters at a higher level of the hierarchy, and as such continue with finer clustering of only the interesting clusters. Therefore, only a small map is created in the beginning with a low spread factor, which can be generated for even a very large data set. Further analysis is conducted on selected sections of the data and as such of smaller volume. Therefore, this method facilitates the analysis of even very large data sets. Index Terms—Clustering methods, hierarchical systems, knowledge discovery, neural networks, self-organizing feature maps, unsupervised learning. I.
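The spread factor is usually tied to node growth through a threshold on accumulated quantization error; a hedged sketch of the formula commonly quoted for the GSOM, GT = -D * ln(SF) (treat the exact form as reproduced from the GSOM literature, not verified against this paper):

```python
import math

def growth_threshold(dim, spread_factor):
    """Growth threshold as commonly stated for the GSOM: GT = -D * ln(SF),
    where D is the data dimensionality and SF in (0, 1) is the spread
    factor. A node spawns new neighbors once its accumulated error
    exceeds GT; smaller SF means a larger threshold, hence a smaller,
    coarser map (sketch, not the paper's own code)."""
    assert 0.0 < spread_factor < 1.0
    return -dim * math.log(spread_factor)
```

Note how the dependence on D makes the user-facing parameter SF dimensionality-independent, which is the property the abstract emphasizes.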
Self-Organizing Maps on Non-Euclidean Spaces
 Kohonen Maps
, 1999
Abstract

Cited by 33 (4 self)
INTRODUCTION The Self-Organizing Map, as introduced by Kohonen more than a decade ago, has stimulated an enormous body of work in a broad range of applied and theoretical fields, including pattern recognition, brain theory, biological modeling, mathematics, signal processing, data mining and many more [8]. Much of this impressive success is owed to the combination of elegant simplicity in the SOM's algorithmic formulation, together with a high ability to produce useful answers for a wide variety of applied data processing tasks and even to provide a good model of important aspects of structure formation processes in neural systems. While the applications of the SOM are extremely widespread, the majority of uses still follow the original motivation of the SOM: to create dimension-reduced "feature maps" for various uses, most prominently perhaps for the purpose of data visualization. The suitability of the SOM for this task has been analyzed in great detail and linked to earlier
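The "elegant simplicity" referred to here is the standard Kohonen update; a minimal 1-D sketch (a non-Euclidean variant would replace the grid distance |i - win| with, e.g., a hyperbolic metric on the lattice):

```python
import numpy as np

def train_som(X, k=10, epochs=20, eps=0.1, sigma=2.0, seed=0):
    """Minimal 1-D Kohonen SOM: for each sample, find the best-matching
    unit, then pull every unit toward the sample with a strength that
    decays with grid distance from the winner."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((k, X.shape[1]))
    grid = np.arange(k)
    for _ in range(epochs):
        for x in X:
            win = int(np.argmin(((w - x) ** 2).sum(axis=1)))
            h = np.exp(-((grid - win) ** 2) / (2 * sigma ** 2))
            w += eps * h[:, None] * (x - w)
    return w

rng = np.random.default_rng(1)
data = rng.random((200, 1))
w = train_som(data, k=5, epochs=10)
```

Everything specific to the embedding space lives in the two distance computations, which is why swapping in a non-Euclidean lattice metric is a local change.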
Learning Control of Robot Manipulators
, 1993
Abstract

Cited by 33 (1 self)
Learning control encompasses a class of control algorithms for programmable machines such as robots which attain, through an iterative process, the motor dexterity that enables the machine to execute complex tasks. In this paper we discuss the use of function identification and adaptive control algorithms in learning controllers for robot manipulators. In particular, we discuss the similarities and differences between betterment learning schemes, repetitive controllers and adaptive learning schemes based on integral transforms. The stability and convergence properties of adaptive learning algorithms based on integral transforms are highlighted and experimental results illustrating some of these properties are presented. Key words: Learning control, adaptive control, repetitive control, robotics. 1 Introduction The emulation of human learning has long been among the most sought after and elusive goals in robotics and artificial intelligence. Many aspects of human learning are still not...
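The iterative improvement common to the schemes discussed can be sketched as a betterment-style update that corrects the stored input with the previous trial's tracking error (a toy memoryless plant; real schemes filter, shift, or differentiate the error, and stability depends on the plant):

```python
import numpy as np

def ilc_trial(u, y_ref, plant, gamma=0.4):
    """One betterment-style iterative-learning-control trial:
    u_next = u + gamma * (y_ref - plant(u)).
    The input trajectory u is refined from trial to trial using the
    measured tracking error e (sketch, not any paper's exact law)."""
    e = y_ref - plant(u)
    return u + gamma * e, e

plant = lambda u: 2.0 * u        # hypothetical static plant with gain 2
u = np.zeros(5)                  # initial input trajectory
y_ref = np.ones(5)               # desired output trajectory
for _ in range(20):              # error contracts by |1 - gamma*gain| = 0.2
    u, e = ilc_trial(u, y_ref, plant)
```

Here the error shrinks geometrically across trials and u converges to 0.5, the input that makes the gain-2 plant track the reference.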
Rapid Learning with Parametrized Self-Organizing Maps
 Neurocomputing
, 1995
Abstract

Cited by 33 (17 self)
The construction of computer vision and robot control algorithms from training data is a challenging application for artificial neural networks. However, many practical applications require an approach that is workable with a small number of data examples. In this contribution, we describe results on the use of "Parametrized Self-Organizing Maps" ("PSOMs") with this goal in mind. We report results demonstrating that a small number of labeled training images is sufficient to construct PSOMs that identify the position of finger tips in images of 3D hand shapes to within an accuracy of a few pixels. Further, we present a framework of hierarchical PSOMs that allows rapid "one-shot learning" after acquiring a number of "basis mappings" during a previous "investment learning stage". We demonstrate the potential of this approach with the task of constructing the position-dependent mapping from camera coordinates to the work space coordinates of a Puma robot. 1 Introduction Lear...
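The PSOM idea of turning discrete SOM nodes into a continuous mapping can be sketched with polynomial basis functions through the reference vectors (a 1-D Lagrange-basis sketch; the papers use multi-dimensional grids and their own basis construction):

```python
import numpy as np

def psom_manifold(s, nodes, weights):
    """Evaluate a continuous map w(s) through reference vectors using
    Lagrange basis polynomials over the grid coordinates: the basic PSOM
    idea of replacing a discrete codebook with a smooth manifold that
    interpolates the nodes exactly."""
    basis = []
    for i, si in enumerate(nodes):
        b = 1.0
        for j, sj in enumerate(nodes):
            if i != j:
                b *= (s - sj) / (si - sj)   # Lagrange basis polynomial
        basis.append(b)
    return np.array(basis) @ weights

nodes = np.array([0.0, 0.5, 1.0])           # grid coordinates
weights = np.array([[0.0], [0.25], [1.0]])  # hypothetical reference vectors
```

Because the manifold passes exactly through each node, only a few labeled examples (one per node) are needed, which is what makes the rapid-learning claim plausible.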
Principal surfaces from unsupervised kernel regression
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2005
Abstract

Cited by 22 (13 self)
Abstract—We propose a nonparametric approach to learning of principal surfaces based on an unsupervised formulation of the Nadaraya-Watson kernel regression estimator. As compared with previous approaches to principal curves and surfaces, the new method offers several advantages: First, it provides a practical solution to the model selection problem because all parameters can be estimated by leave-one-out cross-validation without additional computational cost. In addition, our approach allows for a convenient incorporation of nonlinear spectral methods for parameter initialization, beyond classical initializations based on linear PCA. Furthermore, it shows a simple way to fit principal surfaces in general feature spaces, beyond the usual data space setup. The experimental results illustrate these convenient features on simulated and real data. Index Terms—Dimensionality reduction, principal curves, principal surfaces, density estimation, model selection, kernel methods. 1
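The Nadaraya-Watson estimator underlying the approach, in its plain supervised form (in the unsupervised formulation the latent coordinates t_i would themselves be optimized; here they are fixed, so this is only the forward mapping):

```python
import numpy as np

def nadaraya_watson(t, latent, Y, h=0.3):
    """Nadaraya-Watson kernel regression with a Gaussian kernel:
    f(t) = sum_i K((t - t_i)/h) * y_i / sum_i K((t - t_i)/h).
    latent holds the coordinates t_i, Y the observed values, h the
    bandwidth (all values here are illustrative)."""
    k = np.exp(-0.5 * ((t - latent) / h) ** 2)  # kernel weights
    return (k @ Y) / k.sum()

latent = np.array([0.0, 1.0])
Y = np.array([0.0, 1.0])
```

Unsupervised kernel regression treats the latent t_i as free parameters and minimizes the reconstruction error of the data under this smoother, which is what yields a principal curve or surface.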
Neural Maps and Topographic Vector Quantization
, 1999
Abstract

Cited by 22 (4 self)
Neural maps combine the representation of data by codebook vectors, like a vector quantizer, with the property of topography, like a continuous function. While the quantization error is simple to compute and to compare between different maps, the topography of a map is difficult to define and to quantify. Yet, topography of a neural map is an advantageous property, e.g. in the presence of noise in a transmission channel, in data visualization, and in numerous other applications. In this paper we review some conceptual aspects of definitions of topography, and some recently proposed measures to quantify topography. We apply the measures first to neural maps trained on synthetic data sets, and check the measures for properties like reproducibility, scalability, systematic dependence of the value of the measure on the topology of the map, etc. We then test the measures on maps generated for four real-world data sets: a chaotic time series, speech data, and two sets of image data. The measures ...
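The contrast the abstract draws, quantization error easy and topography hard, can be made concrete with a toy pair of measures (the "topographic error" below is one simple choice among the measures such papers compare, not necessarily theirs):

```python
import numpy as np

def map_quality(X, weights):
    """Two map-quality numbers for a 1-D map: the mean quantization error
    (distance to the best-matching unit, trivial to compute), and a
    simple topographic error, the fraction of samples whose two
    best-matching units are not adjacent on the grid."""
    d2 = ((X[:, None, :] - weights[None, :, :]) ** 2).sum(-1)  # (n, k)
    order = np.argsort(d2, axis=1)                # units ranked per sample
    qe = np.sqrt(d2[np.arange(len(X)), order[:, 0]]).mean()
    te = float(np.mean(np.abs(order[:, 0] - order[:, 1]) != 1))
    return qe, te

grid_w = np.array([[0.0], [1.0], [2.0], [3.0]])   # well-ordered toy map
qe, te = map_quality(np.array([[0.1], [1.1], [2.1]]), grid_w)
```

A perfectly ordered map gives te = 0; the conceptual difficulties the paper reviews arise because such a single number conflates map topology, data topology, and noise.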