Deterministic Annealing for Clustering, Compression, Classification, Regression, and Related Optimization Problems
 Proceedings of the IEEE
, 1998
Abstract

Cited by 248 (11 self)
this paper. Let us place it within the neural network perspective, and particularly that of learning. The area of neural networks has greatly benefited from its unique position at the crossroads of several diverse scientific and engineering disciplines including statistics and probability theory, physics, biology, control and signal processing, information theory, complexity theory, and psychology (see [45]). Neural networks have provided a fertile soil for the infusion (and occasionally confusion) of ideas, as well as a meeting ground for comparing viewpoints, sharing tools, and renovating approaches. It is within the ill-defined boundaries of the field of neural networks that researchers in traditionally distant fields have come to the realization that they have been attacking fundamentally similar optimization problems.
Object indexing using an iconic sparse distributed memory
, 1995
Abstract

Cited by 60 (9 self)
A general-purpose object indexing technique is described that combines the virtues of principal component analysis with the favorable matching properties of high-dimensional spaces to achieve high precision recognition. An object is represented by a set of high-dimensional iconic feature vectors comprised of the responses of derivative of Gaussian filters at a range of orientations and scales. Since these filters can be shown to form the eigenvectors of arbitrary images containing both natural and man-made structures, they are well-suited for indexing in disparate domains. The indexing algorithm uses an active vision system in conjunction with a modified form of Kanerva’s sparse distributed memory which facilitates interpolation between views and provides a convenient platform for learning the association between an object’s appearance and its identity. The robustness of the indexing method was experimentally confirmed by subjecting the method to a range of viewing conditions and the accuracy was verified using a well-known model database containing a number of complex 3D objects under varying pose.
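The iconic representation described in this abstract can be sketched roughly as below. The function names (`gaussian_deriv_kernel`, `iconic_feature_vector`), the particular scales, and the orientation count are all illustrative assumptions, not the paper's actual parameters; only the idea of stacking derivative-of-Gaussian responses over orientations and scales is taken from the abstract.

```python
import numpy as np

def gaussian_deriv_kernel(sigma, theta, size=9):
    """First-order derivative-of-Gaussian kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    u = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinate
    g = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    k = -u / sigma ** 2 * g                        # derivative along u
    return k / np.abs(k).sum()                     # scale-normalized

def iconic_feature_vector(patch, scales=(1.0, 2.0, 4.0), n_orient=4):
    """One high-dimensional iconic vector: the response of every
    (scale, orientation) filter centered on the given square patch."""
    feats = []
    for s in scales:
        for i in range(n_orient):
            k = gaussian_deriv_kernel(s, np.pi * i / n_orient,
                                      size=patch.shape[0])
            feats.append(float((patch * k).sum()))
    return np.array(feats)
```

A constant patch yields a (near-)zero vector, since each derivative kernel is antisymmetric and sums to zero; structure in the patch is what produces nonzero responses.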
Median Radial Basis Functions Neural Network
 IEEE Trans. on Neural Networks
, 1996
Abstract

Cited by 28 (15 self)
A Radial Basis Functions (RBF) network is a two-layer neural network in which each hidden unit implements a kernel function. Each kernel is associated with an activation region from the input space and its output is fed to an output unit. In order to find the parameters of a neural network which embeds this structure we take into consideration two different statistical approaches. The first approach uses classical estimation in the learning stage and is based on the learning vector quantization algorithm and its second-order statistics extension. After the presentation of this approach, we introduce the Median Radial Basis Functions (MRBF) algorithm based on robust estimation of the hidden unit parameters. The proposed algorithm employs the marginal median for kernel location estimation and the median of the absolute deviations for the scale parameter estimation. A histogram-based fast implementation is provided for the MRBF algorithm. The theoretical performance of the two training al...
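The two robust estimators named in the abstract, the marginal median for kernel location and the median of absolute deviations for scale, can be sketched as follows. The function name `median_kernel_params` is hypothetical, and this is only the estimation step, not the full MRBF training loop.

```python
import numpy as np

def median_kernel_params(samples):
    """Robust location/scale estimates for one hidden unit from the
    samples assigned to it: marginal (per-coordinate) median for the
    kernel center, median absolute deviation (MAD) for the scale."""
    X = np.asarray(samples, dtype=float)            # shape (n, d)
    center = np.median(X, axis=0)                   # marginal median
    scale = np.median(np.abs(X - center), axis=0)   # MAD per dimension
    return center, scale
```

Unlike the sample mean and standard deviation used in classical estimation, both statistics have a 50% breakdown point, so a few outlying samples barely perturb the kernel parameters.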
The Minimax Distortion Redundancy in Empirical Quantizer Design
 IEEE TRANSACTIONS ON INFORMATION THEORY
, 1997
Abstract

Cited by 26 (6 self)
We obtain minimax lower and upper bounds for the expected distortion redundancy of empirically designed vector quantizers. We show that the mean squared distortion of a vector quantizer designed from n i.i.d. data points using any design algorithm is at least Ω(n^{-1/2}) away from the optimal distortion for some distribution on a bounded subset of R^d. Together with existing upper bounds, this result shows that the minimax distortion redundancy for empirical quantizer design, as a function of the size of the training data, is asymptotically on the order of n^{-1/2}. We also derive a new upper bound for the performance of the empirically optimal quantizer.
Self-Organizing Maps, Vector Quantization, and Mixture Modeling
 IEEE Transactions on Neural Networks
, 2001
Abstract

Cited by 25 (0 self)
Self-organizing maps are popular algorithms for unsupervised learning and data visualization. Exploiting the link between vector quantization and mixture modeling, we derive EM algorithms for self-organizing maps with and without missing values. We compare self-organizing maps with the elastic-net approach and explain why the former is better suited for the visualization of high-dimensional data. Several extensions and improvements are discussed. As an illustration we apply a self-organizing map based on a multinomial distribution to market basket analysis.
I. Introduction
Self-organizing maps are popular tools for clustering and visualization of high-dimensional data [1], [2]. The well-known Kohonen learning algorithm can be interpreted as a variant of vector quantization with additional lateral interactions [3], [4]. The addition of lateral interaction between units introduces a sense of topology, such that neighboring units represent inputs that are close together in input space [...
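The interpretation in this abstract, vector quantization plus lateral interactions, can be illustrated with a single Kohonen update step. The name `som_step` and the Gaussian neighborhood form are illustrative assumptions; the paper itself derives EM algorithms rather than this online rule.

```python
import numpy as np

def som_step(weights, grid, x, lr, sigma):
    """One Kohonen update: find the best-matching unit (plain vector
    quantization), then pull every unit toward x weighted by a Gaussian
    neighborhood on the map grid (the lateral interaction)."""
    bmu = int(np.argmin(((weights - x) ** 2).sum(axis=1)))
    d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)   # map-space distances
    h = np.exp(-d2 / (2 * sigma ** 2))           # neighborhood function
    weights += lr * h[:, None] * (x - weights)
    return weights
```

As sigma shrinks toward zero, h becomes nonzero only at the winning unit and the rule degenerates into ordinary competitive vector quantization; the neighborhood term is what induces the topology-preserving map.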
Multiple-prototype classifier design
 IEEE Trans. Syst., Man, Cybern. B
, 1998
Abstract

Cited by 24 (0 self)
Abstract—Five methods that generate multiple prototypes from labeled data are reviewed. Then we introduce a new sixth approach, which is a modification of Chang’s method. We compare the six methods with two standard classifier designs: the 1-nearest prototype (1-np) and 1-nearest neighbor (1-nn) rules. The standard of comparison is the resubstitution error rate; the data used are the Iris data. Our modified Chang’s method produces the best consistent (zero errors) design. One of the competitive learning models produces the best minimal prototypes design (five prototypes that yield three resubstitution errors).
Index Terms—Competitive learning, Iris data, modified Chang’s method (MCA), multiple prototypes, nearest neighbor
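The 1-nearest-prototype baseline against which the abstract compares its designs is simple enough to state in a few lines. The name `one_np` is hypothetical; the prototypes and labels below are toy values, not the Iris prototypes from the paper.

```python
import numpy as np

def one_np(x, prototypes, labels):
    """1-nearest-prototype (1-np) rule: a point receives the label of
    its closest prototype under squared Euclidean distance."""
    d = ((prototypes - x) ** 2).sum(axis=1)
    return labels[int(np.argmin(d))]
```

The resubstitution error rate used as the standard of comparison is then just the fraction of training points that this rule misclassifies when the prototypes were built from those same points.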
Order Statistics Learning Vector Quantizer
 IEEE Trans. on Image Processing
, 1995
Abstract

Cited by 15 (11 self)
In this correspondence, we propose a novel class of Learning Vector Quantizers (LVQs) based on multivariate data ordering principles. A special case of the novel LVQ class is the Median LVQ, which uses either the marginal median or the vector median as a multivariate estimator of location. The performance of the proposed marginal median LVQ in color image quantization is demonstrated by experiments.
1 Introduction
Neural networks (NNs) [1, 2] are a rapidly expanding research field which has attracted the attention of scientists and engineers in the last decade. A large variety of artificial neural networks has been developed based on a multitude of learning techniques and having different topologies [2]. One prominent example of neural networks is the Learning Vector Quantizer (LVQ). It is an auto-associative nearest-neighbor classifier which classifies arbitrary patterns into classes using an error correction encoding procedure related to competitive learning [1]. In order to make a distinct...
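Of the two location estimators named in the abstract, the vector median is the less familiar: it is the sample that minimizes the summed Euclidean distance to all the other samples, and so, unlike the marginal median, it is always one of the input vectors. A brute-force sketch (the name `vector_median` is an assumption, and this O(n²) form is for illustration only):

```python
import numpy as np

def vector_median(X):
    """Vector median: the input sample minimizing the total Euclidean
    distance to all samples. Always returns one of the inputs."""
    X = np.asarray(X, dtype=float)
    # pairwise distance matrix via broadcasting, shape (n, n)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return X[int(np.argmin(d.sum(axis=1)))]
```

Like the marginal median, it is robust to outliers, which is what makes it attractive as the location update inside an LVQ.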
Learning Navigational Behaviors using a Predictive Sparse Distributed Memory
, 1996
Abstract

Cited by 14 (1 self)
We describe a general framework for the acquisition of perception-based navigational behaviors in autonomous mobile robots. A self-organizing sparse distributed memory equivalent to a three-layered neural network is used to learn the desired transfer function mapping sensory input into motor commands. The memory is initially trained by teleoperating the robot on a small number of paths within a given domain of interest. During training, the vectors in the sensory space as well as the motor space are continually adapted using a form of competitive learning to yield basis vectors aimed at efficiently spanning the sensorimotor space. After training, the robot navigates from arbitrary locations to a desired goal location using motor output vectors computed by a saliency-based weighted averaging scheme. The pervasive problem of perceptual aliasing in non-Markov environments is handled by allowing both the current as well as the set of immediately preceding perceptual inputs to predict the motor output vector for the current time instant. Simulation results obtained for a mobile robot, equipped with simple photoreceptors and infrared receivers, navigating within an enclosed obstacle-ridden arena indicate that the method performs successfully in a variety of navigational tasks, some of which exhibit substantial perceptual aliasing.
Eye Movements in Visual Cognition: A Computational Study
, 1997
Abstract

Cited by 12 (0 self)
Visual cognition depends critically on the moment-to-moment orientation of gaze. Gaze is changed by saccades, rapid eye movements that orient the fovea over targets of interest in a visual scene. Saccades are ballistic; a pre-specified target location is computed prior to the movement and visual feedback is precluded. Once a target is fixated, gaze is typically held for about 300 milliseconds, although it can be held for both longer and shorter intervals. Despite these distinctive properties, there has been no specific computational model of the gaze targeting strategy employed by the human visual system during visual cognitive tasks. This paper proposes such a model that uses iconic scene representations derived from oriented spatiochromatic filters at multiple scales. Visual search for a target object proceeds in a coarse-to-fine fashion with the target's largest scale filter responses being compared first. Task-relevant target locations are represented as saliency maps which are used...
Diffusion approximation of frequency sensitive competitive learning
 IEEE TRANS. NEURAL NETWORKS
, 1997
Abstract

Cited by 10 (0 self)
The focus of this paper is a convergence study of the frequency sensitive competitive learning (FSCL) algorithm. We approximate the final phase of FSCL learning by a diffusion process described by a Fokker–Planck equation. Sufficient and necessary conditions are presented for the convergence of the diffusion process to a local equilibrium. The analysis parallels that by Ritter and Schulten for Kohonen’s self-organizing map (SOM). We show that the convergence conditions involve only the learning rate and that they are the same as the conditions for weak convergence described previously. Our analysis thus broadens the class of algorithms that have been shown to have these types of convergence characteristics.
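For readers unfamiliar with FSCL, one common form of the update whose convergence the paper studies can be sketched as below: each unit's distortion is scaled by how often it has already won, so over-used units win less often and dead units get recruited. The name `fscl_step` and the linear win-count penalty are illustrative assumptions; the paper analyzes the algorithm, not this particular implementation.

```python
import numpy as np

def fscl_step(weights, counts, x, lr):
    """One frequency-sensitive competitive learning step: the winner is
    chosen by win-count-weighted distortion, then its count is bumped
    and its weight vector moves toward the input x."""
    d = counts * ((weights - x) ** 2).sum(axis=1)  # frequency-weighted
    w = int(np.argmin(d))
    counts[w] += 1
    weights[w] += lr * (x - weights[w])
    return weights, counts
```

With all counts equal this reduces to plain competitive learning; the count factor is what equalizes the win frequencies of the units over time.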