Results 11-20 of 113
Performance evaluation of pattern classifiers for handwritten character recognition
International Journal on Document Analysis and Recognition, 2002
"... Abstract. This paper describes a performance evaluation study in which some efficient classifiers are tested in handwritten digit recognition. The evaluated classifiers include a statistical classifier (modified quadratic discriminant function, MQDF), three neural classifiers, and an LVQ (learning v ..."
Abstract

Cited by 35 (3 self)
 Add to MetaCart
(Show Context)
This paper describes a performance evaluation study in which several efficient classifiers are tested on handwritten digit recognition. The evaluated classifiers include a statistical classifier (the modified quadratic discriminant function, MQDF), three neural classifiers, and an LVQ (learning vector quantization) classifier. They are efficient in that high accuracies can be achieved at moderate memory and computation cost. Performance is measured in terms of classification accuracy, sensitivity to training sample size, ambiguity rejection, and outlier resistance. The outlier resistance of the neural classifiers is enhanced by training with synthesized outlier data. The classifiers are tested on a large data set extracted from NIST SD19. The test accuracies of the evaluated classifiers are comparable to or higher than those of the nearest neighbor (1-NN) rule and regularized discriminant analysis (RDA). Neural classifiers are shown to be more susceptible to small sample sizes than MQDF, although they yield higher accuracies when the sample size is large. Among the neural classifiers, the polynomial classifier (PC) gives the highest accuracy and performs best in ambiguity rejection, while MQDF is superior in outlier rejection even though it is not trained with outlier data. The results indicate that pattern classifiers have complementary advantages and should be appropriately combined to achieve higher performance.
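Since MQDF recurs in several of these entries, a minimal sketch of an MQDF-style discriminant may help. It is my own illustration in Python/NumPy, not code from the paper; the truncation order k and the minor-eigenvalue constant delta are assumed hyperparameters.

```python
import numpy as np

def fit_mqdf(X, k, delta):
    """Per-class MQDF parameters: mean plus the k largest eigenpairs
    of the class covariance; delta replaces the minor eigenvalues."""
    mu = X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    idx = np.argsort(evals)[::-1][:k]            # k principal directions
    return mu, evals[idx], evecs[:, idx], delta

def mqdf_score(x, params):
    """Quadratic discriminant value; classify x to the class
    whose parameters give the smallest score."""
    mu, lam, phi, delta = params
    d, k = phi.shape
    diff = x - mu
    proj = phi.T @ diff                          # principal-axis projections
    major = np.sum(proj ** 2 / lam)
    residual = (diff @ diff - np.sum(proj ** 2)) / delta
    return major + residual + np.sum(np.log(lam)) + (d - k) * np.log(delta)
```

Replacing the trailing d - k eigenvalues with a single constant is what keeps the memory and computation cost moderate relative to a full quadratic classifier.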
Bayes Risk Weighted Vector Quantization With Posterior Estimation for Image Compression and Classification
IEEE Transactions on Image Processing, 1996
"... Classification and compression play important roles in communicating digital information. Their combination is useful in many applications, including the detection of abnormalities in compressed medical images. In view of the similarities of compression and lowlevel classification, it is not surpri ..."
Abstract

Cited by 35 (12 self)
 Add to MetaCart
(Show Context)
Classification and compression play important roles in communicating digital information. Their combination is useful in many applications, including the detection of abnormalities in compressed medical images. In view of the similarities between compression and low-level classification, it is not surprising that there are many similar methods for their design. Because some of these methods are useful for designing vector quantizers, it is natural to explore vector quantization (VQ) for the combined goal. We investigate several VQ-based algorithms that seek to minimize both the distortion of compressed images and errors in classifying their pixel blocks. These algorithms are investigated with both full-search and tree-structured codes. We emphasize a nonparametric technique that minimizes both error measures simultaneously by incorporating a Bayes risk component into the distortion measure used for design and encoding. We introduce a tree-structured posterior estimator to produce t...
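The core design idea is compact enough to sketch. The following is a minimal illustration under assumptions of my own (0/1 misclassification costs, a plain full-search codebook): each codeword carries estimated class posteriors, and the encoder minimizes squared error plus a weighted Bayes risk term.

```python
import numpy as np

def encode_block(x, codebook, posteriors, lam):
    """Return the index of the codeword minimizing
    squared error + lam * estimated Bayes risk."""
    sq_err = np.sum((codebook - x) ** 2, axis=1)
    # With 0/1 costs, the Bayes risk of a codeword's majority-class
    # label is one minus that class's posterior probability.
    risk = 1.0 - posteriors.max(axis=1)
    return int(np.argmin(sq_err + lam * risk))

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
posteriors = np.array([[0.9, 0.1], [0.5, 0.5]])  # per-codeword class posteriors
# With lam = 0 the nearest codeword (index 1) would win; the risk term
# flips the choice to the class-confident codeword 0.
print(encode_block(np.array([0.6, 0.6]), codebook, posteriors, lam=2.0))
```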
Online learning and stochastic approximations
In Online Learning in Neural Networks, 1998
"... The convergence of online learning algorithms is analyzed using the tools of the stochastic approximation theory, and proved under very weak conditions. A general framework for online learning algorithms is first presented. This framework encompasses the most common online learning algorithms in use ..."
Abstract

Cited by 34 (0 self)
 Add to MetaCart
The convergence of online learning algorithms is analyzed using tools from stochastic approximation theory and is proved under very weak conditions. A general framework for online learning algorithms is first presented. This framework encompasses the most common online learning algorithms in use today, as illustrated by several examples. Stochastic approximation theory then provides general results describing the convergence of all these learning algorithms at once.
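For concreteness, here is a minimal sketch (mine, not the paper's) of an online learning rule cast as a stochastic approximation: least-mean-squares updates with step sizes a_t chosen to satisfy the classical Robbins-Monro conditions, with the sum of a_t divergent and the sum of a_t squared finite.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
w = np.zeros(2)

for t in range(100_000):
    x = rng.normal(size=2)             # one example at a time: online setting
    y = w_true @ x + 0.1 * rng.normal()
    grad = (w @ x - y) * x             # noisy gradient of the squared error
    a_t = 0.1 / (1.0 + 0.001 * t)      # sum a_t = inf, sum a_t**2 < inf
    w -= a_t * grad

print(w)                               # converges toward w_true
```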
A neural network based hybrid system for detection, characterization and classification of short-duration oceanic signals
IEEE Journal of Oceanic Engineering, 1992
"... AbstractAutomated identification and classification of shortduration oceanic signals obtained from passive sonar is a complex problem because of the large variability in both temporal and spectral characteristics even in signals obtained from the same source. This paper presents the design and eva ..."
Abstract

Cited by 32 (19 self)
 Add to MetaCart
(Show Context)
Automated identification and classification of short-duration oceanic signals obtained from passive sonar is a complex problem because of the large variability in both temporal and spectral characteristics, even in signals obtained from the same source. This paper presents the design and evaluation of a comprehensive classifier system for such signals. We first highlight the importance of selecting appropriate signal descriptors or feature vectors for high-quality classification of realistic short-duration oceanic signals. Wavelet-based feature extractors are shown to be superior to the more commonly used autoregressive coefficients and power spectral coefficients for this purpose. A variety of static neural network classifiers are evaluated and compared favorably with traditional statistical techniques for signal classification. We concentrate on those networks that are able to tune out irrelevant input features and are less susceptible to noisy inputs, and introduce two new neural-network-based classifiers. Methods for combining the outputs of several classifiers to yield a more accurate labeling are proposed and evaluated based on the interpretation of network outputs as approximating posterior class probabilities. These methods lead to higher classification accuracy and also provide a mechanism for recognizing deviant signals and false alarms. Performance results are given for signals in the DARPA standard data set I. Keywords: neural networks, pattern classification, passive sonar, short-duration oceanic signals, feature extraction, evidence combination.
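A minimal sketch of the evidence-combination step (the averaging rule and fixed rejection threshold are my simplifications): each network's outputs are read as approximate posterior class probabilities, averaged, and the signal is flagged as deviant when the combined confidence stays low.

```python
import numpy as np

def combine_and_label(posteriors, reject_threshold=0.5):
    """posteriors: (n_classifiers, n_classes), each row ~ P(class | x).
    Returns the combined label, or -1 for a deviant signal / false alarm."""
    combined = posteriors.mean(axis=0)       # average the posterior estimates
    label = int(np.argmax(combined))
    return label if combined[label] >= reject_threshold else -1

# Three classifiers agree only weakly on class 1, so the signal is rejected.
print(combine_and_label(np.array([[0.40, 0.35, 0.25],
                                  [0.30, 0.40, 0.30],
                                  [0.20, 0.45, 0.35]])))
```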
Dynamics and generalization ability of LVQ algorithms
Journal of Machine Learning Research, 2006
"... Learning vector quantization (LVQ) schemes constitute intuitive, powerful classification heuristics with numerous successful applications but, so far, limited theoretical background. We study LVQ rigorously within a simplifying model situation: two competing prototypes are trained from a sequence of ..."
Abstract

Cited by 30 (16 self)
 Add to MetaCart
(Show Context)
Learning vector quantization (LVQ) schemes constitute intuitive, powerful classification heuristics with numerous successful applications but, so far, limited theoretical background. We study LVQ rigorously within a simplifying model situation: two competing prototypes are trained from a sequence of examples drawn from a mixture of Gaussians. Concepts from statistical physics and the theory of online learning allow for an exact description of the training dynamics in high-dimensional feature space. The analysis yields typical learning curves, convergence properties, and achievable generalization abilities. This is also possible for heuristic training schemes which do not relate to a cost function. We compare the performance of several algorithms, including Kohonen's LVQ1 and LVQ+/-, a limiting case of LVQ2.1. The former shows close to optimal performance, while LVQ+/- displays divergent behavior. We investigate how early stopping can overcome this difficulty. Furthermore, we study a crisp version of robust soft LVQ, which was recently derived from a statistical formulation. Surprisingly, it exhibits relatively poor generalization. Performance improves if a window for the selection of data is introduced; the resulting algorithm corresponds to cost-function-based LVQ2. The dependence of these results on the model parameters, for example prior class probabilities, is investigated systematically; simulations confirm our analytical findings. Keywords: prototype-based classification, learning vector quantization, Winner-Takes-All algorithms, online learning, competitive learning.
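For reference, the LVQ1 rule studied here is a one-line update; the sketch below is my own simplification with a constant learning rate rather than any schedule from the paper. The winning prototype is attracted to a correctly labeled example and repelled by an incorrectly labeled one; the repulsive force, applied to both nearest prototypes on every example in LVQ+/-, is what can drive prototypes to diverge.

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, y, eta=0.01):
    """One LVQ1 update: move the winner toward x if labels agree,
    away from x otherwise (winner-takes-all)."""
    w = int(np.argmin(np.sum((prototypes - x) ** 2, axis=1)))
    sign = 1.0 if proto_labels[w] == y else -1.0
    prototypes[w] += sign * eta * (x - prototypes[w])
    return prototypes
```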
Prototype Selection for Composite Nearest Neighbor Classifiers
1997
"... Combining the predictions of a set of classifiers has been shown to be an effective way to create composite classifiers that are more accurate than any of the component classifiers. Increased accuracy has been shown in a variety of realworld applications, ranging from protein sequence identificatio ..."
Abstract

Cited by 30 (1 self)
 Add to MetaCart
Combining the predictions of a set of classifiers has been shown to be an effective way to create composite classifiers that are more accurate than any of the component classifiers. Increased accuracy has been shown in a variety of real-world applications, ranging from protein sequence identification to determining the fat content of ground meat. Despite such individual successes, the answers to fundamental questions about classifier combination are not known, such as "Can classifiers from any given model class be combined to create a composite classifier with higher accuracy?" or "Is it possible to increase the accuracy of a given classifier by combining its predictions with those of only a small number o...
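As a toy illustration of a composite nearest-neighbor classifier (the construction is mine, not the paper's): several 1-NN components, each restricted to its own selected prototype set, vote on the final label.

```python
import numpy as np

def one_nn(prototypes, labels, x):
    """Label of the nearest prototype under squared Euclidean distance."""
    return labels[int(np.argmin(np.sum((prototypes - x) ** 2, axis=1)))]

def composite_predict(components, x):
    """components: list of (prototypes, labels) pairs; majority vote."""
    votes = [one_nn(P, L, x) for P, L in components]
    return max(set(votes), key=votes.count)
```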
A Global Optimization Technique for Statistical Classifier Design
 IEEE Transactions on Signal Processing
"... A global optimization method is introduced for the design of statistical classifiers that minimize the rate of misclassification. We first derive the theoretical basis for the method, based on which we develop a novel design algorithm and demonstrate its effectiveness and superior performance in the ..."
Abstract

Cited by 28 (10 self)
 Add to MetaCart
(Show Context)
A global optimization method is introduced for the design of statistical classifiers that minimize the rate of misclassification. We first derive the theoretical basis for the method, based on which we develop a novel design algorithm and demonstrate its effectiveness and superior performance in the design of practical classifiers for some of the most popular structures currently in use. The method, grounded in ideas from statistical physics and information theory, extends the deterministic annealing approach for optimization, both to incorporate structural constraints on data assignments to classes and to minimize the probability of error as the cost objective. During the design, data are assigned to classes in probability, so as to minimize the expected classification error given a specified level of randomness, as measured by Shannon's entropy. The constrained optimization is equivalent to a free energy minimization, motivating a deterministic annealing approach in which the entropy...
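The central mechanism, soft class assignments at a temperature, can be sketched briefly; the example costs and fixed cooling schedule below are assumptions of mine. At each temperature T, the Gibbs distribution over assignments minimizes the free energy, expected cost minus T times the Shannon entropy, and lowering T hardens the assignments.

```python
import numpy as np

def gibbs_assignments(costs, T):
    """costs: (n_points, n_classes). Returns the probabilistic
    assignments minimizing expected cost - T * Shannon entropy."""
    logits = -costs / T
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

costs = np.array([[0.1, 0.9],
                  [0.5, 0.4]])
for T in (1.0, 0.1, 0.01):                       # annealing: lower T stepwise
    print(T, gibbs_assignments(costs, T).round(3))
```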
Using Vector Quantization for Image Processing
Proc. IEEE, 1993
"... Image compression is the process of reducing the number of bits required to represent an image. Vector quantization, the mapping of pixel intensity vectors into binary vectors indexing a limited number of possible reproductions, is a popular image compression algorithm. Compression has traditionally ..."
Abstract

Cited by 27 (2 self)
 Add to MetaCart
(Show Context)
Image compression is the process of reducing the number of bits required to represent an image. Vector quantization, the mapping of pixel intensity vectors into binary vectors indexing a limited number of possible reproductions, is a popular image compression algorithm. Compression has traditionally been done with little regard for image processing operations that may precede or follow the compression step. Recent work has used vector quantization both to simplify image processing tasks, such as enhancement, classification, halftoning, and edge detection, and to reduce the computational complexity by performing them simultaneously with the compression. After briefly reviewing the fundamental ideas of vector quantization, we present a survey of vector quantization algorithms that perform image processing.
1 Introduction. Data compression is the mapping of a data set into a bit stream to decrease the number of bits required to represent the data set. With data compression, one can st...
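A minimal sketch of the basic compression step (codebook training is omitted; the 4x4 block size and random codebook are placeholders of mine): each pixel-block vector is replaced by the index of its nearest codeword, and decoding is a table lookup.

```python
import numpy as np

def vq_encode(blocks, codebook):
    """blocks: (n, d) pixel-intensity vectors -> one codeword index each."""
    d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def vq_decode(indices, codebook):
    return codebook[indices]                 # reproduction vectors

rng = np.random.default_rng(0)
codebook = rng.random((16, 16))              # 16 codewords for 4x4 blocks
blocks = rng.random((100, 16))
indices = vq_encode(blocks, codebook)        # 4 bits per 16-pixel block
reconstruction = vq_decode(indices, codebook)
```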
Self-Organizing Process Based On Lateral Inhibition And Synaptic Resource Redistribution
In Proceedings of the International Conference on Artificial Neural Networks, 1991
"... implementation Selforganization can be efficiently implemented based on Euclidian distance and global supervision. It is not necessary to explicitly model the connections between the units in the network. Every unit computes the distance between its weight vector and the input vector. An external ..."
Abstract

Cited by 23 (7 self)
 Add to MetaCart
(Show Context)
Self-organization can be efficiently implemented based on Euclidean distance and global supervision. It is not necessary to explicitly model the connections between the units in the network. Every unit computes the distance between its weight vector and the input vector. An external supervisor finds the unit with the smallest distance, looks up the current neighborhood radius from a training schedule, and tells the units within this radius to modify their input weights. The weight adaptations are proportional to the Euclidean difference.
[Figure 1: Abstract implementation of self-organization after (a) 0, (b) 30, (c) 100, and (d) 10,000 samples. The map consists of 20 x 20 units in a 2D array; the weight vector of each unit is shown as a point on the unit square 0 <= x, y <= 1, connected by lines to the weight vectors of the four neighboring units.]
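The training loop described above is compact enough to sketch directly; the learning-rate and radius schedules below are assumptions of mine, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.random((20, 20, 2))                  # 20 x 20 map of 2-D weight vectors

def som_step(W, x, t, n_steps):
    frac = t / n_steps
    radius = 1.0 + 9.0 * (1.0 - frac)        # shrinking neighborhood radius
    alpha = 0.5 * (1.0 - frac)               # decaying adaptation strength
    d2 = np.sum((W - x) ** 2, axis=2)
    bi, bj = np.unravel_index(np.argmin(d2), d2.shape)   # winning unit
    ii, jj = np.indices(d2.shape)
    near = (ii - bi) ** 2 + (jj - bj) ** 2 <= radius ** 2
    W[near] += alpha * (x - W[near])         # move nearby units toward input
    return W

n_steps = 10_000
for t in range(n_steps):
    W = som_step(W, rng.random(2), t, n_steps)
```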
Using Self-Organizing Maps and Learning Vector Quantization for Mixture Density Hidden Markov Models
1997
"... This work presents experiments to recognize pattern sequences using hidden Markov models (HMMs). The pattern sequences in the experiments are computed from speech signals and the recognition task is to decode the corresponding phoneme sequences. The training of the HMMs of the phonemes using the col ..."
Abstract

Cited by 22 (9 self)
 Add to MetaCart
This work presents experiments on recognizing pattern sequences using hidden Markov models (HMMs). The pattern sequences in the experiments are computed from speech signals, and the recognition task is to decode the corresponding phoneme sequences. Training the HMMs of the phonemes from the collected speech samples is a difficult task because of the natural variation in speech. Two neural computing paradigms, the Self-Organizing Map (SOM) and Learning Vector Quantization (LVQ), are used in the experiments to improve the recognition performance of the models. An HMM consists of sequential states which are trained to model the feature changes in the signal produced during the modeled process. The output densities applied in this work are mixtures of Gaussian density functions. SOMs are applied to initialize and train the mixtures to give a smooth and faithful representation of the feature vector space defined by the corresponding training samples. The SOM maps similar feature vect...
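A minimal sketch of the initialization idea (a simplification of mine: a pre-trained SOM codebook seeds the mixture for one HMM state, with hard nearest-codeword assignments giving the first weights and variances):

```python
import numpy as np

def init_mixture_from_som(som_codebook, X):
    """Seed a diagonal-covariance Gaussian mixture from a trained SOM:
    codebook vectors become means; nearest-codeword assignments of the
    training vectors X give initial weights and variances."""
    means = som_codebook.copy()                            # (M, d)
    d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    assign = d2.argmin(axis=1)
    M, d = means.shape
    weights = np.bincount(assign, minlength=M) / len(X)
    variances = np.array([X[assign == m].var(axis=0) if np.any(assign == m)
                          else np.ones(d) for m in range(M)])
    return weights, means, np.maximum(variances, 1e-6)     # floor variances
```

Standard reestimation (or further SOM/LVQ training, as the abstract describes) would then refine these seeded densities; the point is that the SOM gives the mixtures a smooth, faithful starting representation of the feature space.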