Results 1 - 3 of 3
Using Vector Quantization for Image Processing
Proc. IEEE, 1993
Abstract

Cited by 23 (1 self)
Image compression is the process of reducing the number of bits required to represent an image. Vector quantization, the mapping of pixel intensity vectors into binary vectors indexing a limited number of possible reproductions, is a popular image compression algorithm. Compression has traditionally been done with little regard for image processing operations that may precede or follow the compression step. Recent work has used vector quantization both to simplify image processing tasks, such as enhancement, classification, halftoning, and edge detection, and to reduce the computational complexity by performing them simultaneously with the compression. After briefly reviewing the fundamental ideas of vector quantization, we present a survey of vector quantization algorithms that perform image processing.

1 Introduction

Data compression is the mapping of a data set into a bit stream to decrease the number of bits required to represent the data set. With data compression, one can st...
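The encoding step this abstract describes, mapping each pixel-intensity vector to the index of its nearest codeword in a limited codebook, can be sketched as follows. The codebook values and 2x2-block vectors here are hand-picked for illustration; a real system would train the codebook (e.g., with a clustering algorithm) rather than fix it by hand.

```python
# Minimal vector quantization sketch: each 2x2 pixel block (a 4-vector)
# is replaced by the index of its nearest codeword in a small codebook.

def squared_distance(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def vq_encode(blocks, codebook):
    """Map each intensity vector to the index of its nearest codeword."""
    return [min(range(len(codebook)),
                key=lambda i: squared_distance(block, codebook[i]))
            for block in blocks]

def vq_decode(indices, codebook):
    """Reconstruct each block from its codeword (lossy)."""
    return [codebook[i] for i in indices]

# Illustrative 3-entry codebook: dark, mid-gray, and bright blocks.
codebook = [(0, 0, 0, 0), (128, 128, 128, 128), (255, 255, 255, 255)]
blocks = [(10, 5, 0, 12), (250, 255, 240, 251), (120, 130, 125, 135)]
indices = vq_encode(blocks, codebook)  # → [0, 2, 1]
```

Each 4-vector of 8-bit intensities (32 bits) collapses to a 2-bit index, which is where the compression comes from; the decoder only needs the codebook and the index stream.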
Efficient Higher-order Neural Networks for Classification and Function Approximation
International Journal of Neural Systems, 1995
Abstract

Cited by 11 (1 self)
This paper introduces a class of higher-order networks called pi-sigma networks (PSNs). PSNs are feedforward networks with a single "hidden" layer of linear summing units and with product units in the output layer. A PSN uses these product units to indirectly incorporate the capabilities of higher-order networks while greatly reducing network complexity. PSNs have only one layer of adjustable weights and exhibit fast learning. A PSN with K summing units provides a constrained Kth-order approximation of a continuous function. A generalization of the PSN is presented that can uniformly approximate any measurable function. The use of linear hidden units makes it possible to mathematically study the convergence properties of various LMS-type learning algorithms for PSNs. We show that it is desirable to update only a partial set of weights at a time rather than synchronously updating all the weights. Bounds for learning rates which guarantee convergence are derived. Several simulation re...
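The architecture the abstract describes, a layer of linear summing units feeding a product unit, can be sketched in a forward pass. The weights, biases, and the sigmoid output nonlinearity below are illustrative assumptions, not values from the paper; the point is that the product of K linear sums yields a constrained Kth-order polynomial in the inputs.

```python
import math

def pi_sigma_forward(x, weights, biases):
    """Forward pass of a pi-sigma network sketch: K linear summing units
    followed by a single product unit and a sigmoid output."""
    # Hidden layer: linear summing units (no nonlinearity).
    sums = [sum(w * xi for w, xi in zip(wj, x)) + bj
            for wj, bj in zip(weights, biases)]
    # Output layer: product unit; multiplying K linear sums gives a
    # constrained Kth-order polynomial in the inputs.
    product = math.prod(sums)
    return 1.0 / (1.0 + math.exp(-product))

# Two summing units over a 2-dimensional input: a second-order approximation.
weights = [[0.5, -0.2], [0.1, 0.3]]  # illustrative, untrained values
biases = [0.0, 0.1]
y = pi_sigma_forward([1.0, 2.0], weights, biases)
```

Because only the summing-unit weights are adjustable, training touches a single layer of weights, which is what makes the LMS-style convergence analysis mentioned in the abstract tractable.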
unknown title
Abstract
In this paper, four kinds of neural network classifiers are used for the classification of underwater passive sonar signals radiated by ships. The classification process can be divided into two stages. In the preprocessing and feature extraction stage, the Two-Pass Split-Window (TPSW) algorithm is used to extract tonal features from the average power spectral density (APSD) of the input data. In the classification stage, four kinds of static neural network classifiers are used to evaluate the classification results: the probability-based Probabilistic Neural Network (PNN), the hyperplane-based Multilayer Perceptron (MLP), the kernel-based Adaptive Kernel Classifier (AKC), and the exemplar-based Learning Vector Quantization (LVQ). For comparison, the same classifiers with the Dyadic Wavelet Transform (DWT) used for feature extraction are evaluated against the proposed method. Experimental results show that feature extraction using TPSW gives better classification performance than using DWT, but requires more computation time. Among the neural network classifiers, the exemplar-based classifiers outperform the others in both learning speed and classification rate. Moreover, the classifier using the LVQ family with features extracted by DWT can reach the same correct-classification rate (100%) as the classifiers using the various networks with features extracted by TPSW. A detailed discussion of the experimental results is also included.
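As a sketch of the exemplar-based approach that performed best in this comparison, a single LVQ1 update step pulls the nearest prototype toward a training sample when their class labels agree and pushes it away when they disagree. The prototypes, labels, and learning rate below are hypothetical, and this is the standard LVQ1 rule rather than the specific LVQ variants evaluated in the paper.

```python
def lvq1_step(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 update: pull the nearest prototype toward sample x when
    the labels agree, push it away otherwise. Values are illustrative."""
    # Find the prototype nearest to x (squared Euclidean distance).
    j = min(range(len(prototypes)),
            key=lambda i: sum((p - xi) ** 2
                              for p, xi in zip(prototypes[i], x)))
    # Attract on a label match, repel on a mismatch.
    sign = 1.0 if labels[j] == y else -1.0
    prototypes[j] = tuple(p + sign * lr * (xi - p)
                          for p, xi in zip(prototypes[j], x))
    return j

# Hypothetical 2-class setup: one prototype per class.
prototypes = [(0.0, 0.0), (1.0, 1.0)]
labels = [0, 1]
winner = lvq1_step(prototypes, labels, x=(0.9, 0.9), y=1)
# Prototype 1 wins and, since the labels agree, moves toward the sample.
```

Learning here amounts to repeatedly nudging a small set of labeled exemplars, which is why LVQ trains quickly compared with gradient-based classifiers such as the MLP.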