Results 1 - 2 of 2
Using Vector Quantization for Image Processing
Proc. IEEE, 1993
Abstract

Cited by 23 (1 self)
Image compression is the process of reducing the number of bits required to represent an image. Vector quantization, the mapping of pixel-intensity vectors into binary vectors indexing a limited number of possible reproductions, is a popular image compression algorithm. Compression has traditionally been done with little regard for the image processing operations that may precede or follow the compression step. Recent work has used vector quantization both to simplify image processing tasks, such as enhancement, classification, halftoning, and edge detection, and to reduce their computational complexity by performing them simultaneously with the compression. After briefly reviewing the fundamental ideas of vector quantization, we present a survey of vector quantization algorithms that perform image processing.
1 Introduction
Data compression is the mapping of a data set into a bit stream to decrease the number of bits required to represent the data set. With data compression, one can st...
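The abstract's core idea, mapping pixel-intensity vectors to indices into a small codebook of reproductions, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it learns the codebook with plain k-means (Lloyd's algorithm) over flattened image blocks, where the block size, codebook size `k`, and the toy two-level "image" are all assumptions for the example.

```python
import numpy as np

def train_codebook(vectors, k, iters=20, seed=0):
    """Learn a k-entry codebook via k-means over training vectors."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        # assign each vector to its nearest codeword (squared distance)
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)
        # move each codeword to the centroid of the vectors assigned to it
        for j in range(k):
            members = vectors[idx == j]
            if len(members):
                codebook[j] = members.mean(0)
    return codebook

def quantize(vectors, codebook):
    """Map each vector to the index of its nearest codeword."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)

# toy "image": twenty 4x4 blocks flattened into 16-dim intensity vectors,
# half all-black (0) and half all-white (255)
blocks = np.vstack([np.zeros((10, 16)), np.full((10, 16), 255.0)])
cb = train_codebook(blocks, k=2)
indices = quantize(blocks, cb)  # one small codeword index per block
```

Compression comes from transmitting only `indices` (here, 1 bit per 16-pixel block) plus the codebook; the decoder looks each index back up in the codebook to reconstruct the block.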
Efficient Higher-order Neural Networks for Classification and Function Approximation
International Journal of Neural Systems, 1995
Abstract

Cited by 11 (1 self)
This paper introduces a class of higher-order networks called pi-sigma networks (PSNs). PSNs are feedforward networks with a single "hidden" layer of linear summing units and with product units in the output layer. A PSN uses these product units to indirectly incorporate the capabilities of higher-order networks while greatly reducing network complexity. PSNs have only one layer of adjustable weights and exhibit fast learning. A PSN with K summing units provides a constrained Kth-order approximation of a continuous function. A generalization of the PSN is presented that can uniformly approximate any measurable function. The use of linear hidden units makes it possible to mathematically study the convergence properties of various LMS-type learning algorithms for PSNs. We show that it is desirable to update only a partial set of weights at a time rather than synchronously updating all the weights. Bounds for learning rates which guarantee convergence are derived. Several simulation re...
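The architecture described above, K linear summing units feeding a single product unit, can be sketched in a few lines. This is a hedged illustration assuming a single sigmoid output: the weight scale, dimensions, and random inputs are placeholders, and the only trainable parameters are the summing-unit weights `W` and biases `b`, matching the paper's "one layer of adjustable weights."

```python
import numpy as np

def pi_sigma_forward(x, W, b):
    """Forward pass of a pi-sigma network (PSN).

    W: (K, d) weights of the K linear summing units (the only trainable layer)
    b: (K,) biases of the summing units
    The output product unit multiplies the K linear sums, giving a
    constrained Kth-order polynomial in x, then applies a sigmoid.
    """
    sums = W @ x + b            # K linear summing units, no hidden nonlinearity
    product = np.prod(sums)     # single product unit in the output layer
    return 1.0 / (1.0 + np.exp(-product))

rng = np.random.default_rng(1)
K, d = 3, 5                     # a 3rd-order approximation over 5 inputs
W = rng.normal(size=(K, d)) * 0.1
b = np.zeros(K)
y = pi_sigma_forward(rng.normal(size=d), W, b)
```

Note that expanding `np.prod(W @ x + b)` yields degree-K terms in the inputs, which is how the product unit captures higher-order interactions without an explosion of weights; the asynchronous training the abstract mentions would update only one summing unit's row of `W` per step.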