Results 1–10 of 82
Quantization
 IEEE TRANS. INFORM. THEORY, 1998
Abstract
Cited by 639 (11 self)
The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analog-to-digital conversion was first recognized during the early development of pulse-code modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate distortion theory, which would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.
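Bennett's high-resolution result mentioned in the abstract, that a uniform quantizer with a small step size Δ incurs a mean squared error of about Δ²/12, can be checked numerically. This is an illustrative sketch, not code from the paper:

```python
import random

def uniform_quantize(x, step):
    """Mid-tread uniform quantizer: round x to the nearest multiple of `step`."""
    return step * round(x / step)

# High-resolution check: for a fine step size, the mean squared
# quantization error of a uniformly distributed input approaches step**2 / 12.
random.seed(0)
step = 0.01
samples = [random.uniform(-1.0, 1.0) for _ in range(200_000)]
mse = sum((x - uniform_quantize(x, step)) ** 2 for x in samples) / len(samples)
print(abs(mse - step ** 2 / 12) < step ** 2 / 120)  # True: within 10% of Δ²/12
```

The Δ²/12 figure is exactly the variance of a quantization error distributed uniformly over one cell [−Δ/2, Δ/2], which is the high-resolution approximation the survey analyzes.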
Self Organization of a Massive Document Collection
 IEEE Transactions on Neural Networks
Abstract
Cited by 204 (14 self)
This article describes the implementation of a system that is able to organize vast document collections according to textual similarities. It is based on the Self-Organizing Map (SOM) algorithm. As the feature vectors for the documents we use statistical representations of their vocabularies. The main goal in our work has been to scale up the SOM algorithm to be able to deal with large amounts of high-dimensional data. In a practical experiment we mapped 6,840,568 patent abstracts onto a 1,002,240-node SOM. As the feature vectors we used 500-dimensional vectors of stochastic figures obtained as random projections of weighted word histograms.
Keywords—Data mining, exploratory data analysis, knowledge discovery, large databases, parallel implementation, random projection, Self-Organizing Map (SOM), textual documents.
I. Introduction
A. From simple searches to browsing of self-organized data collections
Locating documents on the basis of keywords and simple search expressions is a c...
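The random-projection step the abstract describes can be sketched at toy scale. The dimensions, the Gaussian projection matrix, and the synthetic histograms below are illustrative assumptions, not the paper's 500-dimensional setup:

```python
import math
import random

def random_projection_matrix(input_dim, output_dim, seed=0):
    """Dense Gaussian random projection matrix, scaled so that Euclidean
    distances are approximately preserved in expectation."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1 / math.sqrt(output_dim)) for _ in range(input_dim)]
            for _ in range(output_dim)]

def project(matrix, histogram):
    """Project a (weighted) word histogram into the lower-dimensional space."""
    return [sum(r * h for r, h in zip(row, histogram)) for row in matrix]

def dist(u, v):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

# Toy word histograms: a 1000-word vocabulary projected to 50 dimensions.
R = random_projection_matrix(1000, 50)
rng = random.Random(1)
a = [rng.random() for _ in range(1000)]
b = [rng.random() for _ in range(1000)]

# Random projection approximately preserves pairwise Euclidean distances,
# which is why SOM training can run on the projected vectors instead.
ratio = dist(project(R, a), project(R, b)) / dist(a, b)
```

The distance ratio concentrates around 1 as the output dimension grows, which is the property that lets the projected vectors stand in for the full histograms during map training.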
Person identification using multiple cues
 IEEE Transactions on Pattern Analysis and Machine Intelligence, 1995
Abstract
Cited by 171 (1 self)
Abstract—This paper presents a person identification system based on acoustic and visual features. The system is organized as a set of non-homogeneous classifiers whose outputs are integrated after a normalization step. In particular, two classifiers based on acoustic features and three based on visual ones provide data for an integration module whose performance is evaluated. A novel technique for the integration of multiple classifiers at a hybrid rank/measurement level is introduced using HyperBF networks. Two different methods for the rejection of an unknown person are introduced. The performance of the integrated system is shown to be superior to that of the acoustic and visual subsystems. The resulting identification system can be used to log personal access and, with minor modifications, as an identity verification system.
Index Terms—Template matching, robust statistics, correlation, face recognition, speaker recognition, learning, classification.
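The normalize-then-integrate idea can be illustrated with a minimal sketch. The scores, the z-score normalization, and the plain summation rule below are illustrative stand-ins for the paper's actual classifiers and its HyperBF integration module:

```python
import statistics

def zscore(scores):
    """Normalize one classifier's scores to zero mean, unit variance, so
    classifiers with different score scales become comparable."""
    mu = statistics.mean(scores)
    sigma = statistics.pstdev(scores) or 1.0  # guard against constant scores
    return [(s - mu) / sigma for s in scores]

def integrate(score_lists):
    """Sum the normalized scores each classifier assigns to each identity
    (a toy stand-in for a learned integration module)."""
    normalized = [zscore(s) for s in score_lists]
    return [sum(col) for col in zip(*normalized)]

# Hypothetical match scores for 4 enrolled identities from three
# classifiers (two acoustic, one visual); higher means a better match.
acoustic1 = [0.9, 0.2, 0.1, 0.3]
acoustic2 = [0.7, 0.4, 0.2, 0.1]
visual    = [0.6, 0.5, 0.1, 0.2]

combined = integrate([acoustic1, acoustic2, visual])
best = max(range(4), key=lambda i: combined[i])
print(best)  # 0: identity 0 scores highest in all three classifiers
```

Normalization is the step that makes the integration meaningful: without it, the classifier with the widest score range would dominate the sum.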
Signal modeling techniques in speech recognition
 PROCEEDINGS OF THE IEEE, 1993
Abstract
Cited by 126 (5 self)
We have seen three important trends develop in the last five years in speech recognition. First, heterogeneous parameter sets that mix absolute spectral information with dynamic, or time-derivative, spectral information have become common. Second, similarity transform techniques, often used to normalize and decorrelate parameters in some computationally inexpensive way, have become popular. Third, the signal parameter estimation problem has merged with the speech recognition process so that more sophisticated statistical models of the signal's spectrum can be estimated in a closed-loop manner. In this paper, we review the signal processing components of these algorithms. These algorithms are presented as part of a unified view of the signal parameterization problem in which there are three major tasks: measurement, transformation, and statistical modeling. This paper is by no means a comprehensive survey of all possible techniques of signal modeling in speech recognition. There are far too many algorithms in use today to make an exhaustive survey feasible (and cohesive). Instead, this paper is meant to serve as a tutorial on signal processing in state-of-the-art speech recognition systems and to review those techniques most commonly used. In keeping with this goal, a complete mathematical description of each algorithm has been included in the paper.
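As one concrete instance of the "measurement" task, first-order pre-emphasis is a common front-end step in speech systems: it flattens the spectral tilt by differencing the signal. The coefficient 0.97 is a conventional choice, not a value taken from this paper:

```python
def pre_emphasize(signal, alpha=0.97):
    """First-order pre-emphasis filter: y[n] = x[n] - alpha * x[n-1].
    Suppresses low-frequency (slowly varying) content, boosts high-frequency."""
    return [signal[0]] + [x - alpha * xp for x, xp in zip(signal[1:], signal)]

# A flat (DC) signal is almost cancelled; an alternating (high-frequency)
# signal is amplified.
y_flat = pre_emphasize([1.0, 1.0, 1.0, 1.0])
y_alt = pre_emphasize([1.0, -1.0, 1.0, -1.0])
print(round(y_flat[1], 2), round(y_alt[1], 2))  # 0.03 -1.97
```

This kind of fixed linear filter sits at the "measurement" end of the paper's measurement/transformation/statistical-modeling pipeline, before any normalizing transforms are applied.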
Data Exploration Using SelfOrganizing Maps
 ACTA POLYTECHNICA SCANDINAVICA: MATHEMATICS, COMPUTING AND MANAGEMENT IN ENGINEERING SERIES NO. 82, 1997
Abstract
Cited by 96 (4 self)
Finding structures in vast multidimensional data sets, be they measurement data, statistics, or textual documents, is difficult and time-consuming. Interesting, novel relations between the data items may be hidden in the data. The self-organizing map (SOM) algorithm of Kohonen can be used to aid the exploration: the structures in the data sets can be illustrated on special map displays. In this work, the methodology of using SOMs for exploratory data analysis or data mining is reviewed and developed further. The properties of the maps are compared with the properties of related methods intended for visualizing high-dimensional multivariate data sets. In a set of case studies the SOM algorithm is applied to analyzing electroencephalograms, to illustrating structures of the standard of living in the world, and to organizing full-text document collections. Measures are proposed for evaluating the quality of different types of maps in representing a given data set, and for measuring the robu...
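The core SOM training loop underlying these case studies can be sketched as follows. The grid size, learning-rate schedule, and Gaussian neighborhood kernel are common textbook choices, not parameters taken from this work:

```python
import math
import random

def train_som(data, rows, cols, dim, epochs=40, seed=0):
    """Plain SOM loop: find the best-matching unit (BMU) for each sample,
    then pull the BMU and its grid neighbors toward the sample."""
    rng = random.Random(seed)
    units = [[rng.random() for _ in range(dim)] for _ in range(rows * cols)]
    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)                      # decaying learning rate
        radius = max(1.0, (rows + cols) / 4 * (1 - t / epochs))  # shrinking radius
        for x in data:
            b = bmu(units, x)
            br, bc = divmod(b, cols)
            for i, unit in enumerate(units):
                r, c = divmod(i, cols)
                d2 = (r - br) ** 2 + (c - bc) ** 2
                h = lr * math.exp(-d2 / (2 * radius ** 2))  # neighborhood kernel
                units[i] = [u + h * (v - u) for u, v in zip(unit, x)]
    return units

def bmu(units, x):
    """Index of the unit whose weight vector is closest to x."""
    return min(range(len(units)),
               key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))

# Two well-separated clusters should map to different regions of a 4x4 grid.
data = [[0.0, 0.0]] * 10 + [[1.0, 1.0]] * 10
units = train_som(data, 4, 4, 2)
```

The "map display" idea follows directly from this: each data item is shown at the grid position of its BMU, so nearby grid cells hold similar items.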
LVQ PAK: The Learning Vector Quantization Program Package
 Helsinki University of Technology, Laboratory of Computer, 1996
Abstract
Cited by 86 (1 self)
Learning Vector Quantization (LVQ) is a group of algorithms applicable to statistical pattern recognition, in which the classes are described by a relatively small number of codebook vectors, properly placed within each zone such that the decision borders are approximated by the nearest-neighbor rule. The LVQ PAK program package contains all programs necessary for the correct application of certain Learning Vector Quantization algorithms in an arbitrary statistical classification or pattern recognition task, as well as a program for the monitoring of the codebook vectors at any time during the learning process. The first version 1.0 of this program package was published in 1991, and since then the package has been updated regularly to include the latest improvements in the LVQ implementations. This report, which contains the latest documentation, was prepared for bibliographical purposes.
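The basic LVQ1 update rule, where the nearest codebook vector is moved toward a correctly labeled sample and away from an incorrectly labeled one, can be sketched as follows. The toy data and parameters are illustrative assumptions, not LVQ PAK code:

```python
import random

def lvq1(samples, labels, codebooks, codebook_labels, lr=0.1, epochs=30):
    """LVQ1: for each sample, find the nearest codebook vector and move it
    toward the sample if labels match, away from it otherwise."""
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            i = min(range(len(codebooks)),
                    key=lambda j: sum((c - v) ** 2
                                      for c, v in zip(codebooks[j], x)))
            sign = 1.0 if codebook_labels[i] == y else -1.0
            codebooks[i] = [c + sign * lr * (v - c)
                            for c, v in zip(codebooks[i], x)]
    return codebooks

# Toy two-class data: two well-separated Gaussian clusters in 2-D.
random.seed(0)
samples = ([[random.gauss(0, 0.2), random.gauss(0, 0.2)] for _ in range(30)] +
           [[random.gauss(2, 0.2), random.gauss(2, 0.2)] for _ in range(30)])
labels = [0] * 30 + [1] * 30
book_labels = [0, 1]
books = lvq1(samples, labels, [[0.5, 0.5], [1.5, 1.5]], book_labels)

def classify(x):
    """Nearest-neighbor rule over the trained codebook vectors."""
    i = min(range(len(books)),
            key=lambda j: sum((c - v) ** 2 for c, v in zip(books[j], x)))
    return book_labels[i]

acc = sum(classify(x) == y for x, y in zip(samples, labels)) / len(samples)
```

After training, the two codebook vectors sit near the cluster means, so the nearest-neighbor decision border falls between the classes, which is exactly the placement the abstract describes.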
Clustering Based on Conditional Distributions in an Auxiliary Space
 Neural Computation, 2001
Abstract
Cited by 79 (22 self)
We study the problem of learning groups or categories that are local
Exact and Approximation Algorithms for Clustering
 1997
Abstract
Cited by 57 (5 self)
In this paper we present an n^{O(k^{1-1/d})}-time algorithm for solving the k-center problem in R^d, under the L∞ and L2 metrics. The algorithm extends to other metrics, and can be used to solve the discrete k-center problem as well. We also describe a simple (1 + ε)-approximation algorithm for the k-center problem, with running time O(n log k) + (k/ε)^{O(k^{1-1/d})}. Finally, we present an n^{O(k^{1-1/d})}-time algorithm for solving the L-capacitated k-center problem, provided that L = Ω(n/k^{1-1/d}) or L = O(1). We conclude with a simple approximation algorithm for the L-capacitated k-center problem.
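For contrast with the exact algorithms above, the classical greedy farthest-point heuristic of Gonzalez gives a simple 2-approximation for k-center. This sketch is a well-known baseline, not the paper's algorithm:

```python
def greedy_k_center(points, k):
    """Gonzalez's farthest-point heuristic: repeatedly add the point
    farthest from its nearest chosen center. A classical 2-approximation."""
    centers = [points[0]]
    while len(centers) < k:
        farthest = max(points, key=lambda p: min(
            sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers))
        centers.append(farthest)
    return centers

def radius(points, centers):
    """Covering radius: the largest distance from a point to its center."""
    return max(min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
               for p in points) ** 0.5

# Three natural clusters; the heuristic places one center in each.
pts = [(0, 0), (0, 1), (10, 0), (10, 1), (5, 20)]
centers = greedy_k_center(pts, 3)
print(radius(pts, centers))  # 1.0
```

The heuristic runs in O(nk) time; the paper's contribution is on the exact and (1 + ε)-approximate side, where the exponent depends on k^{1-1/d}.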
Compression of Multispectral Images by Three-Dimensional SPIHT Algorithm
 IEEE Transactions on Geoscience and Remote Sensing, 2000
Abstract
Cited by 46 (1 self)
We carry out low bit-rate compression of multispectral images by means of Said and Pearlman's SPIHT algorithm, suitably modified to take into account the inter-band dependencies. Two techniques are proposed: in the first, a three-dimensional (3D) transform is taken (wavelet in the spatial domain, Karhunen-Loève in the spectral domain) and a simple 3D SPIHT is used; in the second, after taking a spatial wavelet transform, spectral vectors of pixels are vector quantized and a gain-driven SPIHT is used. Numerous experiments on two sample multispectral images show very good performance for both algorithms.
Situs: A package for docking crystal structures into low-resolution maps from electron microscopy
 J. Struct. Biol., 1999
Abstract
Cited by 44 (3 self)
Three-dimensional image reconstructions of large-scale protein aggregates are routinely determined by electron microscopy (EM). We combine low-resolution EM data with high-resolution structures of proteins determined by x-ray crystallography. A set of visualization and analysis procedures, termed the Situs package, has been developed to provide an efficient and robust method for the localization of protein subunits in low-resolution data. Topology-representing neural networks are employed to vector-quantize and to correlate features within the structural data sets. Microtubules decorated with kinesin-related ncd motors are used as model aggregates to demonstrate the utility of this package of routines. The precision of the docking has allowed for the extraction of unique conformations of the macromolecules and is limited only by the reliability of the underlying structural data. © 1999 Academic Press
Key Words: topology-representing neural networks; multiresolution; visualization; macromolecular