Results 1–7 of 7
The Acquisition of Lexical Semantics for Spatial Terms: A Connectionist Model of Perceptual Categories, 1992
Abstract

Cited by 43 (2 self)
This thesis describes a connectionist model which learns to perceive spatial events and relations in simple movies of 2-dimensional objects, so as to name the events and relations as a speaker of a particular natural language would. Thus, the model learns perceptually grounded semantics for natural language spatial terms. Natural languages differ, sometimes dramatically, in the ways in which they structure space. The aim here has been to have the model be able to perform this learning task for terms from any natural language, and to have learning take place in the absence of explicit negative evidence, in order to rule out ad hoc solutions and to approximate the conditions under which children learn. The central focus of this thesis is a...
Using Self-Organizing Maps and Learning Vector Quantization for Mixture Density Hidden Markov Models, 1997
Abstract

Cited by 21 (9 self)
This work presents experiments to recognize pattern sequences using hidden Markov models (HMMs). The pattern sequences in the experiments are computed from speech signals and the recognition task is to decode the corresponding phoneme sequences. The training of the HMMs of the phonemes using the collected speech samples is a difficult task because of the natural variation in the speech. Two neural computing paradigms, the Self-Organizing Map (SOM) and the Learning Vector Quantization (LVQ), are used in the experiments to improve the recognition performance of the models. An HMM consists of sequential states which are trained to model the feature changes in the signal produced during the modeled process. The output densities applied in this work are mixtures of Gaussian density functions. SOMs are applied to initialize and train the mixtures to give a smooth and faithful presentation of the feature vector space defined by the corresponding training samples. The SOM maps similar feature vect...
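As a rough sketch of the SOM-based initialization idea described in this abstract: a small self-organizing map is trained on the feature vectors, and its codebook vectors are then used as initial mean vectors for the Gaussian mixture of a state. This is a toy 1-D map, assuming NumPy; the function name and training schedule are illustrative, not the paper's actual SOM/LVQ setup.

```python
import numpy as np

def som_init_means(X, n_units, n_iter=200, seed=0):
    """Train a toy 1-D Self-Organizing Map on feature vectors X
    (shape: n_samples x dim) and return its codebook vectors,
    usable as initial means for a Gaussian mixture density."""
    rng = np.random.default_rng(seed)
    # Initialize the codebook from randomly chosen training vectors.
    codebook = X[rng.choice(len(X), size=n_units, replace=False)].astype(float)
    for t in range(n_iter):
        lr = 0.5 * (1.0 - t / n_iter)                        # decaying learning rate
        radius = max(1.0, (n_units / 2) * (1.0 - t / n_iter))  # shrinking neighbourhood
        x = X[rng.integers(len(X))]                          # one random training vector
        bmu = int(np.argmin(np.linalg.norm(codebook - x, axis=1)))  # best-matching unit
        for j in range(n_units):
            # Neighbourhood kernel: units near the BMU on the map move more,
            # which is what gives the "smooth" coverage of the feature space.
            h = np.exp(-((j - bmu) ** 2) / (2 * radius ** 2))
            codebook[j] += lr * h * (x - codebook[j])
    return codebook
```

Because neighbouring map units are dragged toward the same inputs, the resulting codebook spreads smoothly over the training data rather than collapsing onto a few dense regions, which is the property the abstract highlights for mixture initialization.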
Discriminative Training of Hidden Markov Models, 1998
Abstract

Cited by 20 (0 self)
Contents: Abbreviations; Notation; 1 Introduction; 2 Hidden Markov Models (2.1 Definition, 2.2 HMM Modelling Assumptions, 2.3 HMM Topology, 2.4 Finding the Best Transcription, 2.5 Setting the Parameters, 2.6 Summary); 3 Objective Functions (3.1 Properties of Maximum Likelihood Estimators, 3.2 Maximum Likelihood, 3.3 Maximum Mutual Information, 3.4 Frame Discrimination, ...)
Connectionist Probability Estimation In The Decipher Speech Recognition System, 1992
Abstract

Cited by 14 (6 self)
Previously, we have demonstrated that feedforward networks may be used to estimate local output probabilities in hidden Markov model (HMM) speech recognition systems. Here these connectionist techniques are integrated into the DECIPHER system, with experiments being performed using the speaker-independent DARPA RM database. Our results indicate that:
- connectionist probability estimation can improve performance of a context-independent maximum likelihood trained HMM system,
- performance of the connectionist system is close to what can be achieved using (context-dependent) HMM systems of much higher complexity, and
- mixing connectionist and maximum likelihood estimates can improve the performance of a state-of-the-art context-dependent HMM system.
1 INTRODUCTION
Previous investigations, both theoretical and experimental, have indicated that feedforward networks (typically, multilayer perceptrons, MLPs) may be used to estimate local HMM output probabilities [1, 6]. Our previous p...
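The hybrid connectionist/HMM idea in this abstract can be sketched in a few lines: an MLP trained on frame-labelled data estimates posteriors P(state | x), and dividing each posterior by the state's prior yields a "scaled likelihood" proportional to p(x | state), which can stand in for the HMM's output density. The function name and array shapes below are illustrative assumptions, not the DECIPHER interface.

```python
import numpy as np

def scaled_likelihoods(posteriors, priors, eps=1e-10):
    """Convert MLP posteriors into scaled likelihoods.

    posteriors: (T, n_states) network outputs, one row per frame.
    priors:     (n_states,) relative state frequencies in training data.
    By Bayes' rule, P(q|x) / P(q) is proportional to p(x|q), so the
    returned values can replace HMM output densities during decoding
    (the common factor p(x) cancels when comparing state hypotheses).
    """
    return posteriors / np.maximum(priors, eps)
```

For example, with a flat posterior over two states whose priors are very unequal, the rarer state receives the larger scaled likelihood, reflecting that the network's output exceeds chance by a larger factor for that state.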
Connectionist Speech Recognition: Status and Prospects, 1991
Abstract

Cited by 7 (5 self)
We report on recent advances in the ICSI connectionist speech recognition project. Highlights include:
- Experimental results showing that connectionist methods can improve the performance of a context-independent maximum likelihood trained HMM system, resulting in a performance close to that achieved using state-of-the-art context-dependent HMM systems of much higher complexity.
- Mixing (context-independent) connectionist probability estimates with maximum likelihood trained context-dependent models to improve the performance of a state-of-the-art system.
- The development of a network decomposition method that allows connectionist modelling of context-dependent phones efficiently and parsimoniously, with no statistical independence assumptions.
Affiliations: L&H Speech Products, Ieper, B-8900 Belgium; SRI International, Menlo Park, CA 94025, USA.
Part I INTRODUCTION
The dominant approach to automatic continuous speech recognition is statistical [5, 7]. The resulting methods, which use cru...
Knowing What You Don’t Know: Roles for Confidence Measures in Automatic Speech Recognition, 1999
Abstract
The development of reliable measures of confidence for the decoding of speech sounds by machine has the potential to greatly enhance the state of the art in the field of automatic speech recognition (ASR). This dissertation describes the derivation of several complementary confidence measures from a so-called acceptor hidden Markov model (HMM) based large vocabulary continuous speech recognition system, and their application to a variety of tasks pertaining to ASR in realistic environments. A key contribution of the thesis is the demonstration that if a rather general definition of what constitutes a confidence measure is adopted, a framework results within which it is possible to explore the utility of confidence measures throughout the recognition process. This general definition accrues additional benefits when used in conjunction with a set of more specific confidence measure categories. The fundamental difference between an acceptor HMM and one which adheres to the more common generative formulation is the acceptor’s ability to directly estimate the posterior probability of a class of speech sound given some acoustic observations. Posterior class probabilities, unlike the class conditional likelihoods estimated by generative HMMs, provide measures of model match which are
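A posterior-based confidence measure of the general kind this abstract describes can be sketched as follows. The key property is that an acceptor model outputs P(class | x) directly, so the winning class's posterior is itself a calibrated score of model match. The function names and the choice of averaging log-posteriors over a word's frames are illustrative assumptions, not the thesis's specific measures.

```python
import numpy as np

def frame_confidence(posteriors):
    """Per-frame confidence: posterior probability of the winning class.
    posteriors: (T, n_classes), rows sum to 1."""
    return posteriors.max(axis=1)

def word_confidence(posteriors, frames):
    """Word-level confidence: average log-posterior of the best class
    over the frames spanned by the word (a slice or index array)."""
    p = frame_confidence(posteriors[frames])
    return float(np.mean(np.log(np.maximum(p, 1e-10))))
```

Sharply peaked posteriors give a word confidence near 0 (log of probabilities near 1), while flat, uncertain posteriors give a strongly negative score, which is exactly the "knowing what you don't know" signal a rejection threshold can act on.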