Results 1 – 5 of 5
On the Determinization of Weighted Finite Automata
SIAM J. Comput., 1998
Abstract

Cited by 18 (0 self)
We study determinization of weighted finite-state automata (WFAs), which has important applications in automatic speech recognition (ASR). We provide the first polynomial-time algorithm to test for the twins property, which determines whether a WFA admits a deterministic equivalent. We also provide a rigorous analysis of a determinization algorithm of Mohri, with tight bounds for acyclic WFAs. Given that WFAs can expand exponentially when determinized, we explore why those used in ASR tend to shrink. The folklore explanation is that ASR WFAs have an acyclic, multipartite structure. We show, however, that there exist such WFAs that always incur exponential expansion when determinized. We then introduce a class of WFAs, also with this structure, whose expansion depends on the weights: some weightings cause them to shrink, while others, including random weightings, cause them to expand exponentially. We provide experimental evidence that ASR WFAs exhibit this weight dependence.
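The determinization procedure analyzed in this abstract can be illustrated as a weighted subset construction over the tropical (min, +) semiring, in the spirit of Mohri's algorithm: each deterministic state is a set of (state, residual-weight) pairs. The sketch below is a hypothetical minimal version (no epsilon arcs, all final weights zero; names are illustrative, not from the paper):

```python
from collections import defaultdict

def determinize_tropical(arcs, start, finals):
    """Weighted subset construction over the tropical (min, +) semiring,
    in the spirit of Mohri's determinization. Minimal sketch: no epsilon
    arcs, all final weights zero.

    arcs: list of (src, symbol, weight, dst) tuples.
    Returns (det_arcs, det_start, det_finals)."""
    trans = defaultdict(list)                 # (state, symbol) -> [(weight, dst)]
    symbols = set()
    for q, a, w, r in arcs:
        trans[(q, a)].append((w, r))
        symbols.add(a)

    det_start = frozenset({(start, 0.0)})     # pairs (state, residual weight)
    det_arcs, det_finals = [], set()
    stack, seen = [det_start], {det_start}
    while stack:
        S = stack.pop()
        if any(q in finals for q, _ in S):
            det_finals.add(S)
        for a in symbols:
            cands = [(v + w, r) for q, v in S for w, r in trans.get((q, a), [])]
            if not cands:
                continue
            w_min = min(c for c, _ in cands)  # weight of the deterministic arc
            residual = {}                     # leftover weight carried per state
            for c, r in cands:
                residual[r] = min(residual.get(r, float("inf")), c - w_min)
            T = frozenset(residual.items())
            det_arcs.append((S, a, w_min, T))
            if T not in seen:
                seen.add(T)
                stack.append(T)
    return det_arcs, det_start, det_finals
```

On the two-path WFA {0 -a/1-> 1 -b/3-> 3, 0 -a/2-> 2 -b/3-> 3} this yields a two-arc deterministic machine whose unique accepting path has weight 4 = min(1+3, 2+3); on WFAs lacking the twins property the construction need not terminate, and the growth of the residual sets is exactly the expansion the abstract studies.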
Speeding Up HMM Decoding and Training by Exploiting Sequence Repetitions
Abstract

Cited by 12 (5 self)
We present a method to speed up the dynamic programming algorithms used for solving the HMM decoding and training problems for discrete time-independent HMMs. We discuss the application of our method to Viterbi’s decoding and training algorithms [33], as well as to the forward-backward and Baum-Welch [6] algorithms. Our approach is based on identifying repeated substrings in the observed input sequence. Initially, we show how to exploit repetitions of all sufficiently small substrings (this is similar to the Four Russians method). Then, we describe four algorithms based alternatively on run-length encoding (RLE), Lempel-Ziv (LZ78) parsing, grammar-based compression (SLP), and byte pair encoding (BPE). Compared to Viterbi’s algorithm, we achieve speedups of Θ(log n) using the Four Russians method, Ω(r / log r) using RLE, Ω(log n / k) using LZ78, Ω(r / k) using SLP, and Ω(r) using BPE, where k is the number of hidden states, n is the length of the observed sequence, and r is its compression ratio (under each compression scheme). Our experimental results demonstrate that our new algorithms are indeed faster in practice. Furthermore, unlike Viterbi’s algorithm, our algorithms are highly parallelizable.
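The run-length idea rests on the observation that one Viterbi step is a matrix-vector product in the (min, +) semiring, so a run of r identical symbols can be collapsed into an r-th (min, +) matrix power computed by repeated squaring. Below is a minimal Python sketch of that idea over negative log-probability costs; it is in the spirit of the paper's RLE algorithm, not a reproduction of it, and all names are illustrative:

```python
import math
from itertools import groupby

def minplus_mul(A, B):
    """Matrix product in the (min, +) semiring."""
    k = len(A)
    return [[min(A[i][t] + B[t][j] for t in range(k)) for j in range(k)]
            for i in range(k)]

def minplus_pow(M, r):
    """r-th (min, +) power of M by repeated squaring: O(k^3 log r)."""
    k = len(M)
    R = [[0.0 if i == j else math.inf for j in range(k)] for i in range(k)]
    while r:
        if r & 1:
            R = minplus_mul(R, M)
        M = minplus_mul(M, M)
        r >>= 1
    return R

def viterbi_cost_rle(obs, trans_cost, emit_cost, init_cost):
    """Optimal Viterbi *cost* (negative log-probabilities), processed run by
    run: a run of r equal symbols becomes one (min, +) matrix power,
    O(k^3 log r) instead of O(k^2 r). Minimal sketch: returns the cost
    only, no back-pointers; assumes obs is nonempty."""
    k = len(init_cost)
    # first observation: start cost plus its emission cost
    d = [init_cost[j] + emit_cost[j][obs[0]] for j in range(k)]
    for sym, run in groupby(obs[1:]):
        r = sum(1 for _ in run)
        # one decoding step for `sym`, written as a (min, +) matrix
        M = [[trans_cost[i][j] + emit_cost[j][sym] for j in range(k)]
             for i in range(k)]
        P = minplus_pow(M, r)                 # r identical steps at once
        d = [min(d[i] + P[i][j] for i in range(k)) for j in range(k)]
    return min(d)
```

This only pays off when runs are long relative to the number of states k; the paper's algorithms go further (exploiting LZ78/SLP/BPE phrase repeats and recovering the state path), but the semiring-matrix view is the common core.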
Accelerating Dynamic Programming
, 2009
Abstract

Cited by 1 (0 self)
Dynamic Programming (DP) is a fundamental problem-solving technique that has been widely used for solving a broad range of search and optimization problems. While DP can be invoked when more specialized methods fail, this generality often incurs a cost in efficiency. We explore a unifying toolkit for speeding up DP, and algorithms that use DP as subroutines. Our methods and results can be summarized as follows.
– Acceleration via Compression. Compression is traditionally used to efficiently store data. We use compression in order to identify repeats in the table that imply a redundant computation. Utilizing these repeats requires a new DP, and often different DPs for different compression schemes. We present the first provable speedup of the celebrated Viterbi algorithm (1967) that is used for the decoding and training of Hidden Markov Models (HMMs). Our speedup relies on the compression of the HMM’s observable sequence.
– Totally Monotone Matrices. It is well known that a wide variety of DPs can be reduced to the problem of finding row minima in totally monotone matrices. We introduce this scheme in the context of planar graph problems. In particular, we show that planar graph problems ...
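The row-minima problem for totally monotone matrices mentioned in this abstract admits a simple divide-and-conquer that exploits the fact that leftmost argmin columns are nondecreasing down the rows. The sketch below is that standard recursion (SMAWK improves it to O(n + m) evaluations), with the matrix given implicitly as a function; the names are illustrative, not from the paper:

```python
def monotone_row_minima(n, m, A):
    """Column indices of the row minima of an implicit n x m matrix whose
    leftmost-argmin column is nondecreasing down the rows (total
    monotonicity implies this). Divide and conquer: O((n + m) log n)
    evaluations of A, vs O(n * m) naively; SMAWK achieves O(n + m).
    A is a function A(i, j), so DP cost matrices need not be materialized."""
    res = [0] * n

    def solve(r_lo, r_hi, c_lo, c_hi):
        if r_lo > r_hi:
            return
        mid = (r_lo + r_hi) // 2
        # scan the middle row within the allowed column window;
        # the (value, column) key picks the *leftmost* minimum
        best = min(range(c_lo, c_hi + 1), key=lambda j: (A(mid, j), j))
        res[mid] = best
        solve(r_lo, mid - 1, c_lo, best)   # rows above: argmin <= best
        solve(mid + 1, r_hi, best, c_hi)   # rows below: argmin >= best

    solve(0, n - 1, 0, m - 1)
    return res
```

For the Monge matrix A(i, j) = (j - i)^2, for example, the row minima lie on the diagonal and the function returns [0, 1, ..., n-1].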
On Reduction via Determinization of Speech-Recognition Lattices
, 1997
Abstract
We establish a framework for studying the behavior of automatic speech recognition (ASR) lattices (viewed as automata) undergoing determinization. Using this framework, we provide initial insights into what causes determinization to produce smaller (or bigger) lattices when used in the ASR application. Our results counter the prevailing wisdom that the graph topology underlying an automaton, not the weights on the arcs, governs deterministic expansion. We show that there are graphs that expand solely due to their weights when determinized; i.e., we demonstrate graphs that expand under some weightings yet contract under others. Furthermore, we give evidence that the automata that arise in ASR are either the kind that never expand or else the weightdependent kind; i.e., we do not find in ASR any instances of automata that always expand under determinization. Therefore, understanding what causes weight dependence becomes essential to providing tools to avoid deterministic expansion in AS...
Adverse Conditions and ASR Techniques for Robust Speech User Interface
Abstract
The main motivation for Automatic Speech Recognition (ASR) is efficient interfaces to computers, and for the interfaces to be natural and truly useful, they should provide coverage for a large group of users. The purpose of these tasks is to further improve man-machine communication. ASR systems exhibit unacceptable degradations in performance when the acoustical environments used for training and testing the system are not the same. The goal of this research is to increase the robustness of speech recognition systems with respect to changes in the environment. A system can be labeled as environment-independent if the recognition accuracy for a new environment is the same as or higher than that obtained when the system is retrained for that environment. Attaining such performance remains a long-standing goal of the field. This paper elaborates some of the difficulties with Automatic Speech Recognition (ASR). These difficulties are classified into speaker characteristics and environmental conditions, and we suggest some techniques to compensate for variations in the speech signal. This paper focuses on robustness with respect to speakers’ variations and changes in the acoustical environment. We discuss several external factors that change the environment, and physiological differences that affect the performance of a speech recognition system, followed by techniques that help in designing a robust ASR system.