Results 1–10 of 51
Two decades of statistical language modeling: Where do we go from here?
Proceedings of the IEEE, 2000
Abstract

Cited by 192 (1 self)
Statistical Language Models estimate the distribution of various natural language phenomena for the purpose of speech recognition and other language technologies. Since the first significant model was proposed in 1980, many attempts have been made to improve the state of the art. We review them here, point to a few promising directions, and argue for a Bayesian approach to integration of linguistic theories with data.

1. OUTLINE

Statistical language modeling (SLM) is the attempt to capture regularities of natural language for the purpose of improving the performance of various natural language applications. By and large, statistical language modeling amounts to estimating the probability distribution of various linguistic units, such as words, sentences, and whole documents. Statistical language modeling is crucial for a large variety of language technology applications. These include speech recognition (where SLM got its start), machine translation, document classification and routing, optical character recognition, information retrieval, handwriting recognition, spelling correction, and many more. In machine translation, for example, purely statistical approaches have been introduced in [1]. But even researchers using rule-based approaches have found it beneficial to introduce some elements of SLM and statistical estimation [2]. In information retrieval, a language modeling approach was recently proposed by [3], and a statistical/information-theoretical approach was developed by [4]. SLM employs statistical estimation techniques using language training data, that is, text. Because of the categorical nature of language, and the large vocabularies people naturally use, statistical techniques must estimate a large number of parameters, and consequently depend critically on the availability of large amounts of training data.
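As a concrete illustration of "estimating the probability distribution of various linguistic units", here is a minimal sketch of a maximum-likelihood bigram model; the toy corpus and function name are invented for illustration, and real systems would add smoothing for unseen pairs:

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Maximum-likelihood bigram model: P(w2 | w1) = count(w1 w2) / count(w1)."""
    contexts = Counter(tokens[:-1])  # every token that has a successor
    bigrams = Counter(zip(tokens, tokens[1:]))
    probs = defaultdict(dict)
    for (w1, w2), c in bigrams.items():
        probs[w1][w2] = c / contexts[w1]
    return probs

corpus = "the cat sat on the mat the cat ran".split()
model = train_bigram(corpus)
print(model["the"])  # P('cat' | 'the') = 2/3, P('mat' | 'the') = 1/3
```

The parameter-count problem the abstract mentions is visible even here: with a vocabulary of size V, a bigram table has up to V² entries, which is why large training corpora (and smoothing) matter.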
Markovian Models for Sequential Data
1996
Abstract

Cited by 109 (2 self)
Hidden Markov Models (HMMs) are statistical models of sequential data that have been used successfully in many machine learning applications, especially for speech recognition. Furthermore, in the last few years, many new and promising probabilistic models related to HMMs have been proposed. We first summarize the basics of HMMs, and then review several recent related learning algorithms and extensions of HMMs, including in particular hybrids of HMMs with artificial neural networks, Input-Output HMMs (which are conditional HMMs using neural networks to compute probabilities), weighted transducers, variable-length Markov models, and Markov-switching state-space models. Finally, we discuss some of the challenges of future research in this very active area.

1 Introduction

Hidden Markov Models (HMMs) are statistical models of sequential data that have been used successfully in many applications in artificial intelligence, pattern recognition, speech recognition, and modeling of biological ...
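The basics of HMMs that this paper summarizes can be illustrated by the standard forward algorithm, which computes the likelihood of an observation sequence; the two-state, two-symbol model below is invented for illustration, not taken from the paper:

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward algorithm: likelihood of an observation sequence under an HMM.
    pi: initial state probs (N,); A: transitions (N, N); B: emissions (N, M)."""
    alpha = pi * B[:, obs[0]]          # joint prob of state and first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
    return alpha.sum()                 # marginalize over the final state

# Toy two-state HMM with two observable symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(forward(pi, A, B, [0, 1, 0]))  # probability of observing 0, 1, 0
```

The same recursion underlies the Baum-Welch training and Viterbi decoding steps that the reviewed extensions build on.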
Learning Variable Length Markov Models of Behaviour
Computer Vision and Image Understanding, 2001
Abstract

Cited by 71 (4 self)
In recent years there has been an increased interest in the modelling and recognition of human activities involving highly structured and semantically rich behaviour such as dance, aerobics, and sign language. A novel approach is presented for automatically acquiring stochastic models of the high-level structure of an activity without the assumption of any prior knowledge. The process involves temporal segmentation into plausible atomic behaviour components and the use of variable-length Markov models for the efficient representation of behaviours. Experimental results are presented which demonstrate the synthesis of realistic sample behaviours and the performance of models for long-term temporal prediction. Keywords: modelling behaviour, behaviour prediction, behaviour synthesis, variable-length Markov models, Markov models, N-grams, hidden Markov models, probabilistic finite-state automata, statistical grammars, computer animation.
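The core idea of the variable-length Markov models used here is to condition each prediction on the longest context that has actually been observed, rather than on a fixed-order history. A toy sketch of that matching rule (the training string is invented; the paper applies this to sequences of atomic behaviour components):

```python
from collections import Counter

def vlmm_predict(history, training, max_depth=4):
    """Next-symbol distribution from the longest suffix of `history`
    (up to max_depth) that occurs at least once in `training`."""
    for d in range(min(max_depth, len(history)), 0, -1):
        ctx = tuple(history[-d:])
        nxt = Counter(training[i + d]
                      for i in range(len(training) - d)
                      if tuple(training[i:i + d]) == ctx)
        if nxt:  # longest matching context wins
            total = sum(nxt.values())
            return {s: c / total for s, c in nxt.items()}
    counts = Counter(training)  # no context matched: unigram fallback
    return {s: c / len(training) for s, c in counts.items()}

seq = list("abcabcabd")
print(vlmm_predict(list("ab"), seq))  # context 'ab' seen thrice: 'c' twice, 'd' once
```

Because the context length adapts to the data, common sub-behaviours get deep, specific contexts while rare ones fall back to shorter ones, which is what makes the representation efficient.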
Architectural Bias in Recurrent Neural Networks: Fractal Analysis
IEEE Transactions on Neural Networks
Abstract

Cited by 42 (9 self)
We have recently shown that when initialized with "small" weights, recurrent neural networks (RNNs) with standard sigmoid-type activation functions are inherently biased towards Markov models, i.e. even prior to any training, RNN dynamics can be readily used to extract finite memory machines (Hammer & Tino, 2002; Tino, Cernansky & Benuskova, 2002; Tino, Cernansky & Benuskova, 2002a). Following Christiansen and Chater (1999), we refer to this phenomenon as the architectural bias of RNNs. In this paper we further extend our work on the architectural bias in RNNs by performing a rigorous fractal analysis of recurrent activation patterns. We assume the network is driven by sequences obtained by traversing an underlying finite-state transition diagram, a scenario that has been frequently considered in the past, e.g. when studying RNN-based learning and implementation of regular grammars and finite-state transducers. We obtain lower and upper bounds on various types of fractal dimensions, such as box-counting and Hausdorff dimensions. It turns out that not only can the recurrent activations inside RNNs with small initial weights be exploited to build Markovian predictive models, but the activations also form fractal clusters whose dimension can be bounded by the scaled entropy of the underlying driving source. The scaling factors are fixed and are given by the RNN parameters.
Predicting the Future of Discrete Sequences From Fractal Representations of the Past
2001
Abstract

Cited by 36 (11 self)
We propose a novel approach for building finite memory predictive models similar in spirit to variable memory length Markov models (VLMMs). The models are constructed by first transforming the n-block structure of the training sequence into a geometric structure of points in a unit hypercube, such that the longer the common suffix shared by any two n-blocks, the closer their point representations lie.
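The geometric construction this abstract describes can be sketched with a one-dimensional iterated-map encoding over a two-symbol alphabet; the corner placement and contraction ratio k = 0.5 below are illustrative assumptions, not the paper's exact parameters:

```python
def fractal_encode(block, corners, k=0.5):
    """Start at the centre of the unit interval and, for each symbol,
    move fraction k toward that symbol's corner. The last symbols
    dominate the final position, so blocks sharing a long common
    suffix land close together."""
    x = 0.5
    for s in block:
        x = (1 - k) * x + k * corners[s]
    return x

corners = {"a": 0.0, "b": 1.0}
for blk in ("aab", "bab", "abb"):
    print(blk, fractal_encode(blk, corners))
# "aab" and "bab" share the suffix "ab" and lie closer to each other
# than either lies to "abb", which shares only the suffix "b".
```

Nearest-neighbour structure in this point cloud then plays the role of the suffix tree in a conventional VLMM.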
Modeling Interaction Using Learnt Qualitative Spatio-Temporal Relations and Variable Length Markov Models
Proc. of the 15th European Conference on Artificial Intelligence, 2002
Abstract

Cited by 31 (6 self)
Motivated by applications such as automated visual surveillance and video monitoring and annotation, there has been a lot of interest in constructing cognitive vision systems capable of interpreting the high-level semantics of dynamic scenes. In this paper we present a novel approach for automatically inferring models of object interactions that can be used to interpret observed behaviour within a scene. A real-time low-level computer vision system, together with an attentional control mechanism, is used to identify incidents or events that occur in the scene. A data-driven approach has been taken in order to automatically infer discrete and abstract representations (symbols) of primitive object interactions; effectively, the system learns a set of qualitative spatial relations relevant to the dynamic behaviour of the domain. These symbols then form the alphabet of a VLMM which automatically infers the high-level structure of typical interactive behaviour. The learnt behaviour model has generative capabilities and is also capable of recognizing typical or atypical activities within a scene. Experiments have been performed within the traffic monitoring domain; however, the proposed method is applicable to the general automatic surveillance task since it does not assume a priori knowledge of a specific domain.
Towards an Architecture for Cognitive Vision using Qualitative Spatio-Temporal Representations and Abduction
In Spatial Cognition III, 2002
Abstract

Cited by 27 (1 self)
In recent years there has been increasing interest in constructing cognitive vision systems capable of interpreting the high-level semantics of dynamic scenes. Purely quantitative approaches to the task of constructing such systems have met with some success. However, qualitative analysis of dynamic scenes has the advantage of allowing easier generalisation of classes of different behaviours and guarding against the propagation of errors caused by uncertainty and noise in the quantitative data. Our aim is to integrate quantitative and qualitative modes of representation and reasoning for the analysis of dynamic scenes. In particular, in this paper we outline an approach for constructing cognitive vision systems using qualitative spatio-temporal representations, including prototypical spatial relations and spatio-temporal event descriptors automatically inferred from input data. The overall architecture relies on abduction: the system searches for explanations, phrased in terms of the learned spatio-temporal event descriptors, to account for the video data.
Handwriting Synthesis From Handwritten Glyphs
In Proceedings of the Fifth International Workshop on Frontiers of Handwriting Recognition, 1996
Abstract

Cited by 26 (0 self)
We present a very straightforward approach to the problem of handwriting synthesis. A writer provides, once, a set of handwritten samples of the most frequent groups of letters found in natural text (glyphs). Our program can then translate typed notes into natural-looking handwritten notes from that writer. The method consists of assembling glyphs in a way that minimizes the total number of glyphs in each sentence.

1 Introduction

Considerable effort has been focused in the past few years on the problem of handwriting recognition [1]. Pen computers have raised the hope that handwriting recognition might substitute for keyboards as a computer interface. Yet, many computer users find that keyboards are more efficient than handwriting and will not revert to using handwriting. Some users type almost everything they write because it is faster than handwriting. Others prefer typing because their handwriting is illegible. In some cases, handwriting is preferable to typed text: it adds ...
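Minimizing the total number of glyphs per sentence is a shortest-segmentation problem, solvable by dynamic programming over text prefixes. A small sketch of that formulation (the glyph inventory is invented, and the paper's actual assembly procedure may differ):

```python
def segment(text, glyphs):
    """Split `text` into the fewest pieces such that every piece is an
    available glyph; returns None if no full cover exists."""
    best = {0: []}  # best[i]: minimal glyph list covering text[:i]
    for i in range(1, len(text) + 1):
        for j in range(i):
            piece = text[j:i]
            if piece in glyphs and j in best:
                cand = best[j] + [piece]
                if i not in best or len(cand) < len(best[i]):
                    best[i] = cand
    return best.get(len(text))

glyphs = {"th", "e", "the", "re", "her", "t", "h", "r"}
print(segment("there", glyphs))  # two glyphs beat any longer split
```

Fewer joins means fewer visible seams between samples, which is presumably why glyph count is the quantity being minimized.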
Recurrent Neural Networks With Small Weights Implement Definite Memory Machines
Neural Computation, 2003
Abstract

Cited by 24 (6 self)
Recent experimental studies indicate that recurrent neural networks initialized with 'small' weights are inherently biased towards definite memory machines (Tino, Cernansky, Benuskova, 2002a; Tino, Cernansky, Benuskova, 2002b). This paper establishes a theoretical counterpart: the transition function of a recurrent network with small weights and a 'squashing' activation function is a contraction. We prove that recurrent networks with a contractive transition function can be approximated arbitrarily well on input sequences of unbounded length by a definite memory machine ...
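The contraction property can be checked numerically: with small recurrent weights and a squashing nonlinearity, hidden states started from very different initial conditions converge once driven by the same inputs, so only a bounded suffix of the input sequence influences the state, which is exactly the definite-memory behaviour. A small NumPy sketch (the dimensions, weight scale, and seed are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W_h = rng.normal(scale=0.05, size=(8, 8))  # "small" recurrent weights
W_x = rng.normal(scale=0.05, size=(8, 3))  # input weights

def step(h, x):
    """One RNN step with a tanh squashing function; a small-norm W_h
    makes this map a contraction in the hidden state."""
    return np.tanh(W_h @ h + W_x @ x)

# Drive the network from two very different initial states with the
# SAME input sequence: the state trajectories merge, i.e. the distant
# past is forgotten.
h1, h2 = np.ones(8), -np.ones(8)
for _ in range(30):
    x = rng.normal(size=3)
    h1, h2 = step(h1, x), step(h2, x)
print(np.linalg.norm(h1 - h2))  # vanishingly small
```

The per-step shrinkage factor here is roughly the spectral norm of W_h (tanh has slope at most 1), so the gap between the two trajectories decays geometrically with sequence length.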