Results 1–10 of 10
Finite-State Speech-to-Speech Translation
, 1997
Abstract

Cited by 81 (15 self)
A fully integrated approach to Speech-Input Language Translation in limited-domain applications is presented. The mapping from the input to the output language is modeled in terms of a finite-state translation model which is learned from examples of input-output sentences of the task considered. This model is tightly integrated with standard acoustic-phonetic models of the input language, and the resulting global model directly supplies, through Viterbi search, an optimal output-language sentence for each input-language utterance. Several extensions to this framework, recently developed to cope with the increasing difficulty of translation tasks, are reviewed. Finally, results for a task in the framework of hotel front-desk communication, with a vocabulary of about 700 words, are reported.
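The decoding step this abstract describes, a single Viterbi search that reads the best translation directly off a weighted finite-state model, can be sketched in miniature. The data layout and function below are hypothetical illustrations, not the paper's system: a transducer is stored as arcs of (input word, output word, next state, cost), and the cheapest accepting path yields the output sentence.

```python
def viterbi_translate(arcs, start, finals, sentence):
    """Toy Viterbi search over a weighted finite-state transducer.

    arcs: dict mapping state -> list of (in_word, out_word, next_state, cost),
    where cost is a negative log probability. Returns the output-word
    sequence of the cheapest path that consumes `sentence` and ends in
    a final state, or None if no such path exists.
    """
    # best[state] = (accumulated cost, output words emitted so far)
    best = {start: (0.0, [])}
    for word in sentence:
        nxt = {}
        for state, (cost, out) in best.items():
            for (w_in, w_out, succ, arc_cost) in arcs.get(state, []):
                if w_in != word:
                    continue
                cand = (cost + arc_cost, out + ([w_out] if w_out else []))
                # keep only the cheapest hypothesis per successor state
                if succ not in nxt or cand[0] < nxt[succ][0]:
                    nxt[succ] = cand
        best = nxt
    done = [(c, o) for s, (c, o) in best.items() if s in finals]
    return min(done)[1] if done else None
```

For a toy transducer with arcs 0 -(una:a)-> 1 -(habitacion:room)-> 2 and final state 2, decoding ["una", "habitacion"] returns ["a", "room"].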
Machine Translation with Inferred Stochastic Finite-State Transducers
 COMPUTATIONAL LINGUISTICS
, 2004
Abstract

Cited by 79 (17 self)
Finite-state transducers are models that are being used in different areas of pattern recognition and computational linguistics. One of these areas is machine translation, in which the approaches that are based on building models automatically from training examples are becoming more and more attractive. Finite-state transducers are very adequate for use in constrained tasks in which training samples of pairs of sentences are available. A technique for inferring finite-state transducers is proposed in this article. This technique is based on formal relations between finite-state transducers and rational grammars. Given a training corpus of source-target pairs of sentences, the proposed approach uses statistical alignment methods to produce a set of conventional strings from which a stochastic rational grammar (e.g., an n-gram) is inferred. This grammar is finally converted into a finite-state transducer. The proposed methods are assessed through a series of machine translation experiments within the framework of the EuTrans project.
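The inference pipeline this abstract outlines (align sentence pairs, build conventional strings, infer an n-gram, reread it as a transducer) can be caricatured as follows. This is a rough sketch under a simplifying assumption the paper does not make, namely a monotone one-to-one alignment, so each source word pairs with exactly one target word to form an extended symbol; a bigram over those symbols is then a stochastic regular grammar whose transitions can be reinterpreted as transducer arcs.

```python
from collections import defaultdict

def joint_symbols(src, tgt):
    # Assumes len(src) == len(tgt) and a monotone alignment (toy case):
    # each extended symbol packs a source word with its aligned target word.
    return [f"{s}/{t}" for s, t in zip(src, tgt)]

def train_bigram(corpus):
    """Estimate bigram transition probabilities over extended symbols.

    corpus: list of (source_words, target_words) pairs. The result maps
    symbol a -> {symbol b: P(b | a)}; each entry can be read as a
    transducer arc from state a to state b.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for src, tgt in corpus:
        syms = ["<s>"] + joint_symbols(src, tgt) + ["</s>"]
        for a, b in zip(syms, syms[1:]):
            counts[a][b] += 1
    # normalise raw counts into conditional probabilities
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}
```

With one training pair (["una", "habitacion"], ["a", "room"]), the model assigns probability 1.0 to the arc from "&lt;s&gt;" to "una/a".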
Efficient Error-Correcting Viterbi Parsing
, 1998
Abstract

Cited by 9 (2 self)
The problem of Error-Correcting Parsing (ECP) using an insertion-deletion-substitution error model and a Finite-State Machine is examined. The Viterbi algorithm can be straightforwardly extended to perform ECP, though the resulting computational complexity can become prohibitive for many applications.
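The extension this abstract refers to amounts to edit-distance dynamic programming over the product of input positions and automaton states. The sketch below is a hypothetical illustration with unit insertion/deletion/substitution costs, not the paper's formulation: `dfa` maps (state, symbol) to the next state, and the function returns the minimum number of edits needed to turn the input into some string the automaton accepts.

```python
def ec_parse(dfa, start, finals, s):
    """Minimum edit distance from string s to the language of a DFA.

    dfa: dict mapping (state, symbol) -> next_state. Unit costs are
    assumed for insertions (automaton reads a symbol the input lacks),
    deletions (input symbol is skipped) and substitutions.
    """
    INF = float("inf")
    states = {start} | {p for (p, _) in dfa} | set(dfa.values())

    def insert_closure(d):
        # Insertions: relax d along automaton arcs (cost 1 each) to a
        # fixpoint; plain shortest-path relaxation, safe on cycles.
        changed = True
        while changed:
            changed = False
            for (p, a), q in dfa.items():
                if d[p] + 1 < d[q]:
                    d[q] = d[p] + 1
                    changed = True
        return d

    d = {q: INF for q in states}
    d[start] = 0
    d = insert_closure(d)
    for sym in s:
        nd = {q: d[q] + 1 for q in states}        # delete `sym` from input
        for (p, a), q in dfa.items():
            cost = 0 if a == sym else 1           # exact match or substitution
            nd[q] = min(nd[q], d[p] + cost)
        d = insert_closure(nd)
    return min(d[q] for q in finals)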
Learning Extended Finite-State Models for Language Translation
, 1996
Abstract
The use of Subsequential Transducers (a kind of Finite-State Model) in Automatic Translation applications is considered. A methodology that improves the performance of the learning algorithm by means of an automatic reordering of the output sentences is presented. This technique yields a greater degree of synchrony between the input and output samples. The proposed approach leads to a reduction in the number of samples necessary to learn the transducer and a reduction in the size of the model so obtained.
Transducer-Learning Experiments on Language Understanding
, 1998
Abstract
The interest in using Finite-State Models in a large variety of applications has recently been growing as more powerful techniques for learning them from examples have been developed. Language Understanding can be approached this way as a problem of language translation in which the target language is a formal language rather than a natural one. Finite-state transducers are used to model the translation process, and are automatically learned from training data consisting of pairs of natural-language/formal-language sentences. The need for training data is dramatically reduced by performing a two-level learning process based on lexical/phrase categorization. Successful experiments are presented on a task consisting of the "understanding" of Spanish natural-language sentences describing dates and times, where the target formal language is the one used in the popular Unix command "at".
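The lexical categorization level mentioned here can be illustrated with a toy preprocessor (the category lists below are invented for illustration, not the paper's lexicon): closed-class words are replaced by category labels before transducer learning, so one learned rule generalizes over every member of the class, and the memorized words can be restored after translation.

```python
# Hypothetical closed-class lexicon for a dates-and-times domain.
CATEGORIES = {
    "WEEKDAY": {"monday", "tuesday", "wednesday"},
    "HOUR": {"one", "two", "three"},
}

def categorize(sentence):
    """Replace closed-class words with category labels.

    Returns the categorized sentence plus the (label, word) pairs
    needed to expand the labels back after translation.
    """
    word_to_cat = {w: c for c, ws in CATEGORIES.items() for w in ws}
    out, memo = [], []
    for w in sentence.split():
        cat = word_to_cat.get(w)
        if cat:
            memo.append((cat, w))   # remember the word for later expansion
            out.append(cat)
        else:
            out.append(w)
    return " ".join(out), memo
```

On "see you monday at two" this yields "see you WEEKDAY at HOUR", so a transducer trained on categorized data needs only one sample to cover every weekday/hour combination.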
Learning Finite-State Models for Language Understanding
, 1998
Abstract
Language Understanding in limited domains is here approached as a problem of language translation in which the target language is a formal language rather than a natural one. Finite-state transducers are used to model the translation process. Furthermore, these models are automatically learned from training data consisting of pairs of natural-language/formal-language sentences. The need for training data is dramatically reduced by performing a two-step learning process based on lexical/phrase categorization. Successful experiments are presented on a task consisting of the understanding of Spanish natural-language sentences describing dates and times, where the target formal language is the one used in the popular Unix command "at".
Machine Translation with Inferred Stochastic Finite-State Transducers
© 2004 Association for Computational Linguistics
Abstract
Finite-state transducers are models that are being used in different areas of pattern recognition and computational linguistics. One of these areas is machine translation, in which the approaches that are based on building models automatically from training examples are becoming more and more attractive. Finite-state transducers are very adequate for use in constrained tasks in which training samples of pairs of sentences are available. A technique for inferring finite-state transducers is proposed in this article. This technique is based on formal relations between finite-state transducers and rational grammars. Given a training corpus of source-target pairs of sentences, the proposed approach uses statistical alignment methods to produce a set of conventional strings from which a stochastic rational grammar (e.g., an n-gram) is inferred. This grammar is finally converted into a finite-state transducer. The proposed methods are assessed through a series of machine translation experiments within the framework of the EuTrans project.
unknown title
Abstract
Reducing the time complexity of testing for local threshold testability A.N. Trahtman
Learning Regular Grammars to Model Musical Style: Comparing Different Coding Schemes.
, 1998
Abstract
An application of Grammatical Inference (GI) in the field of Music Processing is presented, where Regular Grammars are used for modeling musical style. The interest in modeling musical style resides in the use of these models in applications such as Automatic Composition and Automatic Musical Style Recognition. We have studied three GI algorithms which have previously been applied successfully in other fields. In this work, these algorithms have been used to learn a stochastic grammar for each of three different musical styles from examples of melodies. Then, each of the learned grammars was used to stochastically synthesize new melodies (Composition) or to classify test melodies (Style Recognition). Our previous studies in this field showed the need for a proper music coding scheme. Different coding schemes are presented and compared according to results in Composition and Style Recognition. Results from previous studies have been improved.
On the Estimation of Error-Correcting Parameters
, 2000
Abstract
Error-Correcting (EC) techniques allow for coping with divergences in pattern strings with regard to their "standard" form as represented by the language L accepted by a regular or context-free grammar. There are two main types of EC parsers: minimum-distance and stochastic. The latter apply the maximum-likelihood rule: the string representing an unknown pattern is classified into the class of the strings in L that have the greatest probability given it. Stochastic models are important in pattern recognition if good estimations for their parameters are provided. The problem of parameter estimation has been well studied for stochastic grammars, but this is not the case for EC parameters. This work is aimed at providing solutions to solve it adequately.