Regular Models of Phonological Rule Systems
Paper presented at Oxford University, 1988
Cited by 347 (5 self)
Abstract
This paper presents a set of mathematical and computational tools for manipulating and reasoning about regular languages and regular relations, and argues that they provide a solid basis for computational phonology. It shows in detail how this framework applies to ordered sets of context-sensitive rewriting rules and also to grammars in Koskenniemi's two-level formalism. This analysis provides a common representation of phonological constraints that supports efficient generation and recognition by a single simple interpreter.
One-Level Phonology: Autosegmental Representations and Rules as Finite Automata
Computational Linguistics, 1992
Cited by 41 (3 self)
Abstract
In this paper we present a finite-state model of phonology in which automata are the descriptions and tapes (or strings) are the objects being described. This provides the formal semantics for an autosegmental phonology without structure-changing rules. Logical operations on the phonological domain, such as conjunction, disjunction, and negation, make sense since the phonological domain consists of descriptions rather than objects. These operations as applied to automata are the straightforward operations of intersection, union, and complement. If the arrow in a rewrite rule is viewed as logical implication, then a phonological rule can also be represented as an automaton, albeit a less restrictive automaton than would be required for a lexical representation. The model is then compared with the transducer models for autosegmental phonology of Kay (1987), Kornai (1991), and Wiebe (1992). We conclude that the declarative approach to phonology presents an attractive way of extending finite-state techniques to autosegmental phonology while remaining within the confines of regular grammar.
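The correspondence between conjunction of descriptions and automaton intersection can be sketched with the standard product construction. The DFA class and the two toy constraints below are assumptions for the example, not code from the paper.

```python
from itertools import product

class DFA:
    """A complete deterministic finite automaton."""
    def __init__(self, states, alphabet, delta, start, accept):
        self.states, self.alphabet = states, alphabet
        self.delta, self.start, self.accept = delta, start, accept

    def accepts(self, word):
        q = self.start
        for sym in word:
            q = self.delta[(q, sym)]
        return q in self.accept

def intersect(a, b):
    """Product DFA: accepts the conjunction of the two descriptions."""
    states = set(product(a.states, b.states))
    delta = {((p, q), s): (a.delta[(p, s)], b.delta[(q, s)])
             for (p, q) in states for s in a.alphabet}
    accept = {(p, q) for (p, q) in states
              if p in a.accept and q in b.accept}
    return DFA(states, a.alphabet, delta, (a.start, b.start), accept)

# Constraint 1: the word contains at least one 'a'.
c1 = DFA({0, 1}, {'a', 'b'},
         {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 1}, 0, {1})
# Constraint 2: the word ends in 'b'.
c2 = DFA({0, 1}, {'a', 'b'},
         {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}, 0, {1})

both = intersect(c1, c2)
print(both.accepts("ab"))  # True: has an 'a' and ends in 'b'
print(both.accepts("bb"))  # False: no 'a'
```

Union and complement work analogously, which is why disjunction and negation of phonological descriptions stay within the regular languages.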
XUXEN: A Spelling Checker/Corrector for Basque Based on Two-Level Morphology
1992
Cited by 7 (4 self)
Abstract
The application of the formalism of two-level morphology to Basque and its use in the elaboration of the XUXEN spelling checker/corrector are described. This application is intended to cover a large part of the language. Because Basque is a highly inflected language, spelling checking and correction have been conceived as a byproduct of a general-purpose morphological analyzer/generator. This analyzer is taken as a basic tool for current and future work on the automatic processing of Basque. An extension of the continuation-class specifications is proposed in order to deal with long-distance dependencies. This extension consists basically of two features added to the standard formalism which allow the lexicon builder to make explicit the interdependencies of morphemes. User lexicons can be interactively enriched with new entries, enabling the checker from then on to recognize all the possible inflections derived from them. Due to a late process of standardization of the language, ...
A Short History of Two-Level Morphology
In Xerox Research, 2001
Cited by 4 (0 self)
Abstract
Twenty years ago, morphological analysis of natural language was a challenge to computational linguists. Simple cut-and-paste programs could be and were written to analyze strings in particular languages, but there was no general language-independent method available. Furthermore, cut-and-paste programs for analysis were not reversible; they could not be used to generate words. Generative phonologists of that time described morphological alternations by means of ordered rewrite rules, but it was not understood how such rules could be used for analysis. This was the situation in the spring of 1981 when Kimmo Koskenniemi came to a conference on parsing that Lauri Karttunen had organized at the University of Texas at Austin. Also at the same conference were two Xerox researchers from Palo Alto, Ronald M. Kaplan and Martin Kay. The four Ks discovered that all of them were interested in, and had been working on, the problem of morphological analysis. Koskenniemi went on to Palo Alto to visit Kay and Kaplan at PARC. This was the beginning of Two-Level Morphology, the first general model in the history of computational linguistics for the analysis and generation of morphologically complex languages. The language-specific components, the lexicon and the rules, were combined with a runtime engine applicable to all languages. In this article we trace the development of the finite-state technology that Two-Level Morphology is based on.

1 The Origins
Traditional phonological grammars, formalized in the 1960s by Noam Chomsky and Morris Halle (Chomsky and Halle, 1968), consisted of an ordered sequence of rewrite rules that converted abstract phonological representations into surface forms through a series of intermediate representations. Such rules have the general form x → y / z _ w, where x, y, z, and w can be arbitrarily complex strings or feature matrices. In mathematical linguistics (Partee et al., 1993), such rules are called CONTEXT-SENSITIVE REWRITE RULES, and they are more powerful than regular expressions or context-free rewrite rules.
Word Play
ACL Lifetime Achievement Award, 2006
Cited by 4 (1 self)
Abstract
This article is a perspective on some important developments in semantics and in computational linguistics over the past forty years. It reviews two lines of research that lie at opposite ends of the field: semantics and morphology. The semantic part deals with issues from the 1970s such as discourse referents, implicative verbs, presuppositions, and questions. The second part presents a brief history of the application of finite-state transducers to linguistic analysis, starting with the advent of two-level morphology in the early 1980s and culminating in successful commercial applications in the 1990s. It offers some commentary on the relationship, or the lack thereof, between computational and paper-and-pencil linguistics. The final section returns to the semantic issues and their application to currently popular tasks such as textual inference and question answering.
A Method for Compiling Two-Level Rules with Multiple Contexts
Abstract
A novel method is presented for compiling two-level rules which have multiple context parts. The same method can also be applied to the resolution of so-called right-arrow rule conflicts. The method makes use of the fact that one can efficiently compose sets of two-level rules with a lexicon transducer. By introducing variant characters and using simple preprocessing of multi-context rules, all rules can be reduced to single-context rules. After the modified rules have been combined with the lexicon transducer, the variant characters may be reverted back to the original surface characters. The proposed method appears to be efficient, but only partial evidence for this has been presented so far.
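The split-then-revert pipeline can be made concrete with a toy sketch. The rule representation (a lexical/surface/context triple) and the variant-character names below are assumptions for the example; the paper's actual rules are compiled finite-state transducers.

```python
# Assumed mapping from variant characters back to the surface character.
VARIANT_TO_SURFACE = {"y1": "y", "y2": "y"}

def split_multi_context(rule):
    """Split a rule with several context parts into single-context rules,
    each writing a distinct variant of the original surface character."""
    lex, surf, contexts = rule
    return [(lex, f"{surf}{i + 1}", ctx) for i, ctx in enumerate(contexts)]

def revert_variants(symbols):
    """After the rules have been combined with the lexicon, map the
    variant characters back to the original surface characters."""
    return [VARIANT_TO_SURFACE.get(s, s) for s in symbols]

# A rule x:y with two context parts becomes two single-context rules.
single = split_multi_context(("x", "y", ["a _", "b _"]))
print(single)                                   # [('x', 'y1', 'a _'), ('x', 'y2', 'b _')]
print(revert_variants(["a", "y1", "b", "y2"]))  # ['a', 'y', 'b', 'y']
```

The point of the trick is that single-context rules are easier to compile and conflict-check; the variants keep the two readings of the rule distinct until the final relabeling.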
Languages Generated by Two-Level Morphological Rules
Abstract
The two-level model of morphology and phonology arose from work on finite-state machine descriptions of phonological phenomena. However, the two-level rule notation can be given a precise declarative semantics in terms of the segmentation of sequences of pairs of symbols, quite independently of any computational representation as sets of finite-state transducers. Thus defined, the two-level model can be shown to be less powerful, in terms of weak generative capacity, than parallel intersections of arbitrary finite-state transducers without empty transitions (the usual computational representation). However, if a special boundary symbol is permitted, the full family of regular languages can be generated. Two-level morphological grammars may, without loss of generality, be written in a simplified normal form. The set of two-level generated languages can be shown to be closed under intersection, but not under union or complementation.

1. Background
Koskenniemi (1983a, 1983b, 1984) proposed a rule system for describing morphological regularities in a language, depending centrally on the idea of matching two sequences of symbols: a lexical string (made up of the lexical forms of morphemes) and a surface string (the sequence of characters in the normal, inflected form of the word).
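The matching of a lexical string against a surface string can be made concrete with a toy checker over symbol pairs. The single epenthesis-style rule and the examples below are illustrative assumptions, not Koskenniemi's formalism or implementation.

```python
# Toy two-level check: a word is a sequence of lexical:surface pairs.
# Assumed rule: the lexical boundary '+' must surface as 'e' between
# surface 'x' and surface 's' (a simplified "fox+s -> foxes" epenthesis),
# and as '0' (deletion) everywhere else.

def check_pairs(pairs):
    """Accept a pair sequence iff every '+' pair obeys the toy rule."""
    for i, (lex, surf) in enumerate(pairs):
        if lex == '+':
            left = pairs[i - 1][1] if i > 0 else '#'
            right = pairs[i + 1][1] if i + 1 < len(pairs) else '#'
            required = 'e' if (left == 'x' and right == 's') else '0'
            if surf != required:
                return False
    return True

# fox+s with '+' realized as 'e' between x and s: accepted.
good = [('f', 'f'), ('o', 'o'), ('x', 'x'), ('+', 'e'), ('s', 's')]
# Same lexical string with '+' deleted in that context: rejected.
bad  = [('f', 'f'), ('o', 'o'), ('x', 'x'), ('+', '0'), ('s', 's')]
print(check_pairs(good))  # True
print(check_pairs(bad))   # False
```

In the full model each rule is such a constraint on pair sequences, and a word is well-formed only if every rule accepts it simultaneously, which is what the parallel intersection of transducers computes.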