Results 1–10 of 22
Semiring Parsing
 Computational Linguistics
, 1999
Abstract

Cited by 80 (1 self)
this paper is that all five of these commonly computed quantities can be described as elements of complete semirings (Kuich 1997). The relationship between grammars and semirings was discovered by Chomsky and Schützenberger (1963), and for parsing with the CKY algorithm, dates back to Teitelbaum (1973). A complete semiring is a set of values over which a multiplicative operator and a commutative additive operator have been defined, and for which infinite summations are defined. For parsing algorithms satisfying certain conditions, the multiplicative and additive operations of any complete semiring can be used in place of × and +, and correct values will be returned. We will give a simple normal form for describing parsers, then precisely define complete semirings, and the conditions for correctness.
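The idea behind semiring parsing can be sketched concretely: a CKY recognizer written against an abstract semiring computes the inside probability with (+, ×) and the best-derivation (Viterbi) score with (max, ×), simply by swapping the two operations. The grammar, function, and names below are illustrative, not from the paper.

```python
# Minimal sketch of semiring-parameterized CKY over a toy PCFG in
# Chomsky normal form. A semiring is given as (plus, times, zero, one).
INSIDE  = (lambda a, b: a + b, lambda a, b: a * b, 0.0, 1.0)  # sum over derivations
VITERBI = (max,                lambda a, b: a * b, 0.0, 1.0)  # best single derivation

def cky(words, unary, binary, start, semiring):
    plus, times, zero, one = semiring
    n = len(words)
    # chart[i][j] maps a nonterminal to its semiring value over span (i, j)
    chart = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        for (lhs, word), p in unary.items():
            if word == w:
                chart[i][i + 1][lhs] = plus(chart[i][i + 1].get(lhs, zero), p)
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):
                for (lhs, b, c), p in binary.items():
                    if b in chart[i][k] and c in chart[k][j]:
                        v = times(p, times(chart[i][k][b], chart[k][j][c]))
                        chart[i][j][lhs] = plus(chart[i][j].get(lhs, zero), v)
    return chart[0][n].get(start, zero)

# Toy grammar: S -> A A (1.0); A -> "a" (0.6) | "b" (0.4)
unary = {("A", "a"): 0.6, ("A", "b"): 0.4}
binary = {("S", "A", "A"): 1.0}
inside_score  = cky(["a", "b"], unary, binary, "S", INSIDE)   # inside probability
viterbi_score = cky(["a", "b"], unary, binary, "S", VITERBI)  # best derivation
```

Because "a b" has a single derivation here, both semirings return the same value (1.0 × 0.6 × 0.4); on an ambiguous sentence the inside semiring would sum over derivations while the Viterbi semiring would take the maximum.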
Efficient Probabilistic Top-Down and Left-Corner Parsing
 In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics
, 1999
Abstract

Cited by 37 (4 self)
This paper examines efficient predictive broad-coverage parsing without dynamic programming. In contrast to bottom-up methods, depth-first top-down parsing produces partial parses that are fully connected trees spanning the entire left context, from which any kind of non-local dependency or partial semantic interpretation can in principle be read. We contrast two predictive parsing approaches, top-down and left-corner parsing, and find both to be viable. In addition, we find that enhancement with non-local information not only improves parser accuracy, but also substantially improves the search efficiency.
Robust probabilistic predictive syntactic processing: Motivations, models, and applications. Doctoral dissertation, Brown University. (UMI: AAT 3006783)
 Carnegie Mellon University
, 2001
Abstract

Cited by 15 (0 self)
2001 This thesis by Brian E. Roark is accepted in its present form by
On Computational Models of the Evolution of Music: From the Origins of Musical Taste to the Emergence of Grammars
, 2003
Abstract

Cited by 13 (1 self)
Evolutionary computing is a powerful tool for studying the origins and evolution of music. In this case, music is studied as an adaptive complex dynamic system, and its origins and evolution are studied in the context of the cultural conventions that may emerge under a number of constraints (e.g. psychological, physiological and ecological). This paper introduces three case studies of evolutionary modelling of music. It begins with a model for studying the role of mating-selective pressure in the evolution of musical taste. Here the agents evolve “courting tunes” in a society of “male” composers and “female” critics. Next, a mimetic model is introduced to study the evolution of musical expectation in a community of autonomous agents furnished with a vocal synthesizer, a hearing system and memory. Finally, an iterated learning model is proposed for studying the evolution of compositional grammars. In this case, the agents evolve grammars for composing music to express a set of emotions.
Maptool: Supporting Modular Syntax Development
 Proceedings, Compiler Construction (CC’96), volume 1060 of LNCS
, 1996
Abstract

Cited by 10 (1 self)
In building textual translators, implementors often distinguish between a concrete syntax and an abstract syntax. The concrete syntax describes the phrase structure of the input language, and the abstract syntax describes a tree structure that can be used as the basis for performing semantic computations. Having two grammars imposes the requirement that there exist a mapping from the concrete syntax to the abstract syntax. The research presented in this paper led to a tool, called Maptool, that is designed to simplify the development of the two grammars. Maptool supports a modular approach to syntax development that mirrors the modularity found in semantic computations. This is done by allowing users to specify each of the syntaxes only partially, as long as the sum of the fragments allows deduction of the complete syntaxes. Keywords: abstract syntax, concrete syntax, modularity, parsing grammar, syntax development, syntax mapping, tree construction
Empirical risk minimization for probabilistic grammars: Sample complexity and hardness of learning
 Computational Linguistics
, 2012
Abstract

Cited by 7 (5 self)
Probabilistic grammars are generative statistical models that are useful for compositional and sequential structures. They are used ubiquitously in computational linguistics. We present a framework, reminiscent of structural risk minimization, for empirical risk minimization of probabilistic grammars using the log-loss. We derive sample complexity bounds in this framework that apply both to the supervised setting and the unsupervised setting. By making assumptions about the underlying distribution that are appropriate for natural language scenarios, we are able to derive distribution-dependent sample complexity bounds for probabilistic grammars. We also give simple algorithms for carrying out empirical risk minimization using this framework in both the supervised and unsupervised settings. In the unsupervised case, we show that the problem of minimizing empirical risk is NP-hard. We therefore suggest an approximate algorithm, similar to expectation-maximization, to minimize the empirical risk. Learning from data is central to contemporary computational linguistics. It is common in such learning to estimate a model in a parametric family using the maximum likelihood principle. This principle applies in the supervised case (i.e., using annotated ...
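For the supervised setting the abstract mentions, minimizing empirical risk under log-loss has a well-known closed form: relative-frequency estimation of rule probabilities from an annotated treebank. A minimal sketch, where the (lhs, rhs) rule representation is an assumption for illustration:

```python
from collections import Counter, defaultdict

def mle_rule_probs(observed_rules):
    """Relative-frequency estimates: the empirical-risk minimizer
    under log-loss for a probabilistic grammar in the supervised case.

    observed_rules: iterable of (lhs, rhs) rule occurrences read off
    the derivations in an annotated treebank.
    """
    counts = Counter(observed_rules)
    lhs_totals = defaultdict(int)
    for (lhs, _), c in counts.items():
        lhs_totals[lhs] += c
    # Normalize per left-hand side so each nonterminal's rules sum to 1.
    return {rule: c / lhs_totals[rule[0]] for rule, c in counts.items()}

# Rule occurrences from toy annotated derivations.
rules = [("S", ("A", "A")), ("A", ("a",)), ("A", ("a",)), ("A", ("b",))]
probs = mle_rule_probs(rules)  # e.g. P(A -> a) = 2/3
```

In the unsupervised case no such closed form exists; as the abstract notes, minimizing the empirical risk there is NP-hard, which is why an EM-like approximation is used instead.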
Probabilistic parsing strategies
 In 42nd Annual Meeting of the Association for Computational Linguistics
, 2004
Abstract

Cited by 6 (1 self)
We present new results on the relation between purely symbolic context-free parsing strategies and their probabilistic counterparts. Such parsing strategies are seen as constructions of pushdown devices from grammars. We show that preservation of probability distribution is possible under two conditions, viz. the correct-prefix property and the property of strong predictiveness. These results generalize existing results in the literature that were obtained by considering parsing strategies in isolation. From our general results we also derive negative results on so-called generalized LR parsing.
Compact Non-Left-Recursive Grammars Using the Selective Left-Corner Transform and Factoring
, 2000
Left corner transforms and finite state approximations
 Ms., Rank Xerox Research
, 1996
Abstract

Cited by 5 (0 self)
This paper describes methods for approximating context-free grammars with finite state machines. Unlike the method derived from the LR(k) parsing algorithm described in Pereira and Wright (1991), these methods use grammar transformations based on the left-corner grammar transform (Rosenkrantz and Lewis II, 1970; Aho
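The left-corner transform these abstracts rely on can be sketched as a rewrite of the rule set: a new category A/X means "an A whose left corner X has already been recognized", and left recursion disappears because original nonterminals come to expand only via rules that begin with a terminal. A minimal, non-selective version, with illustrative names (the selective variant of Johnson and Roark applies the transform only to left-recursive categories):

```python
def left_corner_transform(rules, terminals):
    """Non-selective left-corner transform of a CFG.

    rules: list of (lhs, rhs) with rhs a tuple of symbols.
    Returns rules over the original nonterminals plus slashed
    categories "A/X" ("A with left corner X already found").
    """
    nonterminals = {lhs for lhs, _ in rules}
    out = []
    for A in nonterminals:
        # A -> a A/a : start an A by shifting a terminal left corner.
        for a in terminals:
            out.append((A, (a, f"{A}/{a}")))
        # A/A -> epsilon : the left corner has grown into A itself.
        out.append((f"{A}/{A}", ()))
        # A/X -> beta A/B for every original rule B -> X beta.
        for B, rhs in rules:
            if rhs:
                X, beta = rhs[0], rhs[1:]
                out.append((f"{A}/{X}", beta + (f"{A}/{B}",)))
    return out

# Left-recursive toy grammar: S -> S a | a
grammar = [("S", ("S", "a")), ("S", ("a",))]
transformed = left_corner_transform(grammar, terminals={"a"})
# The only rule for S now starts with the terminal "a", so the
# direct left recursion of S -> S a is gone.
```

On this toy grammar the output contains S -> a S/a, S/S -> a S/S, S/S -> epsilon, and S/a -> S/S, which still recognizes a+ but can be processed top-down without looping.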