Results 1–10 of 12
Logic Programming in a Fragment of Intuitionistic Linear Logic
Abstract

Cited by 303 (40 self)
When logic programming is based on the proof theory of intuitionistic logic, it is natural to allow implications in goals and in the bodies of clauses. Attempting to prove a goal of the form D ⊃ G from the context (set of formulas) Γ leads to an attempt to prove the goal G in the extended context Γ ∪ {D}. Thus during the bottom-up search for a cut-free proof, contexts, represented as the left-hand side of intuitionistic sequents, grow as stacks. While such an intuitionistic notion of context provides for elegant specifications of many computations, contexts can be made more expressive and flexible if they are based on linear logic. After presenting two equivalent formulations of a fragment of linear logic, we show that the fragment has a goal-directed interpretation, thereby partially justifying calling it a logic programming language. Logic programs based on the intuitionistic theory of hereditary Harrop formulas can be modularly embedded into this linear logic setting. Programming examples taken from theorem proving, natural language parsing, and database programming are presented: each example requires a linear, rather than intuitionistic, notion of context to be modeled adequately. An interpreter for this logic programming language must address the problem of splitting contexts; that is, when attempting to prove a multiplicative conjunction (tensor), say G1 ⊗ G2, from the context ∆, the latter must be split into disjoint contexts ∆1 and ∆2 for which G1 follows from ∆1 and G2 follows from ∆2. Since there is an exponential number of such splits, it is important to delay the choice of a split as much as possible. A mechanism for the lazy splitting of contexts is presented based on viewing proof search as a process that takes a context, consumes part of it, and returns the rest (to be consumed elsewhere).
In addition, we use collections of Kripke interpretations indexed by a commutative monoid to provide models for this logic programming language and show that logic programs admit a canonical model.
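The input/output reading of proof search described in this abstract can be sketched in a few lines. The code below is a minimal illustration, not the paper's interpreter: goal and context representations are hypothetical, atoms consume exactly one matching assumption, and backtracking over alternative ways to consume the context is omitted for brevity.

```python
def prove(goal, ctx):
    """Try to prove `goal` from the list `ctx` of linear assumptions.
    Returns (True, leftover) on success, where `leftover` is the part of
    the context this proof did not consume, or (False, ctx) on failure."""
    kind = goal[0]
    if kind == "atom":
        # an atomic goal consumes exactly one matching assumption
        if goal in ctx:
            rest = list(ctx)
            rest.remove(goal)
            return True, rest
        return False, ctx
    if kind == "tensor":
        # G1 (x) G2: instead of guessing a split of ctx up front, pass the
        # whole context to G1 and prove G2 from whatever G1 leaves behind
        _, g1, g2 = goal
        ok, rest = prove(g1, ctx)
        if not ok:
            return False, ctx
        return prove(g2, rest)
    raise ValueError("unknown goal form")

# Proving a (x) b from {a, b, c} succeeds and returns the leftover {c}.
ok, leftover = prove(("tensor", ("atom", "a"), ("atom", "b")),
                     [("atom", "a"), ("atom", "b"), ("atom", "c")])
```

This threading of leftover context is exactly what makes the exponential number of eager splits avoidable: the split is discovered, not enumerated.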
A Semantic-Head-Driven Generation Algorithm for Unification-Based Formalisms
 In 27th Annual Meeting of the Association for Computational Linguistics
, 1989
Abstract

Cited by 42 (8 self)
We present an algorithm for generating strings from logical form encodings that improves upon previous algorithms in that it places fewer restrictions on the class of grammars to which it is applicable. In particular, unlike an Earley deduction generator (Shieber, 1988), it allows use of semantically nonmonotonic grammars, yet unlike top-down methods, it also permits left recursion. The enabling design feature of the algorithm is its implicit traversal of the analysis tree for the string being generated in a semantic-head-driven fashion.
Specifying Filler-Gap Dependency Parsers in a Linear-Logic Programming Language
 Proceedings of the Joint International Conference and Symposium on Logic Programming
, 1992
Abstract

Cited by 27 (4 self)
An aspect of the Generalized Phrase Structure Grammar formalism proposed by Gazdar et al. is the introduction of the notion of "slashed categories" to handle the parsing of structures, such as relative clauses, which involve unbounded dependencies. This has been implemented in Definite Clause Grammars through the technique of gap threading, in which a difference list of extracted noun phrases (gaps) is maintained. However, this technique is cumbersome and can result in subtle soundness problems in the implemented grammars. Miller and Pareschi have proposed a method of implementing gap threading at the logical level in intuitionistic logic. Unfortunately, that implementation itself suffered from serious problems, which the authors recognized. This paper builds on work first presented with Miller in which we developed a filler-gap dependency parser in Girard's linear logic. This implementation suffers from none of the pitfalls of either the traditional implementation or the intuitioni...
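The gap-threading idea mentioned above can be illustrated outside of Prolog. The toy sketch below (hypothetical two-word grammar and function names, no relation to the paper's parser) threads a list of available gaps through the parse; an NP position may be filled either by consuming input words or by consuming a gap, which is how a relative clause licenses a "missing" noun phrase.

```python
def parse_np(words, gaps):
    """Parse an NP. Returns (remaining_words, remaining_gaps) or None."""
    if words and words[0] in ("mary", "cat", "dog"):
        return words[1:], gaps          # ordinary NP taken from the input
    if gaps:
        return words, gaps[1:]          # fill the position with a gap
    return None

def parse_s(words, gaps):
    """Parse S -> NP 'sleeps', threading the gap list through the NP."""
    result = parse_np(words, gaps)
    if result is None:
        return None
    words, gaps = result
    if not words or words[0] != "sleeps":
        return None
    return words[1:], gaps

# "sleeps" alone is a sentence only when an extracted NP (a gap) is available:
assert parse_s(["sleeps"], gaps=["np"]) == ([], [])
assert parse_s(["sleeps"], gaps=[]) is None
```

The soundness problems the abstract alludes to arise when such threaded lists are passed incorrectly between subconstituents; the linear-logic formulation makes the bookkeeping part of the logic itself rather than of the grammar writer's code.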
An O(n³) Agenda-Based Chart Parser for Arbitrary Probabilistic Context-Free Grammars
, 2001
Abstract

Cited by 8 (1 self)
While O(n³) methods for parsing probabilistic context-free grammars (PCFGs) are well known, a tabular parsing framework for arbitrary PCFGs, which allows for bottom-up, top-down, and other parsing strategies, has not yet been provided. This paper presents such an algorithm, and shows its correctness and advantages over prior work. The paper finishes by bringing out the connections between the algorithm and work on hypergraphs, which permits us to extend the presented Viterbi (best parse) algorithm to an inside (total probability) algorithm.
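For reference, the underlying O(n³) Viterbi computation for a PCFG in Chomsky normal form can be written compactly as a CKY-style table fill. This is a generic textbook sketch (hypothetical toy grammar), not the agenda-based algorithm the paper itself contributes.

```python
from collections import defaultdict

def viterbi_cky(words, lexical, binary, start="S"):
    """best[(i, j, A)] = probability of the best parse of words[i:j] rooted at A.
    lexical: {(A, word): prob}; binary: {(A, B, C): prob} for rules A -> B C."""
    n = len(words)
    best = defaultdict(float)
    for i, w in enumerate(words):                       # width-1 spans
        for (A, word), p in lexical.items():
            if word == w:
                best[(i, i + 1, A)] = max(best[(i, i + 1, A)], p)
    for span in range(2, n + 1):                        # wider spans, O(n^3) total
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):                   # split point
                for (A, B, C), p in binary.items():
                    q = p * best[(i, k, B)] * best[(k, j, C)]
                    if q > best[(i, j, A)]:
                        best[(i, j, A)] = q
    return best[(0, n, start)]

# Toy grammar: S -> NP VP, VP -> V NP, with one ambiguity-free sentence.
lexical = {("NP", "she"): 1.0, ("V", "eats"): 1.0, ("NP", "fish"): 0.5}
binary = {("S", "NP", "VP"): 1.0, ("VP", "V", "NP"): 1.0}
print(viterbi_cky(["she", "eats", "fish"], lexical, binary))  # 0.5
```

Swapping `max` for a sum at each cell turns this best-parse recurrence into the inside (total probability) computation mentioned at the end of the abstract.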
GTU: A workbench for the development of natural language grammars
, 1995
Abstract

Cited by 5 (4 self)
In this report we present a Prolog tool for the development and testing of natural language grammars called GTU (German: Grammatik-Testumgebung; grammar test environment). GTU offers a window-oriented user interface that allows the development and testing of natural language grammars under three formalisms. In particular it contains a collection of German test sentences and two types of German lexicons. Both of the lexicons can be adapted to a given grammar via an integrated lexicon interface. GTU has been implemented in Prolog both under DOS and UNIX. It was originally developed as a tutoring tool to support university courses on syntax analysis, but in its UNIX version it allows for the development of large grammars.

1 Introduction

Any computer system that analyses natural language input (be it a grammar checker or a tool for machine-aided translation or the like) needs a formal grammar in order to map the input to a structure that groups it in some meaningful way. However, it is by ...
A Cognitive Model of Sentence Interpretation: the Construction Grammar approach
, 1993
Abstract

Cited by 2 (0 self)
This paper describes a new, psychologically plausible model of human sentence interpretation, based on a new model of linguistic structure, Construction Grammar. This online, parallel, probabilistic interpreter accounts for a wide variety of psycholinguistic results on lexical access, idiom processing, parsing preferences, and studies of gap-filling and other valence ambiguities, including various frequency effects. We show that many of these results derive from the fundamental assumptions of Construction Grammar that lexical idioms, idioms, and syntactic structures are uniformly represented as grammatical constructions, and argue for the use of probabilistically enriched grammars and interpreters as models of human knowledge of and processing of language.

Submitted to Cognitive Science

1 Introduction

In the last twenty years, the field of cognitive science has seen an explosion in the number of computational models of cognitive processing. This is particularly true in the mode...
Compiling Feature Structures into Terms: an Empirical Study in Prolog
, 1993
Abstract

Cited by 2 (0 self)
This paper explores the issue of feature structure representation and its effect on the efficiency of parsing within the context of Prolog: it compares four different representation systems whilst holding the parsing strategy fixed. The representations are: i) interpreted PATR equations; ii) partially evaluated PATR feature structures; iii) feature structures compiled to lists; and iv) feature structures compiled to flat terms. The parser uses a simple shift-reduce strategy with a grammar based loosely on Generalized Phrase Structure Grammar (GPSG). I begin by considering why partial information is important in computational linguistics and compare directed acyclic graphs (DAGs) and fixed arity terms as representations for partial information. I present details of the basic compiler and then go on to consider the four representations in detail. I assume a familiarity with Prolog notation and with the basic notions of feature structure representation. An emergent property of the compila...
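The "partial information" at the heart of this comparison is the ability of two incomplete feature structures to be merged when they are compatible. The sketch below illustrates that operation over the DAG-as-nested-dict view (a hypothetical representation for illustration, not one of the paper's four Prolog encodings): unification deep-merges two structures and fails on conflicting atomic values.

```python
def unify(fs1, fs2):
    """Unify two feature structures (nested dicts with atomic leaves).
    Returns the merged structure, or None when an atomic value conflicts."""
    if isinstance(fs1, dict) and isinstance(fs2, dict):
        out = dict(fs1)
        for feat, val in fs2.items():
            if feat in out:
                merged = unify(out[feat], val)   # recurse into shared features
                if merged is None:
                    return None                  # conflict below this feature
                out[feat] = merged
            else:
                out[feat] = val                  # fs2 adds new information
        return out
    return fs1 if fs1 == fs2 else None           # atomic leaves must agree

a = {"cat": "np", "agr": {"num": "sg"}}
b = {"agr": {"num": "sg", "per": 3}}
assert unify(a, b) == {"cat": "np", "agr": {"num": "sg", "per": 3}}
assert unify({"agr": {"num": "sg"}}, {"agr": {"num": "pl"}}) is None
```

Compiling such structures to fixed-arity terms, as in options iii) and iv), trades this explicit merge for Prolog's built-in first-order unification, which is the efficiency question the paper measures.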
Breck Baldwin, Jeffrey C. Reynar and B. Srinivas
 Proceedings of the International Workshop on Lexically Driven Information Extraction
, 1996
Abstract
We present Mother of Perl, a pattern description language developed for use in information extraction. Patterns are described in mop by left-to-right enumeration of components, with each component specified at the appropriate level of descriptive granularity. The patterns are compiled into Perl scripts, which perform backtracking search on the input text. mop also allows for rapid integration of a variety of analytical modules, such as part-of-speech taggers and parsers.

1 Introduction

Information extraction (IE) is the task of processing large volumes of texts in order to extract the information necessary to fill a predefined template or to populate a database. The process of IE is typically done by matching patterns on either the raw text or on a processed form of the text. The patterns can rely on a number of sources of linguistic information, such as morphology, part-of-speech tags, fixed phrases and predicate-argument relations. A central objective in information extraction is ...
A Linear Logic Treatment of Phrase Structure Grammars for . . .
, 1997
Abstract
A number of researchers have proposed applications of Girard's Linear Logic [7] to computational linguistics. Most have focused primarily on the connection between linear logic and categorial grammars. In this work we show how linear logic can be used to provide an attractive encoding of phrase structure grammars for parsing structures involving unbounded dependencies. The resulting grammars are closely related to Generalized Phrase Structure Grammars [4, 5]. As part of the presentation we show how a variety of issues, such as island and coordination constraints, can be dealt with in this model.

1 Introduction

Over the last several years a number of researchers have proposed applications of Girard's Linear Logic [7] to computational linguistics. On the semantics side, Dalrymple, Lamping, Pereira, and Saraswat have shown how deduction in linear logic can be used to enforce various constraints during the construction of semantic terms [1, 2, 3]. On the syntax side, Hepple, Jo...