Results 1–7 of 7
Logic Programming in a Fragment of Intuitionistic Linear Logic
Abstract

Cited by 306 (40 self)
When logic programming is based on the proof theory of intuitionistic logic, it is natural to allow implications in goals and in the bodies of clauses. Attempting to prove a goal of the form D ⊃ G from the context (set of formulas) Γ leads to an attempt to prove the goal G in the extended context Γ ∪ {D}. Thus, during the bottom-up search for a cut-free proof, contexts, represented as the left-hand side of intuitionistic sequents, grow as stacks. While such an intuitionistic notion of context provides for elegant specifications of many computations, contexts can be made more expressive and flexible if they are based on linear logic. After presenting two equivalent formulations of a fragment of linear logic, we show that the fragment has a goal-directed interpretation, thereby partially justifying calling it a logic programming language. Logic programs based on the intuitionistic theory of hereditary Harrop formulas can be modularly embedded into this linear logic setting. Programming examples taken from theorem proving, natural language parsing, and database programming are presented: each example requires a linear, rather than intuitionistic, notion of context to be modeled adequately. An interpreter for this logic programming language must address the problem of splitting contexts: that is, when attempting to prove a multiplicative conjunction (tensor), say G1 ⊗ G2, from the context ∆, the latter must be split into disjoint contexts ∆1 and ∆2 such that G1 follows from ∆1 and G2 follows from ∆2. Since there is an exponential number of such splits, it is important to delay the choice of a split as long as possible. A mechanism for the lazy splitting of contexts is presented, based on viewing proof search as a process that takes a context, consumes part of it, and returns the rest (to be consumed elsewhere).
In addition, we use collections of Kripke interpretations indexed by a commutative monoid to provide models for this logic programming language and show that logic programs admit a canonical model.
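The input/output model of lazy splitting described above can be illustrated with a minimal sketch (not the paper's implementation; the names `prove`, `Atom`, and `Tensor` are illustrative). The prover receives the whole context, consumes the resources it needs, and hands the leftovers to the next conjunct, avoiding an up-front enumeration of the exponentially many splits:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Tensor:
    left: object
    right: object

def prove(goal, context):
    """Try to prove `goal`, consuming formulas from `context`.
    Returns (True, leftover_context) on success, (False, context) on failure."""
    if isinstance(goal, Atom):
        if goal in context:
            rest = list(context)
            rest.remove(goal)  # consume the resource exactly once
            return True, rest
        return False, context
    if isinstance(goal, Tensor):
        # Lazy splitting: prove the left conjunct against the whole
        # context, then hand its leftovers to the right conjunct.
        ok, rest = prove(goal.left, context)
        if not ok:
            return False, context
        return prove(goal.right, rest)
    return False, context

# G1 ⊗ G2 from context {a, b}: `a` is consumed by G1, leaving `b` for G2.
ok, leftover = prove(Tensor(Atom("a"), Atom("b")), [Atom("a"), Atom("b")])
```

This greedy sketch consumes the first matching occurrence of an atom; a real interpreter would additionally backtrack over which occurrence is consumed, but the core idea of threading a consumed/remaining context through proof search is the same.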
Term-Labeled Categorial Type Systems
 Linguistics and Philosophy
, 1994
Abstract

Cited by 45 (0 self)
Through language, we are able to assign symbolic analyses to linguistic entities: physical objects and events whose complexity has no intrinsic upper bound. Such symbolic analyses are abstract, since a single physical entity can support distinct analyses. Yet we have partial intuitive access to the properties of these analyses through their projections in different 'dimensions', including the widely recognized and studied dimensions of phonology, syntax, and semantics/pragmatics. Each of these dimensions gives rise to a dimension-specific problem of compositionality: given an analysis of a linguistic entity e, which has, for a specific dimension d, the projection d(e), how do the global properties of d(e) depend on the correlative properties of the components of e, together with their mode of composition? But an additional question, which we call the problem of generalized compositionality, arises as well: how does composition in one dimension depend on composition in other dimensions? There are many possible answers to this question, and existing grammatical architectures instantiate some of them. The question deserves to be studied.
LFG Semantics via Constraints
 University of Utrecht
Abstract

Cited by 42 (9 self)
Semantic theories of natural language associate meanings with utterances by providing meanings for lexical items and rules for determining the meaning of larger units given the meanings of their parts. Traditionally, meanings are combined via function composition, which works well when constituent structure trees are used to guide semantic composition.
Intensional verbs without type-raising or lexical ambiguity
 Logic, Language and Computation. Center for the Study of Language and Information
, 1996
An Evaluation of Prolog as a Tool for Natural Language Analysis
, 1990
Abstract
This paper will thus examine the pros and cons of Prolog from the perspective of a natural language researcher, concluding that the recent advances in logic programming embodied in Prolog should provide an important impetus for further progress in the application of logic programming techniques to natural language analysis.
LFG as Labeled Deduction
Logic Grammars, Compositional Semantics, and Overgeneration
Abstract
First-order treatments of long-distance phenomena such as relativization typically suffer from overgeneration. Higher-order-inspired extensions of Prolog have been proposed with varying degrees of success, but still suffer from overgeneration in the case of imbricated structures. We first propose an Assumption Grammar-based treatment which deals successfully with this case both for analysis and for generation, and which maintains semantic compositionality as well. We then propose a cleaner, true higher-order logic approach which solves the same problems; we argue that this approach is superior to other kinds of grammars dealing with long-distance dependencies, and we advocate the development of a mixed platform (Prolog plus continuation-based assumptions) where the best of both worlds can be exploited.
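The role of linearly consumed assumptions in blocking overgeneration can be sketched with a toy recognizer (assumed names and grammar, not the paper's system): a relativizer introduces exactly one gap assumption, and a parse is accepted only if exactly one noun-phrase position consumes it:

```python
NOUNS = {"dog", "cat", "man"}
VERBS = {"chased", "saw"}

def np(tokens, gaps):
    """An NP is a lexical noun, or the gap, which consumes the single
    available gap assumption (linear: usable exactly once)."""
    if tokens and tokens[0] in NOUNS:
        return tokens[1:], gaps
    if gaps > 0:
        return tokens, gaps - 1  # fill this position with the gap
    return None

def clause(tokens, gaps):
    """clause -> NP V NP, threading the gap budget left to right."""
    r = np(tokens, gaps)
    if r is None:
        return None
    tokens, gaps = r
    if not tokens or tokens[0] not in VERBS:
        return None
    return np(tokens[1:], gaps)

def relative(tokens):
    """'that' introduces one gap assumption that MUST be consumed."""
    if not tokens or tokens[0] != "that":
        return False
    # Accept only if all words are used AND the gap was consumed.
    return clause(tokens[1:], gaps=1) == ([], 0)

# "that cat chased _" — object gap consumed once: accepted
assert relative(["that", "cat", "chased"])
# "that cat chased dog" — gap never consumed: rejected
assert not relative(["that", "cat", "chased", "dog"])
```

This deterministic sketch prefers lexical NPs over the gap and does no backtracking; its point is only that treating the gap as a linear resource, which must be used once and exactly once, rules out the clauses that a plain first-order treatment would overgenerate.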