Results 1 - 9 of 9
Logic Programming in a Fragment of Intuitionistic Linear Logic
"... When logic programming is based on the proof theory of intuitionistic logic, it is natural to allow implications in goals and in the bodies of clauses. Attempting to prove a goal of the form D ⊃ G from the context (set of formulas) Γ leads to an attempt to prove the goal G in the extended context Γ ..."
Abstract

Cited by 306 (40 self)
 Add to MetaCart
When logic programming is based on the proof theory of intuitionistic logic, it is natural to allow implications in goals and in the bodies of clauses. Attempting to prove a goal of the form D ⊃ G from the context (set of formulas) Γ leads to an attempt to prove the goal G in the extended context Γ ∪ {D}. Thus during the bottom-up search for a cut-free proof, contexts, represented as the left-hand side of intuitionistic sequents, grow as stacks. While such an intuitionistic notion of context provides for elegant specifications of many computations, contexts can be made more expressive and flexible if they are based on linear logic. After presenting two equivalent formulations of a fragment of linear logic, we show that the fragment has a goal-directed interpretation, thereby partially justifying calling it a logic programming language. Logic programs based on the intuitionistic theory of hereditary Harrop formulas can be modularly embedded into this linear logic setting. Programming examples taken from theorem proving, natural language parsing, and database programming are presented: each example requires a linear, rather than intuitionistic, notion of context to be modeled adequately. An interpreter for this logic programming language must address the problem of splitting contexts; that is, when attempting to prove a multiplicative conjunction (tensor), say G1 ⊗ G2, from the context ∆, the latter must be split into disjoint contexts ∆1 and ∆2 for which G1 follows from ∆1 and G2 follows from ∆2. Since there is an exponential number of such splits, it is important to delay the choice of a split as much as possible. A mechanism for the lazy splitting of contexts is presented, based on viewing proof search as a process that takes a context, consumes part of it, and returns the rest (to be consumed elsewhere).
In addition, we use collections of Kripke interpretations indexed by a commutative monoid to provide models for this logic programming language and show that logic programs admit a canonical model.
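The lazy-splitting idea described above, proof search as a process that consumes part of a context and returns the remainder, can be illustrated with a toy prover. The following Python sketch is mine, not the paper's interpreter: it treats the linear context as a multiset of atoms and goals as either atoms or tensors.

```python
from collections import Counter

def prove(goal, context):
    """Try to prove `goal` against a linear context (a Counter of atoms).
    Returns the unconsumed remainder of the context, or None on failure.
    Goals are an atom (str) or a tensor ('tensor', g1, g2)."""
    if isinstance(goal, str):
        if context[goal] > 0:
            rest = context.copy()
            rest[goal] -= 1          # the atom is consumed exactly once
            return rest
        return None
    tag, g1, g2 = goal
    assert tag == 'tensor'
    # Lazy splitting: rather than enumerating the exponentially many
    # ways to split the context between g1 and g2, prove g1 against the
    # whole context and hand whatever it did not consume to g2.
    rest = prove(g1, context)
    return prove(g2, rest) if rest is not None else None

ctx = Counter({'a': 1, 'b': 1})
print(prove(('tensor', 'a', 'b'), ctx))  # succeeds: returns the leftover multiset
print(prove(('tensor', 'a', 'a'), ctx))  # fails: only one copy of 'a' available
```

A real interpreter must also handle backtracking over which occurrences g1 consumes; this sketch only shows the input/output threading of the context.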
Extending definite clause grammars with scoping constructs
 7th Int. Conf. Logic Programming
, 1990
"... Definite Clause Grammars (DCGs) have proved valuable to computational linguists since they can be used to specify phrase structured grammars. It is well known how to encode DCGs in Horn clauses. Some linguistic phenomena, such as fillergap dependencies, are difficult to account for in a completely ..."
Abstract

Cited by 25 (4 self)
 Add to MetaCart
Definite Clause Grammars (DCGs) have proved valuable to computational linguists since they can be used to specify phrase structure grammars. It is well known how to encode DCGs in Horn clauses. Some linguistic phenomena, such as filler-gap dependencies, are difficult to account for in a completely satisfactory way using simple phrase structure grammar. In the literature of logic grammars there have been several attempts to tackle this problem by making use of special arguments added to the DCG predicates corresponding to the grammatical symbols. In this paper we take a different line, in that we account for filler-gap dependencies by encoding DCGs within hereditary Harrop formulas, an extension of Horn clauses (proposed elsewhere as a foundation for logic programming) in which implicational goals and universally quantified goals are permitted. Under this approach, filler-gap dependencies can be accounted for in terms of the operational semantics underlying hereditary Harrop formulas, in a way reminiscent of the treatment of such phenomena in Generalized Phrase Structure Grammar (GPSG). The main features involved in this new formulation of DCGs are mechanisms for providing scope to constants and program clauses, along with a mild use of λ-terms and λ-conversion.
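The scoping mechanism behind this treatment of filler-gap dependencies can be sketched outside of hereditary Harrop formulas: a relative pronoun introduces a hypothetical NP assumption whose scope is limited to the clause body, and a later "gap" position may consume it. The grammar and function names below are illustrative inventions, not the paper's encoding.

```python
# A hypothetical NP assumption ('np') stands in for an implicational
# goal: "that" assumes an NP for the scope of the embedded sentence,
# and a gap position fills it by consuming the assumption.

def parse_np(words, assumptions):
    if words[:1] == ['dog']:
        return words[1:]
    if 'np' in assumptions:          # fill the gap with the assumed NP
        assumptions.discard('np')    # an assumption is used at most once
        return words
    return None

def parse_vp(words, assumptions):
    if words[:1] == ['chased']:
        return parse_np(words[1:], assumptions)
    return None

def parse_rel(words, assumptions):
    # "that" introduces the hypothetical NP, scoped to the clause body
    if words[:1] == ['that']:
        return parse_s(words[1:], assumptions | {'np'})
    return None

def parse_s(words, assumptions):
    rest = parse_np(words, assumptions)
    return parse_vp(rest, assumptions) if rest is not None else None

print(parse_rel(['that', 'dog', 'chased'], set()))  # gap filled; all words consumed
```

In the hereditary Harrop setting the assumption is added by proving an implicational goal, and its scope is enforced by the proof theory rather than by explicit set manipulation as here.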
Adding NegationasFailure to Intuitionistic Logic Programming
 Proc. NACLP
, 1992
"... Intuitionistic logic programming is an extension of Hornclause logic programming in which implications may appear "embedded" on the righthand side of a rule. Thus, rules of the form A(x) / [B(x) / C(x)] are allowed. These rules are called embedded implications . In this paper, we develop a languag ..."
Abstract

Cited by 11 (4 self)
 Add to MetaCart
Intuitionistic logic programming is an extension of Horn-clause logic programming in which implications may appear "embedded" on the right-hand side of a rule. Thus, rules of the form A(x) ← [B(x) ← C(x)] are allowed. These rules are called embedded implications. In this paper, we develop a language in which negation-as-failure is combined with embedded implications in a principled way. Although this combination has been studied by other researchers, Gabbay has argued in [10] that the entire idea is logically incoherent since modus ponens would not be valid in such a system. We show how to solve this problem by drawing a distinction between rules and goals. To specify the semantics of rules and goals, we then develop an analogue of Przymusinski's perfect model semantics for stratified Horn-clause logic [20]. Several modifications are necessary to adapt this idea from classical logic to intuitionistic logic, but we eventually show how to define a preferred model of a stratified intui...
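The interaction the abstract describes, embedded implications that hypothetically extend the rule base, combined with negation-as-failure evaluated against that extended base, can be made concrete with a small propositional interpreter. This sketch is my own illustration, not the paper's semantics.

```python
def holds(goal, rules):
    """Propositional prover. Goals are an atom (str),
    ('implies', hyp, g) for an embedded implication, or ('not', g)
    for negation-as-failure. `rules` is a list of (head, body) pairs;
    a fact is a rule with an empty body."""
    if isinstance(goal, str):
        return any(all(holds(b, rules) for b in body)
                   for head, body in rules if head == goal)
    tag = goal[0]
    if tag == 'implies':
        # embedded implication: assume the hypothesis as a new fact,
        # scoped to the proof of the embedded goal
        _, hyp, g = goal
        return holds(g, rules + [(hyp, [])])
    if tag == 'not':
        # negation-as-failure against the current (possibly extended) base
        return not holds(goal[1], rules)

rules = [('p', ['q']), ('r', [('not', 'q')])]
print(holds(('implies', 'q', 'p'), rules))  # True: p follows once q is assumed
print(holds('r', rules))                    # True: q fails, so not q succeeds
print(holds(('implies', 'q', 'r'), rules))  # False: assuming q defeats not q
```

The third query shows why the combination is delicate: a hypothetical assumption can change what negation-as-failure concludes, which is exactly the non-monotonic behavior that a stratified semantics has to tame.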
Elimination of Negation in a Logical Framework
, 2000
"... Logical frameworks with a logic programming interpretation such as hereditary Harrop formulae (HHF) [15] cannot express directly negative information, although negation is a useful specification tool. Since negationasfailure does not fit well in a logical framework, especially one endowed with ..."
Abstract

Cited by 10 (3 self)
 Add to MetaCart
Logical frameworks with a logic programming interpretation, such as hereditary Harrop formulae (HHF) [15], cannot directly express negative information, although negation is a useful specification tool. Since negation-as-failure does not fit well in a logical framework, especially one endowed with hypothetical and parametric judgements, we adapt the idea of elimination of negation, introduced in [21] for Horn logic, to a fragment of higher-order HHF. This entails finding a middle ground between the Closed World Assumption usually associated with negation and the Open World Assumption typical of logical frameworks; the main technical idea is to isolate a set of programs where static and dynamic clauses do not overlap.
A Logical Semantics For Hypothetical Rulebases With Deletion
, 1997
"... This paper addresses a limitation of most deductive database systems: they ..."
Abstract

Cited by 6 (2 self)
 Add to MetaCart
This paper addresses a limitation of most deductive database systems: they ...
Intuitionistic Deductive Databases And The Polynomial Time Hierarchy
, 1997
"... this paper, we establish more comprehensive results by exploring the interaction of negationasfailure with a natural syntactic restriction called linearity. The main result is a tight connection between intuitionistic logic, database queries, and the polynomial time hierarchy. A tight connection w ..."
Abstract

Cited by 5 (2 self)
 Add to MetaCart
In this paper, we establish more comprehensive results by exploring the interaction of negation-as-failure with a natural syntactic restriction called linearity. The main result is a tight connection between intuitionistic logic, database queries, and the polynomial time hierarchy. A tight connection with second-order logic follows as a corollary. First, we show that rulebases in our language fit neatly into a well-established logical framework: intuitionistic logic. Second, we show that linearity reduces their data complexity from PSPACE to NP. Third, we show that negation-as-failure increases their complexity from NP to some level in the polynomial time hierarchy (PHIER). Specifically, linear rulebases with k strata are data complete for Σ...
A Declarative Alternative to "assert" in Logic Programming
 In Proceedings of the 1991 International Logic Programming Symposium
, 1991
"... The problem with the standard means by which Prolog programs are extended  assert  is that the construct is not semantically wellbehaved. A more elegant alternative (adopted, for example, in #Prolog) is implication with its intuitionistic meaning, but the assumptions so added to a logic progr ..."
Abstract
 Add to MetaCart
The problem with the standard means by which Prolog programs are extended, assert, is that the construct is not semantically well-behaved. A more elegant alternative (adopted, for example, in λProlog) is implication with its intuitionistic meaning, but the assumptions so added to a logic program are of limited applicability. We propose a new construct, rule, which combines the declarative semantics of implication with some of the power of assert. Operationally, rule provides for the extension of the logic program with results that deductively follow from that program. rule, used in conjunction with higher-order programming techniques such as continuation-passing style, allows the natural and declarative formulation of a whole class of logic programs which previously required assert. Example applications include memoization, partial evaluation combined with reflection, resolution, ML type inference, and explanation-based learning.
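The essential contract of rule, that only lemmas which already follow deductively from the program may be added, can be sketched in a few lines. This Python model (the prover and names are mine, not the paper's λProlog formulation) shows why such an extension is conservative, unlike assert.

```python
def derivable(goal, program):
    """A goal is derivable if it matches a fact, or is the head of a
    clause (head, body) all of whose body atoms are derivable.
    Facts are bare strings; clauses are (head, body_tuple) pairs."""
    for clause in program:
        head, body = clause if isinstance(clause, tuple) else (clause, ())
        if head == goal and all(derivable(b, program) for b in body):
            return True
    return False

def rule(program, lemma):
    # Unlike assert, `rule` may only add a lemma that already follows
    # from the program, so the program's declarative meaning is
    # unchanged; operationally, the cached fact short-circuits later
    # proofs (memoization).
    if not derivable(lemma, program):
        raise ValueError("lemma does not follow from the program")
    return program | {lemma}

prog = frozenset({'a', ('b', ('a',)), ('c', ('b',))})
prog = rule(prog, 'c')   # memoize the derived fact 'c'
```

After the call, queries for 'c' succeed immediately from the cached fact instead of re-deriving it through 'b' and 'a', while the set of provable goals is exactly what it was before.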
ExplanationBased Learning in Logic Programming Extended Abstract
, 1989
"... It has been argued in the literature that logic programming provides a uniform, expressive, and semantically clean framework for all aspects explanationbased generalization. Previous treatments, however, are inadequate in that they do not work well in difficult problem domains such as theorem provi ..."
Abstract
 Add to MetaCart
It has been argued in the literature that logic programming provides a uniform, expressive, and semantically clean framework for all aspects of explanation-based generalization. Previous treatments, however, are inadequate in that they do not work well in difficult problem domains such as theorem proving or formal program development, primarily because metaprograms for such tasks in traditional logic programming languages such as Prolog are not declarative enough. In [4] we develop a higher-order approach to explanation-based generalization in λ□Prolog (an extension of λProlog by the modal □ operator) and demonstrate how previously intractable generalization problems become feasible. In this paper we review our approach and then address the problem of assimilation of generalizations. Assimilation bridges the gap between explanation-based generalization and explanation-based learning and, we believe, is too difficult for a general solution in terms of the underlying architecture, but rather must be under the programmer's control. Our solution is to add a very limited amount of forward reasoning by extending λ□Prolog with two new constructs, rule and rule_ebg, which can be given a clean declarative semantics (unlike assert). While these constructs are proposed and applied in the framework of λProlog and explanation-based learning, the underlying idea is general and might also be useful for declarative search control in languages like Prolog.