Results 1–10 of 12
Linear Objects: logical processes with built-in inheritance
, 1990
Abstract

Cited by 211 (6 self)
We present a new framework for amalgamating two successful programming paradigms: logic programming and object-oriented programming. From the former, we keep the declarative reading of programs. From the latter, we select two crucial notions: (i) the ability of objects to dynamically change their internal state during the computation; (ii) the structured representation of knowledge, generally obtained via inheritance graphs among classes of objects. We start with the approach, introduced in concurrent logic programming languages, which identifies objects with proof processes and object states with arguments occurring in the goals of a given process. This provides a clean, side-effect-free account of the dynamic behavior of objects in terms of the search tree, the only dynamic entity in logic programming languages. We integrate this view of objects with an extension of logic programming, which we call Linear Objects, based on the possibility of having multiple literals in the head of a program clause. This contains within itself the basis for a flexible form of inheritance, and maintains the constructive property of Prolog of returning definite answer substitutions as output of the proof of non-ground goals. The theoretical background for Linear Objects is Linear Logic, a logic recently introduced to provide a theoretical basis for the study of concurrency. We also show that Linear Objects can be considered a constructive restriction of full Classical Logic. We illustrate the expressive power of Linear Objects compared to Prolog by several examples from the object-oriented domain, but we also show that it can be used to provide elegant solutions for problems arising in the standard style of logic programming.
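The proof-process reading of objects described in this abstract — state carried in the arguments of a recursive goal rather than in mutable storage — can be sketched outside of logic programming. The following is an illustrative Python analogue under that reading, not LO syntax and not code from the paper; the names `counter`, `state`, and `messages` are hypothetical.

```python
# Illustrative analogue (not LO): an object as a recursive "process"
# whose state lives only in the call arguments, so every update is
# side-effect free, mirroring the proof-process reading of objects.
def counter(state, messages):
    """Consume a list of messages, threading the state through
    recursive calls instead of mutating a variable."""
    if not messages:
        return state                      # process terminates; final state is the answer
    msg, rest = messages[0], messages[1:]
    if msg == "inc":
        return counter(state + 1, rest)   # recur with a new state
    if msg == "reset":
        return counter(0, rest)
    return counter(state, rest)           # unknown message: state unchanged
```

For example, `counter(0, ["inc", "inc", "reset", "inc"])` evaluates to `1`: the "object" never mutates anything, yet its state evolves across the recursive calls.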
Programmable Active Memories: a Performance Assessment
 Research on Integrated Systems: Proceedings of the 1993 Symposium
, 1993
Abstract

Cited by 112 (8 self)
We present some quantitative performance measurements for the computing power of Programmable Active Memories (PAM), as introduced by [BRV 89]. Based on Programmable Gate Array (PGA) technology, the PAM is a universal hardware coprocessor closely coupled to a standard host computer. The PAM can speed up many critical software applications running on the host by executing part of the computations through a specific hardware PAM design. The performance measurements presented are based on two PAM architectures and ten specific applications, drawn from arithmetic, algebra, geometry, physics, biology, audio and video. Each of these PAM designs proves as fast as any reported hardware or supercomputer for the corresponding application. In cases where we could bring some genuine algorithmic innovation into the design process, the PAM has proved an order of magnitude faster than any previously existing system (see [SBV 91] and [S 92]).

1 PAM concept
Like any RAM memory module, a PAM is att...
Revisiting the correspondence between cut-elimination and normalisation
 In Proceedings of ICALP’2000
, 2000
Abstract

Cited by 14 (3 self)
Abstract. Cut-free proofs in Herbelin’s sequent calculus are in 1-1 correspondence with normal natural deduction proofs. For this reason Herbelin’s sequent calculus has been considered a privileged middle point between L-systems and natural deduction. However, this bijection does not extend to proofs containing cuts, and Herbelin observed that his cut-elimination procedure is not isomorphic to β-reduction. In this paper we equip Herbelin’s system with rewrite rules which, at the same time: (1) complete, in a sense, the cut-elimination procedure first proposed by Herbelin; and (2) perform the intuitionistic “fragment” of the tq-protocol, a cut-elimination procedure for classical logic defined by Danos, Joinet and Schellinx. Moreover, we identify the subcalculus of our system which is isomorphic to natural deduction, the isomorphism being with respect not only to proofs but also to normalisation. Our results show, for the implicational fragment of intuitionistic logic, how to embed natural deduction in the much wider world of the sequent calculus, and which particular cut-elimination procedure normalisation is.
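For orientation, the rule whose elimination is at stake has the standard textbook form below, together with its natural-deduction counterpart, β-reduction. This is the generic intuitionistic presentation, not Herbelin’s specific system.

```latex
% Standard intuitionistic cut rule (textbook form):
\[
\frac{\Gamma \vdash A \qquad \Gamma, A \vdash B}
     {\Gamma \vdash B}\;\textsc{cut}
\]
% Its counterpart in natural deduction, via the Curry-Howard
% correspondence, is beta-reduction of a redex:
\[
(\lambda x.\,M)\,N \;\longrightarrow_{\beta}\; M[N/x]
\]
```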
Towards the Automation of the Design of Logic Programming Languages
 Department of Computer Science, RMIT
, 1997
Abstract

Cited by 4 (4 self)
Logic programs consist of formulas of mathematical logic, and various proof-theoretic techniques can be used to design and analyse execution models for such programs. We briefly review, from a proof-theoretic point of view, the main open questions in the design of logic programming languages. Existing approaches and analyses which lead to the various languages are all rather sophisticated and involve complex manipulations of proofs. All are designed for analysis on paper by a human, and many of them are ripe for automation. We aim to automate some aspects of proof-theoretic analyses, in order to assist in the design of logic programming languages. In this paper we describe the first steps towards the design of such an automatic analysis tool. We investigate the use of particular proof manipulations for the analysis of logic programming strategies. We propose a more precise specification of sequent calculi inference rules that we use ...
The Paradigm of Interaction (Short Version)
, 1991
Abstract

Cited by 1 (0 self)
We present a unified framework subsuming well-known models of sequential or parallel computation, such as Turing machines and cellular automata. We propose a notion of natural encoding motivated by issues in implementation and partial evaluation. The most remarkable feature is the Girard-Danos-Régnier criterion for deadlock-free computation. This theory is a by-product of linear logic.
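One of the "well-known models of parallel computation" the abstract mentions, the one-dimensional cellular automaton, can be sketched in a few lines. This is a generic illustration (Wolfram's rule 110, with wrap-around boundaries), not the paper's unified framework or its encodings.

```python
# One synchronous step of a 1-D, two-state cellular automaton under
# Wolfram's rule 110: every cell is updated simultaneously from the
# states of itself and its two neighbours (wrapping at the edges).
RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    """Return the next configuration; the input list is not mutated."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]
```

Starting from a single live cell, `step([0, 0, 0, 1, 0, 0, 0])` yields `[0, 0, 1, 1, 0, 0, 0]`: the parallel, local update of every cell is exactly what distinguishes this model from the sequential head movement of a Turing machine.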
A Verified Algebra for Read-Write Linked Data
Abstract
The aim of this work is to verify an algebra for high-level languages for reading and writing Linked Data. Linked Data refers to a collection of standards which aim to enhance the world’s data by interlinking datasets through the Web. The starting point is as simple as using URIs as global identifiers in data, but the technical challenges of managing data in this distributed setting are immense. An algebra is an essential contribution to this application domain. To verify the algebra, several useful results are established. A high-level language is defined that concisely captures query and update languages for Linked Data. The language is provided with a concise operational semantics. The natural notion of equivalence, contextual equivalence, is shown to coincide with the bisimulation proof technique. Ultimately, bisimulation allows the algebra to be proven correct. Some novel techniques are used in establishing these results.
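The "URIs as global identifiers" starting point can be made concrete with a toy read-write model of Linked Data as a set of subject-predicate-object triples. This is a hypothetical mini-model for illustration only; the store, `query`, and `insert` names are assumptions, and the paper's algebra and operational semantics are far richer.

```python
# A toy triple store: Linked Data as a set of (subject, predicate,
# object) triples whose identifiers are URIs shared across datasets.
store = {
    ("http://ex.org/alice", "http://ex.org/knows", "http://ex.org/bob"),
}

def query(triples, s=None, p=None, o=None):
    """Read: return the triples matching a pattern (None = wildcard)."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

def insert(triples, triple):
    """Write: adding a triple yields a new store (no mutation), so
    two updates can be compared as values when reasoning about them."""
    return triples | {triple}
```

Keeping `insert` pure (returning a new store) is what makes equational reasoning about reads and writes tractable, which is the kind of reasoning an algebra for this domain has to support.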
From Linear Proofs to Direct Logic with exponentials
, 1997
Abstract
Following the idea of Linear Proofs presented in [4], we introduce Direct Logic [14] with exponentials (DLE). The logic combines Direct Logic with the exponentials of Linear Logic. For a well-chosen subclass of formulas of this logic we provide a matrix characterization which can be used as a foundation for proof-search methods based on the connection calculus.

1 Introduction
Recent years have seen a growing interest in the computer science community concerning non-classical logics [8]. Besides well-known approaches like Girard's Linear Logic [12], the linear connection method (LCM) proposed by Bibel [4] and its underlying notion of the Linear Proof have so far not gained the attention they deserve. Bibel aimed at overcoming the frame problem [18] in deductive planning tasks, which were defined by an initial situation, situation-transform rules (called action rules for short), and a goal situation. Formulas were used to represent (consumable) resources. To avoid frame axioms the ap...
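The "formulas as consumable resources" idea that avoids frame axioms can be illustrated with the classic vending-machine example from the linear logic literature (an illustration, not an example from this paper):

```latex
% Linear implication consumes its premises: spending two dollars for a
% coffee. No frame axiom is needed to state that the dollars are gone.
\[
\$ \otimes \$ \multimap \mathit{coffee}
\]
% From the resources $\$ \otimes \$ \otimes \$$ one can then derive
% $\mathit{coffee} \otimes \$$: two dollars are consumed by the action
% rule, and the untouched third dollar survives automatically.
```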
Simplified Affine Phase Structures
 Francesco Paoli, Reports on Mathematical Logic 32 (1998), 21–34
The authors can be contacted at the following addresses:
Abstract
Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of the Paris Research Laboratory of Digital Equipment Centre Technique Europe, in Rueil-Malmaison, France; an acknowledgement of the authors and individual contributors to the work; and all applicable portions of the copyright notice. Copying, reproducing, or republishing for any other purpose shall require a license with payment of fee to the Paris Research Laboratory. All rights reserved.

We consider the problem of pricing path-dependent contingent claims. Classically, this problem can be cast into the Black-Scholes valuation framework through inclusion of the path-dependent variables into the state space. This leads to solving a degenerate advection-diffusion Partial Differential Equation (PDE). Standard Finite Difference (FD) methods are known to be inadequate for solving such degenerate PDEs. Hence, path-dependent European claims are typically priced through Monte Carlo simulation. To date, there is no numerical method for pricing path-dependent American claims. We first establish necessary and sufficient conditions amenable to a Lie algebraic characterization, ...
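The degeneracy the abstract refers to can be seen in the standard textbook case of an Asian-style claim (an illustration, not necessarily the example treated in this report): augmenting the state with the running sum $A_t = \int_0^t S_u\,du$ yields a PDE with no second derivative in the new variable.

```latex
% Augmented Black-Scholes PDE for a claim V(t, S, A) depending on the
% running sum A_t = \int_0^t S_u \, du (standard textbook form):
\[
\frac{\partial V}{\partial t}
  + r S \frac{\partial V}{\partial S}
  + \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}
  + S \frac{\partial V}{\partial A}
  - r V = 0
\]
% The equation is degenerate: there is no second-order term in A, so A
% enters only through first-order advection, which is what defeats
% standard finite-difference schemes.
```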
IEEE Workshop on Management of Replicated Data (Nov. 1990).
Abstract
The Siphon is intended to facilitate joint software development between groups working at distant sites connected by low-bandwidth communication lines. It gives users the image of a single repository of individually manageable units, typically software or documentation components. Users can lock and modify each unit, the result being propagated automatically to all sites. The repository is replicated at each site, and possibly on multiple file servers, for greater availability and reliability. A Siphon has been in operational use since January 1989 between three Digital research laboratories. We presently share a 2.5 GB repository of source ...