Results 1 – 7 of 7
Design of the Kernel Language for the Parallel Inference Machine
, 1990
Abstract

Cited by 49 (9 self)
We review the design of the concurrent logic language GHC, the basis of the kernel language for the Parallel Inference Machine being developed in the Japanese Fifth Generation Computer Systems project, and the design of the parallel language KL1, the actual kernel language being implemented and used. The key idea in the design of these languages is the separation of concurrency and parallelism. Clarification of concepts of this kind seems to play an important role in bridging the gap between parallel inference systems and knowledge information processing in a coherent manner. In particular, the design of a new kernel language has always encouraged us to reexamine and reorganize various existing notions related to programming and to invent new ones.
Moded Flat GHC and Its Message-Oriented Implementation Technique
, 1994
Abstract

Cited by 29 (10 self)
Concurrent processes can be used both for programming computation and for programming storage. Previous implementations of Flat GHC, however, have been tuned for computation-intensive programs, and perform poorly for storage-intensive programs (such as programs implementing reconfigurable data structures using processes and streams) and demand-driven programs. This paper proposes an optimization technique for programs in which processes are almost always suspended. The technique compiles unification for data transfer into message passing. Instead of reducing the number of process-switching operations, the technique optimizes the cost of each process-switching operation and reduces the number of cons operations for data buffering.
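The idea of "processes as storage" in the abstract above can be sketched outside GHC as well. The following Python analogue (hypothetical names; Python threads and queues stand in for GHC processes and streams) shows a stack held as the local state of a process that is almost always suspended on its input stream, which is exactly the situation the paper's message-oriented technique optimizes:

```python
import queue
import threading

def stack_process(inbox: queue.Queue) -> None:
    """A process used as storage: it suspends on its input stream and
    keeps a stack as local state, loosely mimicking a Flat GHC process
    reading a stream of messages (Python sketch, not GHC)."""
    state = []
    while True:
        msg = inbox.get()            # the process is suspended here most of the time
        if msg[0] == "push":
            state.append(msg[1])
        elif msg[0] == "pop":
            msg[1].put(state.pop())  # reply on a caller-supplied stream
        elif msg[0] == "stop":
            return

inbox = queue.Queue()
threading.Thread(target=stack_process, args=(inbox,), daemon=True).start()

inbox.put(("push", 1))
inbox.put(("push", 2))
reply = queue.Queue()
inbox.put(("pop", reply))
top = reply.get()
print(top)  # → 2
inbox.put(("stop",))
```

Each message delivery corresponds to one process resumption; the paper's technique lowers the per-resumption cost rather than the number of resumptions.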
Concurrent Logic/Constraint Programming: The Next 10 Years
, 1999
Abstract

Cited by 11 (7 self)
Concurrent logic/constraint programming is a simple and elegant formalism of concurrency that can potentially address many important future applications, including parallel, distributed, and intelligent systems. Its basic concept has been extremely stable and has allowed efficient implementations. However, its uniqueness makes this paradigm rather difficult to appreciate. Many people consider concurrent logic/constraint programming to have rather little to do with the rest of logic programming. There is certainly a fundamental difference in the view of computation, but careful study of the differences will lead to the understanding and the enhancing of the whole logic programming paradigm by an analytic approach. As a model of concurrency, concurrent logic/constraint programming has its own challenges to share with other formalisms of concurrency as well. They are: (1) a counterpart of λ-calculus in the field of concurrency, (2) a common platform for various non-sequential forms of computing, and (3) type systems that cover both logical and physical aspects of computation.
Proving Termination of GHC Programs
, 1997
Abstract

Cited by 8 (0 self)
A transformational approach for proving termination of parallel logic programs such as GHC programs is proposed. A transformation from GHC programs to term rewriting systems is developed; it exploits the fact that unifications in GHC resolution correspond to matchings. The termination of a GHC program for a class of queries is implied by the termination of the resulting rewrite system. This approach makes a wide range of termination techniques developed for rewrite systems applicable to proving termination of GHC programs. The method consists of three steps: (a) deriving moding information from a given GHC program, (b) transforming the GHC program into a term rewriting system using the moding information, and finally (c) proving termination of the resulting rewrite system. Using this method, the termination of many benchmark GHC programs, such as quicksort, mergesort, merge, split, fairsplit, and append, can be proved.
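To illustrate the kind of transformation described (a hand-made sketch for the append example, not the paper's actual algorithm), the GHC clauses for append map to a two-rule term rewriting system whose termination is evident because the first argument strictly shrinks; a few lines of Python can apply the rules to normal form:

```python
# Sketch: the GHC program
#   append([],    Y, Z) :- true | Z = Y.
#   append([H|T], Y, Z) :- true | Z = [H|Z1], append(T, Y, Z1).
# corresponds (roughly, given input/output moding) to the rewrite system
#   app(nil, y)        -> y
#   app(cons(h, t), y) -> cons(h, app(t, y))
# which terminates: each application of rule 2 shrinks the first argument.

def reduce(term):
    """Normalize a term built from ('app', l, r), ('cons', h, t), 'nil'."""
    if isinstance(term, tuple) and term[0] == "app":
        l, r = reduce(term[1]), reduce(term[2])
        if l == "nil":
            return r                                         # rule 1
        if isinstance(l, tuple) and l[0] == "cons":
            return ("cons", l[1], reduce(("app", l[2], r)))  # rule 2
        return ("app", l, r)
    if isinstance(term, tuple) and term[0] == "cons":
        return ("cons", term[1], reduce(term[2]))
    return term

t = ("app", ("cons", 1, ("cons", 2, "nil")), ("cons", 3, "nil"))
normal_form = reduce(t)
print(normal_form)  # → ('cons', 1, ('cons', 2, ('cons', 3, 'nil')))
```

A termination proof for the rewrite system (here, a trivial size argument) then transfers back to the GHC program for queries with the derived moding.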
π-Calculus Semantics of Moded Flat GHC
, 1995
Abstract
This paper presents a new operational semantics for moded Flat Guarded Horn Clauses (FGHC). The π-calculus is a simple model of concurrent computation based upon the notion of naming; π-calculus agents concurrently exchange names as data via names as channels. Naming enables an agent to be encapsulated as a value and provides an execution view in which independent states interact with one another. The semantics of the π-calculus has been investigated intensively, mainly from the algebraic point of view. Given an FGHC program, we can consider the translated π-calculus statements to represent the operational meaning of the source program. Such a π-calculus semantics has the following advantages: both processes and messages (terms) are represented in a uniform way; the semantics can specify non-logical built-in predicates together with ordinary predicates and logical variables; and the various properties of the π-calculus are well-understood theoretically. This paper introduces FGHC (a subse...
Real Number Computation with Committed Choice Logic Programming Languages
, 2003
Abstract
As shown in [9], the real line can be embedded topologically in the set of infinite sequences over {0, 1, ⊥} containing at most one ⊥. Moreover, there is a non-deterministic multi-head machine, called an IM2-machine, which operates on this set and which induces the standard notion of computation over the reals via this embedding. In this paper, we study how the behavior of an IM2-machine can be expressed in "real" programming languages. When we use a lazy functional language like Haskell and represent a sequence as an infinite list, we cannot express the behavior of an IM2-machine. However, when we use a logic programming language with guarded clauses and committed choice, such as Concurrent Prolog, PARLOG, and GHC (Guarded Horn Clauses), we can express the behavior of IM2-machines naturally and execute them on an ordinary computer. We give some GHC program examples, such as the conversions between Gray code and the signed digit representations, and the addition function on reals. We show that GHC-computability and IM2-computability do not coincide when we consider functions on this sequence set, but they are the same when a function is defined on the set of minimal limit elements of some domain structure. In particular, they are the same when we consider real-valued functions, and thus we can use GHC programs instead of IM2-machines to define computable functions.
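As a rough illustration of the representation involved (a Python sketch with exact rationals, not one of the paper's GHC programs): one common description of the Gray code expansion of x in [0, 1] is the itinerary of the tent map, and the undefined digit ⊥ arises exactly when an iterate hits 1/2. That is where a sequential program gets stuck and where the IM2-machine's non-determinism, or GHC's committed choice, is needed:

```python
from fractions import Fraction
from itertools import islice

def gray_expansion(x: Fraction):
    """Yield Gray code digits of x in [0, 1] as the itinerary of the
    tent map t(x) = 1 - |2x - 1|.  The undefined digit ⊥ occurs exactly
    when an iterate equals 1/2; we stop there, since a deterministic
    sequential program cannot decide that digit.  (Illustrative sketch.)"""
    while True:
        if x == Fraction(1, 2):
            yield "⊥"
            return
        yield 1 if x > Fraction(1, 2) else 0  # which side of 1/2 are we on?
        x = 1 - abs(2 * x - 1)                # apply the tent map

digits = list(islice(gray_expansion(Fraction(1, 3)), 8))
print(digits)  # → [0, 1, 1, 1, 1, 1, 1, 1]
```

The output matches the reflected-binary Gray code of 1/3 (binary 0.0101...), and gray_expansion(Fraction(1, 2)) yields only "⊥", the single bottom digit allowed by the embedding.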
Non-Strict Execution in Parallel and Distributed Computing
, 2002
Abstract
This paper surveys and demonstrates the power of non-strict evaluation in applications executed on distributed architectures. We present the design, implementation, and experimental evaluation of single-assignment, incomplete data structures in a distributed-memory architecture and Abstract Network Machine (ANM). Incremental Structures (IS), Incremental Structure Software Cache (ISSC), and Dynamic Incremental Structures (DIS) provide non-strict data access and fully asynchronous operations that make them highly suited for the exploitation of fine-grained parallelism in distributed-memory systems. We focus on split-phase memory operations and non-strict information processing under a distributed address space to improve overall system performance. A novel optimization technique at the communication level is proposed and described. We use partial evaluation of local and remote memory accesses not only to remove much of the excess overhead of message passing, but also to reduce the number of messages when some information about the input, or part of the input, is known. We show that split-phase transactions of IS, together with the ability to defer reads, allow partial evaluation of distributed programs without losing determinacy. Our experimental evaluation indicates that commodity PC clusters with both IS and a caching mechanism, ISSC, are more robust. The system can deliver speedup for both regular and irregular applications. We also show that partial evaluation of memory accesses decreases traffic in the interconnection network and improves the performance of MPI IS and MPI ISSC applications.
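The determinacy property of non-strict, single-assignment storage mentioned above can be sketched in a few lines (a shared-memory Python analogue with hypothetical names; the paper's IS/ISSC are distributed-memory structures): reads issued before the write are simply deferred, so their order cannot affect the result:

```python
import threading

class ICell:
    """Single-assignment cell: reads block (are deferred) until the one
    write arrives, so read/write interleaving cannot change the outcome,
    i.e. the determinacy property of I-structure-style storage.
    (Sketch of the idea only, not the paper's IS implementation.)"""
    def __init__(self):
        self._ready = threading.Event()
        self._value = None

    def write(self, value):
        if self._ready.is_set():
            raise ValueError("single-assignment cell written twice")
        self._value = value
        self._ready.set()

    def read(self):
        self._ready.wait()   # split phase: suspend until the value exists
        return self._value

cell = ICell()
results = []
readers = [threading.Thread(target=lambda: results.append(cell.read()))
           for _ in range(3)]
for t in readers:
    t.start()                # reads issued before the write are deferred...
cell.write(42)               # ...and all complete deterministically now
for t in readers:
    t.join()
print(results)  # → [42, 42, 42]
```

In a distributed setting the same deferral makes split-phase remote reads safe to issue early, which is what enables the partial evaluation of memory accesses described in the abstract.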