Results 1–10 of 92
Rule Languages and Internal Algebras for Rule-Based Optimizers
In Proc. ACM SIGMOD Int'l Conference on Management of Data, 1996
Abstract

Cited by 34 (6 self)
Rule-based optimizers and optimizer generators use rules to specify query transformations. Rules act directly on query representations, which are typically based on query algebras. But most algebras complicate rule formulation, and rules over these algebras must often resort to calling externally defined bodies of code. Code makes rules difficult to formulate, prove correct and reason about, and therefore compromises the effectiveness of rule-based systems. In this paper we present KOLA, a combinator-based algebra designed to simplify rule formulation. KOLA is not a user language, and KOLA's variable-free queries are difficult for humans to read. But KOLA is an effective internal algebra because its combinator style makes queries manipulable and structurally revealing. As a result, rules over KOLA queries are easily expressed without the need for supplemental code. We illustrate this point, first by showing some transformations that, despite their simplicity, require head and body rou...
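As a rough illustration of the combinator-style idea this abstract describes (invented mini-syntax, not KOLA's actual algebra), a rewrite rule over variable-free query trees can be a pure structural pattern match, with no supplemental code:

```python
# Illustrative sketch (invented syntax, not KOLA's): queries as
# variable-free combinator trees, so a rewrite rule is a plain
# tree-to-tree transformation.

def compose(f, g):
    return ("compose", f, g)          # apply g, then f

def mapc(f):
    return ("map", f)                 # the map combinator applied to f

def fuse_maps(query):
    """Rewrite rule: compose(map f, map g) -> map(compose(f, g))."""
    if (isinstance(query, tuple) and query[0] == "compose"
            and isinstance(query[1], tuple) and query[1][0] == "map"
            and isinstance(query[2], tuple) and query[2][0] == "map"):
        return mapc(compose(query[1][1], query[2][1]))
    return query

q = compose(mapc("price"), mapc("lineitems"))
print(fuse_maps(q))   # ('map', ('compose', 'price', 'lineitems'))
```

Because the query carries no bound variables, the rule never needs to reason about capture or renaming, which is the manipulability the abstract claims for combinator-style internal algebras.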
The Push3 execution stack and the evolution of control
In Proc. Gen. and Evol. Comp. Conf., 2005
Abstract

Cited by 33 (9 self)
The Push programming language was developed for use in genetic and evolutionary computation systems, as the representation within which evolving programs are expressed. It has been used in the production of several significant results, including results that were awarded a gold medal in the Human-Competitive Results competition at GECCO-2004. One of Push’s attractive features in this context is its transparent support for the expression and evolution of modular architectures and complex control structures, achieved through explicit code self-manipulation. The latest version of Push, Push3, enhances this feature by permitting explicit manipulation of an execution stack that contains the expressions that are queued for execution in the interpreter. This paper provides a brief introduction to Push and to execution stack manipulation in Push3. It then presents a series of examples in which Push3 was used with a simple genetic programming system (PushGP) to evolve programs with nontrivial control structures.
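The core mechanism the abstract describes can be sketched with a toy interpreter (invented instruction names, not actual Push3 syntax): the execution stack holds the expressions queued to run, so an instruction can manipulate its own pending code:

```python
# Toy stack interpreter illustrating execution-stack manipulation.
# "DUP-EXEC" is our invented analogue of Push3's EXEC-stack operations.

def run(program):
    exec_stack = list(reversed(program))   # top of stack = next to execute
    data_stack = []
    while exec_stack:
        item = exec_stack.pop()
        if isinstance(item, int):
            data_stack.append(item)        # literals go to the data stack
        elif item == "+":
            b, a = data_stack.pop(), data_stack.pop()
            data_stack.append(a + b)
        elif item == "DUP-EXEC":
            if exec_stack:                 # duplicate the next queued item:
                exec_stack.append(exec_stack[-1])   # it will run twice
    return data_stack

print(run([1, "DUP-EXEC", 2, "+"]))   # [1, 4]: the literal 2 ran twice
```

Because pending code is ordinary stack data, evolved programs can build loops and modules by duplicating and rearranging their own instructions, which is the "transparent support for complex control structures" the abstract refers to.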
Categorial Grammars, Lexical Rules and the English Predicative
Formal Grammar: Theory and Implementation, 1995
Abstract

Cited by 20 (1 self)
In this paper, we will study the possibilities for applying lexical rules to the analysis of English syntax, and in particular the structure of the verb phrase. We will develop a lexicon whose empirical coverage extends to the full range of verb subcategories, complex adverbial phrases, auxiliaries, the passive construction, yes/no questions and the particularly troublesome case of predicatives. The effect of a lexical rule, in our system, will be to produce new lexical entries from old lexical entries. The similarity between our system and the metarule system of generalized phrase-structure grammar (GPSG, as presented in Gazdar, et al. 1985) is not coincidental. Our lexical rules serve much the same purpose as metarules in GPSG, which were restricted to lexical phrase-structure rules. The similarity is in large part due to the fact that, with the universal phrase-structure schemes being fixed, a lexical category assignment in effect determines phrase structure in much the same way as a lexical category entry and lexical phrase-structure rule determine lexical phrase structure in GPSG. Our lexical rules will also bear a relationship to the lexical rules found in lexical-functional grammar (LFG, see Bresnan 1982), as LFG rules are driven by the grammatical role assigned to arguments. Many of our analyses were first applied to either LFG or GPSG, as these were the first serious linguistic theories based on a notion of unification. In the process of explaining the basic principles behind categorial grammar and developing our lexical rule system, we will establish a categorial grammar lexicon with coverage of English syntactic constructions comparable to that achieved within published accounts of the GPSG or LFG frameworks. Language, at its most abstract level, i...
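The notion of a lexical rule as a mapping from old lexical entries to new ones can be caricatured in a few lines (toy categories of our own invention, not the paper's notation):

```python
# Toy illustration: a lexical rule is a function from lexical entries
# to new lexical entries. Categories here are crude stand-ins for
# categorial-grammar types, invented for this sketch.

# 'sees': a transitive verb, (S\NP)/NP -- needs an object NP, then a subject
lexicon = {"sees": ("S\\NP", "/NP")}

def passive_rule(entry):
    """Toy passive rule: absorb the object argument slot, yielding an
    entry usable as a passive participle ('seen')."""
    result, arg = entry
    if arg == "/NP":
        return (result,)
    return entry

lexicon["seen"] = passive_rule(lexicon["sees"])
print(lexicon["seen"])   # ('S\\NP',)
```

The point of the sketch is only the shape of the mechanism: the grammar proper stays fixed, and productivity comes from closing the lexicon under such entry-to-entry rules, much as GPSG metarules close the set of lexical phrase-structure rules.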
Combinatory Representation of Mobile Processes
In Proceedings of POPL '94, 1994
Abstract

Cited by 20 (2 self)
A possible analogue of the theory of combinators in the setting of concurrent processes is formulated. The new combinators are derived from the analysis of the operation called asynchronous name passing, just as the analysis of logical substitution gave rise to the sequential combinators. A system with seven atoms and fixed interaction rules, but with no notion of prefixing, is introduced, and is shown to be capable of representing input and output prefixes over arbitrary terms in a behaviourally correct way, just as SK-combinators are closed under functional abstraction without having it as a proper syntactic construct. The basic equational correspondence between concurrent combinators and a system of asynchronous mobile processes, as well as the embedding of the finite part of the π-calculus in concurrent combinators, is proved. These results will hopefully serve as a cornerstone for further investigation of the theoretical as well as pragmatic possibilities of the presented construction.
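The sequential phenomenon the abstract alludes to, SK-combinators being closed under functional abstraction without abstraction as a syntactic construct, is standard bracket abstraction, and can be sketched directly:

```python
# Standard SK bracket abstraction: [x]term compiles abstraction over x
# into S, K and I. Application is represented as a nested pair (f, a).

def abstract(x, term):
    """Compile [x]term to combinators, so (abstract(x, t), a) behaves
    like t with a substituted for x."""
    if term == x:
        return "I"
    if isinstance(term, tuple):                      # application
        return (("S", abstract(x, term[0])), abstract(x, term[1]))
    return ("K", term)                               # x not free in term

def reduce(term):
    """Normal-order reduction of S/K/I terms."""
    while isinstance(term, tuple):
        f = reduce(term[0])
        if f == "I":                                 # I a      -> a
            term = term[1]
        elif isinstance(f, tuple) and f[0] == "K":   # K a b    -> a
            term = f[1]
        elif isinstance(f, tuple) and isinstance(f[0], tuple) \
                and f[0][0] == "S":                  # S f g a  -> f a (g a)
            term = ((f[0][1], term[1]), (f[1], term[1]))
        else:
            return (f, reduce(term[1]))              # free head: stop
    return term

# \x. f x, compiled away and applied to y, behaves like f y:
print(reduce((abstract("x", ("f", "x")), "y")))      # ('f', 'y')
```

The paper's contribution is the concurrent analogue of this closure property: seven atoms with fixed interaction rules that can represent input/output prefixing without having prefixing as a primitive.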
A Continuum of Theories of Lambda Calculus Without Semantics
In 16th Annual IEEE Symposium on Logic in Computer Science (LICS 2001), IEEE Computer Society, 2001
Abstract

Cited by 19 (12 self)
In this paper we give a topological proof of the following result: there exist 2^ℵ₀ lambda theories of the untyped lambda calculus without a model in any semantics based on Scott's view of models as partially ordered sets and of functions as monotonic functions. As a consequence of this result, we positively solve the conjecture, stated by Bastonero-Gouy [6, 7] and by Berline [10], that the strongly stable semantics is incomplete.
Learning Programs: A Hierarchical Bayesian Approach
Abstract

Cited by 17 (1 self)
We are interested in learning programs for multiple related tasks given only a few training examples per task. Since the program for a single task is underdetermined by its data, we introduce a nonparametric hierarchical Bayesian prior over programs which shares statistical strength across multiple tasks. The key challenge is to parametrize this multi-task sharing. For this, we introduce a new representation of programs based on combinatory logic and provide an MCMC algorithm that can perform safe program transformations on this representation to reveal shared inter-program substructures.
Physics, Topology, Logic and Computation: A Rosetta Stone
2009
Abstract

Cited by 12 (1 self)
Category theory is a very general formalism, but there is a certain special way that physicists use categories which turns out to have close analogues in topology, logic and computation. A category has objects and morphisms, which represent things and ways to go between things. In physics, the objects are often physical systems, and the morphisms are processes turning a state of one physical system into a state of another system — perhaps
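The objects-and-morphisms picture the abstract sketches has a minimal concrete reading, with Python functions playing the role of morphisms (processes) between objects (types); this is an illustration of composition and identity only, not the paper's categorical machinery:

```python
# Morphisms as functions, objects as (implicit) types; the categorical
# structure is composition plus identities.

def compose(g, f):
    """Morphism composition g . f: apply process f, then process g."""
    return lambda x: g(f(x))

identity = lambda x: x

encode = lambda s: [ord(c) for c in s]    # a "process" str -> list[int]
reverse = lambda xs: xs[::-1]             # a "process" list -> list

pipeline = compose(reverse, encode)
print(pipeline("ab"))                                    # [98, 97]
print(compose(identity, encode)("ab") == encode("ab"))   # True (identity law)
```

The Rosetta Stone analogy is that the same composition structure reappears when morphisms are read as physical processes, cobordisms, proofs, or programs.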
Applying Universal Algebra to Lambda Calculus
2008
Abstract

Cited by 10 (4 self)
The aim of this paper is twofold. On the one hand, we survey the knowledge acquired over the last ten years about the lattice of all λ-theories (= equational extensions of untyped λ-calculus) and the models of lambda calculus via universal algebra. This includes positive or negative answers to several questions raised in these years, as well as several independent results, the state of the art on the long-standing open questions concerning the representability of λ-theories as theories of models, and 26 open problems. On the other hand, against the common belief, we show that lambda calculus and combinatory logic satisfy interesting algebraic properties. In fact, the Stone representation theorem for Boolean algebras can be generalized to combinatory algebras and λ-abstraction algebras. In every combinatory and λ-abstraction algebra there is a Boolean algebra of central elements (playing the role of idempotent elements in rings). Central elements are used to represent any combinatory and λ-abstraction algebra as a weak Boolean product of directly indecomposable algebras (i.e., algebras which cannot be decomposed as the Cartesian product of two other nontrivial algebras). Central elements are also used to provide applications of the representation theorem to lambda calculus. We show that the indecomposable semantics (i.e., the semantics of lambda calculus given in terms of models of lambda calculus which are directly indecomposable as combinatory algebras) includes the continuous, stable and strongly stable semantics, and the term models of all semi-sensible λ-theories. In one of the main results of the paper we show that the indecomposable semantics is equationally incomplete, and that this incompleteness is as wide as possible.
Verifying the Correctness of Compiler Transformations on Basic Blocks using Abstract Interpretation
In Symposium on Partial Evaluation and Semantics-Based Program Manipulation (PEPM '91), 1991
Abstract

Cited by 9 (0 self)
Timothy S. McNerney, Thinking Machines Corporation, 245 First Street, Cambridge, MA 02142 (TimMcN@Think.COM). We seek to develop thorough and reliable methods for testing compiler transformations by systematically generating a set of test cases, and then, for each case, automatically proving that the transformation preserves correctness. We have implemented a specialized program equivalence prover for the domain of assembly language programs emitted by the Connection Machine Fortran compiler and targeted for the CM-2 massively parallel SIMD computer. Using abstract interpretation, the prover removes details such as register and stack usage, as well as explicit evaluation order within functional blocks, thereby reducing the problem to a trivial tree comparison. By performing limited loop unrolling, the prover also verifies that the compiler transformation preserves the inductive properties of simple loops. We have used this prover to successfully validate the re...
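The reduction the abstract describes, erasing register usage and evaluation order so that equivalence becomes a tree comparison, can be sketched on a hypothetical mini-ISA (not the CM Fortran compiler's actual output; sorting operands of commutative ops is our simplifying stand-in for the prover's order-erasure):

```python
# Toy symbolic execution: map registers to expression trees, discarding
# register names; canonicalize commutative operand order to discard
# evaluation order. Instruction format: (op, dst, src...).

def symbolic_exec(block):
    regs = {}
    for op, dst, *srcs in block:
        if op == "load":
            regs[dst] = srcs[0]                     # symbolic memory cell
        else:                                       # commutative ALU op
            regs[dst] = (op, tuple(sorted((regs[s] for s in srcs), key=repr)))
    return regs

# The same computation with different registers and instruction order:
a = [("load", "r1", "x"), ("load", "r2", "y"), ("add", "r3", "r1", "r2")]
b = [("load", "r7", "y"), ("load", "r5", "x"), ("add", "r2", "r5", "r7")]

print(symbolic_exec(a)["r3"] == symbolic_exec(b)["r2"])   # True
```

Both blocks abstract to the same tree, ('add', ('x', 'y')), so a structural comparison certifies the transformation despite the surface differences in register allocation and scheduling.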