Results 1–10 of 11
Interactive code generation
Master’s thesis, EPFL, 2013
Abstract
Cited by 3 (3 self)
This thesis presents two approaches to code generation (synthesis), along with a discussion of other related and influential works, their ideas, and their relations to these approaches. The common goal of both approaches is to assist developers efficiently and effectively by synthesizing code for them, saving them effort. The two approaches differ in the nature of the synthesis problem they solve: they target different scenarios, apply different sets of techniques, and can even be complementary. The first approach relies on the typing information of a program to define and drive the synthesis process; its main requirement is that synthesized solutions have the desired type. This can aid developers in many scenarios of modern software development, which typically involves composing functionality from existing libraries that expose a rich API. Our observation is that the developer usually does not know the right combination of API calls but does know the type of the desired expression. With the basis being the well-known type inhabitation …
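The type-driven search this abstract describes can be sketched as a tiny enumerator: given a table of API signatures, it composes calls whose result has the requested type. Everything here is a hypothetical illustration; the function names (`open`, `readLines`, `concat`, `parsePath`) and the string-based types are invented for this sketch, not taken from the thesis, whose actual system works over real library APIs.

```python
from itertools import product

# Toy API table: name -> (argument types, result type). All hypothetical.
API = {
    "open":      (("Path",),         "File"),
    "readLines": (("File",),         "List[String]"),
    "concat":    (("List[String]",), "String"),
    "parsePath": (("String",),       "Path"),
}

def inhabitants(goal, env, depth):
    """Enumerate expressions of type `goal`, built from variables in `env`
    (name -> type) and API calls, nested at most `depth` levels deep."""
    for var, ty in env.items():
        if ty == goal:
            yield var
    if depth == 0:
        return
    for fn, (arg_types, result) in API.items():
        if result != goal:
            continue
        # Synthesize each argument recursively, then emit the call.
        pools = [list(inhabitants(a, env, depth - 1)) for a in arg_types]
        for combo in product(*pools):
            yield f"{fn}({', '.join(combo)})"

# The developer knows only the desired type, not the call chain:
print(list(inhabitants("String", {"p": "Path"}, 3)))
```

With only a variable `p : Path` in scope, the enumerator discovers the composition `concat(readLines(open(p)))` on its own, which is the kind of API chain the thesis argues developers know the type of but not the shape of.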
Very simple Chaitin machines for concrete AIT
Fundamenta Informaticae, 2005
Abstract
Cited by 3 (2 self)
Abstract. In 1975, Chaitin introduced his celebrated Omega number, the halting probability of a universal Chaitin machine, a universal Turing machine with a prefix-free domain. The Omega number’s bits are algorithmically random: there is no reason the bits should be the way they are, if we define “reason” to be a computable explanation smaller than the data itself. Since that time, only two explicit universal Chaitin machines have been proposed, both by Chaitin himself. Concrete algorithmic information theory involves the study of particular universal Turing machines, about which one can state theorems with specific numerical bounds rather than terms like O(1). We present several new tiny Chaitin machines (those with a prefix-free domain) suitable for the study of concrete algorithmic information theory. One of the machines, which we call Keraia, is a binary encoding of lambda calculus based on a curried lambda operator. Source code is included in the appendices. We also give an algorithm for restricting the domain of blank-endmarker machines to a prefix-free domain over an alphabet that does not include the endmarker; this allows one to take many universal Turing machines and construct universal Chaitin machines from them.
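The two defining ingredients of the abstract, a prefix-free domain and the halting probability Ω = Σ 2^(−|p|) over halting programs p, can be checked mechanically on a toy scale. The four-program domain below is invented for illustration and is not one of Chaitin's machines; the code verifies prefix-freeness and computes the partial Kraft sum, which is a lower bound on Ω for any machine with that domain.

```python
from fractions import Fraction

def is_prefix_free(programs):
    """True iff no program is a proper prefix of another: the defining
    property of a Chaitin machine's domain. In sorted order, any prefix
    relationship must appear between lexicographic neighbours."""
    ps = sorted(programs)
    return all(not b.startswith(a) for a, b in zip(ps, ps[1:]))

def omega_lower_bound(halting_programs):
    """Partial sum of 2^-|p| over known halting programs: a lower bound
    on the halting probability. Kraft's inequality guarantees the full
    sum is at most 1 exactly when the domain is prefix-free."""
    return sum(Fraction(1, 2 ** len(p)) for p in halting_programs)

# Hypothetical toy domain, for illustration only.
domain = ["00", "01", "100", "101"]
assert is_prefix_free(domain)
print(omega_lower_bound(domain))   # 1/4 + 1/4 + 1/8 + 1/8 = 3/4
```

Enumerating longer and longer halting programs yields better lower bounds; the algorithmic randomness of Ω's bits means this convergence is uncomputably slow for a universal machine.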
Truth and Light: Physical Algorithmic Randomness
2005
Abstract
Cited by 3 (2 self)
This thesis examines some problems related to Chaitin's Ω number. In the first section, we describe several new minimalist prefix-free machines suitable for the study of concrete algorithmic information theory; the halting probabilities of these machines are all Ω numbers. In the second part, we show that when such a sequence is the result given by a measurement of a system, the system itself can be shown to satisfy an uncertainty principle equivalent to Heisenberg's uncertainty principle. This uncertainty principle also implies Chaitin's strongest form of incompleteness. In the last part, we show that Ω can be written as an infinite product over halting programs; that there exists a "natural," or base-free, formulation that does not (directly) depend on the alphabet of the universal prefix-free machine; that Tadaki's generalized halting probability is well-defined even for arbitrary universal Turing machines and the plain complexity; and finally, that the natural generalized halting probability can be written as an infinite product over primes and has the form of a zeta function whose zeros encode halting information. We conclude with some speculation about physical systems in which partial randomness could arise, and identify many open problems.
Typed Self-Interpretation by Pattern Matching
Abstract
Cited by 2 (0 self)
Self-interpreters can be roughly divided into two sorts: self-recognisers, which recover the input program from a canonical representation, and self-enactors, which execute the input program. Major progress for statically-typed languages was achieved in 2009 by Rendel, Ostermann, and Hofer, who presented the first typed self-recogniser that allows representations of different terms to have different types. A key feature of their type system is a type:type rule that renders the kind system of their language inconsistent. In this paper we present the first statically-typed language that not only allows representations of different terms to have different types and supports a self-recogniser, but also supports a self-enactor. Our language is a factorisation calculus in the style of Jay and Given-Wilson: a combinatory calculus with a factorisation operator that is powerful enough to support the pattern-matching functions necessary for a self-interpreter. This allows us to avoid a type:type rule; indeed, the types of System F are sufficient. We have implemented our approach, and our experiments support the theory.
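The factorisation operator at the heart of the paper can be illustrated in the untyped setting. This is a minimal sketch assuming the standard SF-calculus rules of Jay and Given-Wilson (S x y z → x z (y z); F o m n → m when o is an atom; F (p q) m n → n p q when o factors as a compound p q); the paper's typed treatment is not reproduced here.

```python
# Terms: the atoms "S" and "F", or applications ("@", function, argument).
A = lambda f, x: ("@", f, x)

def spine(t):
    """Flatten nested applications into (head, [arguments])."""
    args = []
    while isinstance(t, tuple):
        args.insert(0, t[2])
        t = t[1]
    return t, args

def build(head, args):
    for a in args:
        head = A(head, a)
    return head

def nf(t, fuel=10_000):
    """Normal form via head reduction (fuel-bounded, since terms may diverge)."""
    head, args = spine(t)
    while fuel > 0:
        fuel -= 1
        if head == "S" and len(args) >= 3:
            x, y, z, rest = args[0], args[1], args[2], args[3:]
            head, new = spine(A(A(x, z), A(y, z)))   # S x y z -> x z (y z)
            args = new + rest
        elif head == "F" and len(args) >= 3:
            o = nf(args[0], fuel)                    # normalize, then factor
            m, n, rest = args[1], args[2], args[3:]
            if o in ("S", "F"):                      # atom:     F o m n -> m
                head, new = spine(m)
            else:                                    # compound: F (p q) m n -> n p q
                _, p, q = o
                head, new = spine(A(A(n, p), q))
            args = new + rest
        else:
            break
    return build(head, [nf(a, fuel) for a in args])

K = A("F", "F")        # F F m n -> m, so F F behaves like the K combinator
I = A(A("S", K), K)    # S K K z -> z, the usual identity
print(nf(A(I, "F")))   # -> "F"
```

The compound case is what makes the calculus a *factorisation* calculus: `F` can take a partially applied term apart into its function and argument, which is exactly the pattern-matching power the self-interpreter needs.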
Music Analysis and Kolmogorov Complexity
Abstract
Cited by 1 (1 self)
The goal of music analysis is to find the most satisfying explanations for musical works. It is proposed that this can best be achieved by attempting to write computer programs that are as short as possible and that generate representations, as detailed as possible, of the music to be explained. The theory of Kolmogorov complexity suggests that the length of such a program can be used as a measure of the complexity of the analysis that it represents. The analyst therefore needs a way to measure the length of a program so that this length reflects the quality of the analysis that the program represents. If such an effective measure of analysis quality can be found, it could be used in a system that automatically finds the optimal analysis for any passage of music. Measuring program length in terms of the number of source-code characters is shown to be problematic, and an expression is proposed that overcomes some, but not all, of these problems. It is suggested that the solutions to the remaining problems may lie either in the field of concrete Kolmogorov complexity or in the design of languages specialized for expressing musical structure.
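The core intuition, shorter generating program = better explanation, can be caricatured with off-the-shelf compression: a zlib stream is a crude, language-dependent upper bound on the Kolmogorov complexity of the data it regenerates. The note strings below are invented placeholders, not any real encoding of music, and the paper's actual proposal measures the length of purpose-written analysis programs, which this sketch does not attempt.

```python
import random
import zlib

def description_length(data: bytes) -> int:
    """Length in bytes of a zlib stream that regenerates `data`: a crude
    upper bound on its Kolmogorov complexity (up to the fixed decoder)."""
    return len(zlib.compress(data, 9))

# A surface with obvious repeated structure vs. one with almost none.
ostinato = b"C E G " * 200                             # one repeating figure
rng = random.Random(0)                                 # deterministic "noise"
noise = bytes(rng.randrange(256) for _ in range(1200))

# The patterned surface admits a far shorter "explanation".
assert description_length(ostinato) < description_length(noise)
print(description_length(ostinato), description_length(noise))
```

The gap between the two lengths is exactly the kind of signal the paper wants an analysis-quality measure to capture; its point is that raw character counts of source code distort this signal, which general-purpose compression also does.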
Luckiness and Regret in Minimum Description Length Inference
2009
Abstract
Cited by 1 (1 self)
Minimum Description Length (MDL) inference is based on the intuition that understanding the available data can be defined in terms of the ability to compress the data, i.e. to describe it in full using a shorter representation. This brief introduction discusses the design of the various codes used to implement MDL, focusing on the philosophically intriguing concepts of luckiness and regret: a good MDL code exhibits good performance in the worst case over all possible data sets, but achieves even better performance when the data turn out to be simple (although we suggest making no a priori assumptions to that effect). We then discuss how data compression relates to performance in various learning tasks, including parameter estimation, parametric and nonparametric model selection and sequential prediction of outcomes from an unknown source. Last, we briefly outline the history of MDL and its technical and philosophical relationship to other approaches to learning such as Bayesian, frequentist and prequential statistics.
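A minimal worked instance of the two-part code this introduction discusses, for Bernoulli models of a binary string: the total description length is L(model) + L(data | model), with the maximum-likelihood parameter stated at the common textbook precision of one part in n + 1. That precision choice is an assumption of this sketch, not the paper's own construction.

```python
from math import log2

def two_part_length(bits):
    """Two-part MDL code length for a binary string: bits to state the
    maximum-likelihood Bernoulli parameter (one of n+1 possible counts)
    plus the Shannon code length of the data under that parameter."""
    n, k = len(bits), sum(bits)
    l_model = log2(n + 1)            # which empirical frequency k/n
    p = k / n
    if p in (0.0, 1.0):
        l_data = 0.0                 # data is fully determined by p
    else:
        l_data = -(k * log2(p) + (n - k) * log2(1 - p))
    return l_model + l_data

skewed   = [0] * 95 + [1] * 5        # low empirical entropy: compressible
balanced = [0, 1] * 50               # p = 1/2: no savings over raw bits

print(two_part_length(skewed), two_part_length(balanced))
```

On the skewed string the code is "lucky": about 35 bits against the 100 raw bits. On the balanced string it pays a worst-case regret of roughly log2(n + 1) extra bits over the raw encoding, which is the bounded price a good MDL code accepts for its gains on simple data.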
Towards a stable definition of Kolmogorov-Chaitin complexity
2008
Abstract
Although information content is invariant up to an additive constant, the range of possible additive constants applicable to programming languages is so large that in practice it plays a major role in the actual evaluation of K(s), the Kolmogorov-Chaitin complexity of a string s. Some attempts have been made to arrive at a framework stable enough for a concrete definition of K, independent of any constant tied to a particular programming language, by appealing to the naturalness of the language in question. The aim of this paper is to present an approach to overcoming the problem by looking at a set of models of computation converging in output probability distribution, such that naturalness can be inferred, thereby providing a framework for a stable definition of K under the set of convergent models of computation.
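The "converging output distributions" idea can be made concrete at toy scale: fix two deliberately simple interpreters of binary programs (both invented here, not taken from the paper), enumerate every program up to a length bound, and count how often each output occurs. If the frequency rankings agree, the models agree about which outputs are "simple".

```python
from collections import Counter
from itertools import product

def output_distribution(interpret, max_len):
    """For each output, count the binary programs up to max_len producing it."""
    counts = Counter()
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            counts[interpret("".join(bits))] += 1
    return counts

# Two hypothetical toy "models of computation":
def run_unary(p):
    return len(p) - len(p.lstrip("1"))   # output = number of leading 1s

def run_binary(p):
    return int(p, 2)                     # output = program read as binary

d1 = output_distribution(run_unary, 6)
d2 = output_distribution(run_binary, 6)
# Both models assign more programs to the "simpler" output 0 than to 3:
assert d1[0] > d1[3] and d2[0] > d2[3]
print(d1.most_common(3), d2.most_common(3))
```

In the paper's terms, agreement between such output distributions across genuinely different models is what licenses calling a reference language "natural" and pinning down K relative to that convergent family.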
unknown title
2013
Abstract
Lambda calculus is the basis of functional programming and higher-order proof assistants. However, little is known about combinatorial properties of lambda terms, in particular about their asymptotic distribution and random generation. This paper tries to answer questions like: How many terms of a given size are there? What is a "typical" structure of a simply typable term? Despite their ostensible simplicity, these questions still remain unanswered, whereas solutions to such problems are essential for testing compilers and optimizing programs whose expected efficiency depends on the size of terms. Our approach to the aforementioned problems may later be extended to any language with bound variables, i.e., with scopes and declarations. This paper presents two complementary approaches: one, theoretical, uses complex analysis and generating functions; the other, experimental, is based on a generator of lambda terms. Thanks to de Bruijn indices, we provide three families of formulas for the number of closed lambda terms of a given size, and we give four relations between these numbers which have interesting combinatorial interpretations. As a byproduct of the counting formulas, we design an algorithm for generating λ-terms. Performed tests provide us with experimental data, like the average depth …
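One standard de Bruijn counting recurrence illustrates the kind of formula the abstract refers to. The size convention here (every variable costs 1; each abstraction or application adds 1) is an assumption of this sketch and need not match the paper's own size measure: let T(n, k) be the number of terms of size n with at most k distinct free indices.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n, k):
    """Number of lambda terms (de Bruijn representation) of size n with at
    most k distinct free indices, under the convention that a variable has
    size 1 and each abstraction or application adds 1 to the body's size."""
    if n <= 0:
        return 0
    if n == 1:
        return k                         # a variable: k possible indices
    total = T(n - 1, k + 1)              # abstraction: body under one more binder
    for i in range(1, n - 1):            # application: split the remaining size
        total += T(i, k) * T(n - 1 - i, k)
    return total

closed = [T(n, 0) for n in range(1, 6)]
print(closed)   # [0, 1, 2, 4, 13]
```

Under this convention T(2, 0) = 1 is the identity λ.1, and the closed counts 0, 1, 2, 4, 13, … grow quickly, which is what makes the asymptotics and the uniform random generation discussed above nontrivial.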