Results 1 - 8 of 8
Some directed graph algorithms and their application to pointer analysis (work in progress), 2004
Abstract

Cited by 10 (3 self)
This thesis is focused on improving execution time and precision of scalable pointer analysis. Such an analysis statically determines the targets of all pointer variables in a program. We formulate the analysis as a directed graph problem, where the solution can be obtained by a computation similar, in many ways, to transitive closure. As with transitive closure, identifying strongly connected components and transitive edges offers significant gains. However, our problem differs in that the computation can result in new edges being added to the graph and, hence, dynamic algorithms are needed to efficiently identify these structures. Thus, pointer analysis has often been likened to the dynamic transitive closure problem. Two new algorithms for dynamically maintaining the topological order of a directed graph are presented. The first is a unit change algorithm, meaning the solution must be recomputed immediately following an edge insertion. While this has a marginally inferior worst-case time bound, compared with a previous solution, it is far simpler to implement and has fewer restrictions. For these reasons, we find it to be faster in practice and provide an experimental study over random graphs to support this. Our second is a batch algorithm, meaning the solution can be updated after
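The unit change idea described above can be sketched concretely. The following is an illustrative Python rendering of a unit-change dynamic topological order algorithm in the spirit of the thesis (all names are invented; this is a sketch, not the thesis's exact algorithm): on inserting an edge (x, y) that violates the current order, the nodes affected between the positions of y and x are discovered and locally reordered.

```python
class DynTopoOrder:
    def __init__(self, n):
        self.out = [set() for _ in range(n)]   # successors of each node
        self.inc = [set() for _ in range(n)]   # predecessors of each node
        self.ord = list(range(n))              # ord[v]: position of v in the order

    def _fwd(self, v, ub, seen):
        # collect nodes reachable from v inside the affected region;
        # reaching position ub (the source x) means a cycle
        seen.add(v)
        for w in self.out[v]:
            if self.ord[w] == ub:
                raise ValueError("insertion would create a cycle")
            if w not in seen and self.ord[w] < ub:
                self._fwd(w, ub, seen)

    def _bwd(self, v, lb, seen):
        # collect nodes that reach v inside the affected region
        seen.add(v)
        for u in self.inc[v]:
            if u not in seen and self.ord[u] > lb:
                self._bwd(u, lb, seen)

    def add_edge(self, x, y):
        lb, ub = self.ord[y], self.ord[x]
        if lb < ub:                            # order violated: repair locally
            dF, dB = set(), set()
            self._fwd(y, ub, dF)               # y and its affected descendants
            self._bwd(x, lb, dB)               # x and its affected ancestors
            moved = sorted(dF | dB, key=self.ord.__getitem__)
            slots = sorted(self.ord[v] for v in moved)
            # reuse the freed positions: dB before dF, relative order kept
            new = sorted(dB, key=self.ord.__getitem__) + \
                  sorted(dF, key=self.ord.__getitem__)
            for v, s in zip(new, slots):
                self.ord[v] = s
        self.out[x].add(y)
        self.inc[y].add(x)

g = DynTopoOrder(4)
g.add_edge(0, 1)
g.add_edge(1, 2)
g.add_edge(3, 0)                               # violates the order: repair
assert g.ord[3] < g.ord[0] < g.ord[1] < g.ord[2]
```

Only the region between the two endpoints is ever touched, which is what makes such unit-change algorithms attractive for the incremental edge additions arising in pointer analysis.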
Amortized Resource Analysis with Polymorphic Recursion and Partial Big-Step Operational Semantics (Extended Version)
Abstract

Cited by 4 (3 self)
This paper studies the problem of statically determining upper bounds on the resource consumption of first-order functional programs. Previous work approached the problem with an automatic type-based amortized analysis for polynomial resource bounds. The analysis is parametric in the resource and can be instantiated to heap space, stack space, or clock cycles. Experiments with a prototype implementation have shown that programs are analyzed efficiently and that the computed bounds exactly match the measured worst-case resource behavior for many functions. This paper describes the inference algorithm that is used in the implementation of the system. It can deal with resource-polymorphic recursion, which is required in the type derivations of many functions. The computation of the bounds is fully automatic if a maximal degree of the polynomials is given. The soundness of the inference is proved with respect to a novel operational semantics for partial evaluations, showing that the inferred bounds hold for terminating as well as non-terminating computations. A corollary is that runtime bounds also establish the termination of programs.
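As a hand-worked instance of the kind of bound such an analysis infers, consider metering list-cell allocations in append. The sketch below is illustrative only: the metering and function names are invented, and the real system derives the bound automatically from type annotations rather than by instrumentation.

```python
cost = 0  # metered resource: number of allocated list cells

def cons(x, xs):
    global cost
    cost += 1            # allocating one cell costs one unit
    return [x] + xs

def append(xs, ys):
    # amortized analysis assigns potential 1 to each element of xs,
    # yielding the linear bound: cost(append xs ys) <= len(xs)
    if not xs:
        return ys
    return cons(xs[0], append(xs[1:], ys))

result = append([1, 2, 3], [4, 5])
assert result == [1, 2, 3, 4, 5]
assert cost == 3          # the inferred bound len(xs) is exact here
```

The point of the paper's system is that the annotation `1 * len(xs)` (and higher-degree polynomial analogues) is found by type inference, not by running the program.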
Operational semantics using the partiality monad
In: International Conference on Functional Programming 2012, ACM Press, 2012
Abstract

Cited by 3 (0 self)
The operational semantics of a partial, functional language is often given as a relation rather than as a function. The latter approach is arguably more natural: if the language is functional, why not take advantage of this when defining the semantics? One can immediately see that a functional semantics is deterministic and, in a constructive setting, computable. This paper shows how one can use the coinductive partiality monad to define big-step or small-step operational semantics for lambda calculi and virtual machines as total, computable functions (total definitional interpreters). To demonstrate that the resulting semantics are useful, type soundness and compiler correctness results are also proved. The results have been implemented and checked using Agda, a dependently typed programming language and proof assistant.
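The flavor of a total definitional interpreter can be sketched outside Agda by making the delay structure explicit. Below is an illustrative Python analogue (all names invented) in which 'later' thunks stand in for the coinductive partiality monad and a fuel-bounded driver stands in for unbounded unfolding:

```python
# Terms: ('var', i) with de Bruijn indices, ('lam', body), ('app', f, a)
# Values: ('clo', body, env)

def now(v):   return ('now', v)
def later(k): return ('later', k)      # a suspended computation step

def bind(c, k):
    # sequence a possibly-delayed computation with its continuation
    if c[0] == 'now':
        return k(c[1])
    return later(lambda: bind(c[1](), k))

def ev(t, env):
    tag = t[0]
    if tag == 'var':
        return now(env[t[1]])
    if tag == 'lam':
        return now(('clo', t[1], env))
    # application: evaluate both sides, then guard the recursive call
    # with 'later' so the interpreter is productive even on divergence
    return bind(ev(t[1], env), lambda f:
           bind(ev(t[2], env), lambda a:
           later(lambda: ev(f[1], [a] + f[2]))))

def run(c, fuel):
    # unfold at most 'fuel' delay steps; None models "still running"
    while fuel and c[0] == 'later':
        c, fuel = c[1](), fuel - 1
    return c[1] if c[0] == 'now' else None

I = ('lam', ('var', 0))
assert run(ev(('app', I, I), []), 10) == ('clo', ('var', 0), [])

w = ('lam', ('app', ('var', 0), ('var', 0)))
assert run(ev(('app', w, w), []), 100) is None   # Omega never finishes
```

Evaluating the divergent term Omega productively yields an unbounded chain of 'later' steps, so the interpreter itself is a total function even though `run` may report "still running".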
Mixing Induction and Coinduction
, 2009
Abstract

Cited by 2 (0 self)
Purely inductive definitions give rise to tree-shaped values where all branches have finite depth, and purely coinductive definitions give rise to values where all branches are potentially infinite. If this is too restrictive, then an alternative is to use mixed induction and coinduction. This technique appears to be fairly unknown. The aim of this paper is to make the technique more widely known, and to present several new applications of it, including a parser combinator library which guarantees termination of parsing, and a method for combining coinductively defined inference systems with rules like transitivity. The developments presented in the paper have been formalised and checked in Agda, a dependently typed programming language and proof assistant.
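A toy rendering of the mixed style (illustrative only, with invented names): a stream is coinductive and may unfold forever, but reaching the next element must take an inductive, finite number of internal 'skip' steps. Python cannot carry that termination evidence in a type, so a crude fuel bound stands in for it here:

```python
def nats(n=0):
    # coinductive part: an infinite stream, represented as a thunk
    return lambda: ('emit', n, nats(n + 1))

def filt(p, s):
    # filtering may insert 'skip' steps between emitted elements
    def step():
        tag, x, rest = s()
        if tag == 'emit' and p(x):
            return ('emit', x, filt(p, rest))
        return ('skip', None, filt(p, rest))
    return step

def head(s, fuel=10000):
    # inductive part: peel finitely many 'skip's to reach the next
    # 'emit'; in the mixed datatype this finiteness would be enforced
    # by the type, here only by the fuel bound
    while fuel:
        tag, x, rest = s()
        if tag == 'emit':
            return x, rest
        s, fuel = rest, fuel - 1
    raise RuntimeError("no next element found within the bound")

def take(k, s):
    out = []
    for _ in range(k):
        x, s = head(s)
        out.append(x)
    return out

assert take(3, filt(lambda n: n % 7 == 0, nats(1))) == [7, 14, 21]
```

In the Agda formalisation the 'emit'/'skip' distinction is a mixed datatype: the stream constructor is coinductive while the run of skips before each emission is inductive, which is exactly what makes consumers like `head` provably total.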
ropas.kaist.ac.kr, 2001
Abstract
1.1.1 Objectives of the originally proposed research. Our goal is to achieve compiler technologies suited for the global, mobile computing environment of the future. In particular, we will focus on the following three compilation problems for higher-order, typed programming languages like ML:
• the compiler must generate safe code: not only must the compiler ensure that the compiled code will not damage the host, but the host must be able to verify the established safety of the incoming code.
• the compiler must generate small code: the code size must be as small as possible, in order to minimize the delivery cost over the network. Compact code will move swiftly over the network, arriving at the host faster than competing code.
• the compiler must generate smart code: the code must be able to tailor itself to the most common inputs that occur during its use at the host.
Our research position is to aggressively adopt recent progress in programming language theories into a set of practical compilation techniques. The major thrust for
Correctness and Completeness of CLP Semantics revisited with (Co)Induction
Abstract
We propose a reformulation of constraint logic program semantics in terms of positive and negative semantics, using a uniform inductive framework. It is a natural and elegant way to express and study correctness and completeness results. In particular, we state a completeness result for negative semantics by using certain infinite sets of constraints. This theoretical framework is an original extension of the "Grammatical View of Logic Programming".
Design, Languages
Abstract
We show how to combine a general-purpose type system for an existing language with support for programming with binders and contexts, by refining the type system of ML with a restricted form of dependent types where index objects are drawn from contextual LF. This allows the user to specify formal systems within the logical framework LF and index ML types with contextual LF objects. Our language design keeps the index language generic, requiring only decidability of equality for the index language, which provides a modular design. To illustrate the elegance and effectiveness of our language, we give programs for closure conversion and normalization by evaluation. Our three key technical contributions are: 1) a bidirectional type system for our core language, centered around refinement substitutions instead of constraint solving; as a consequence, type checking is decidable and easy to trust, although constraint solving may be undecidable. 2) a big-step, environment-based operational semantics which lends itself to efficient implementation. 3) a proof that our language is type-safe; we have mechanized our theoretical development in the proof assistant Coq using the fresh approach to binding.
Pretty-Big-Step Semantics
Author manuscript, published in 22nd European Symposium on Programming (ESOP), 2013
Abstract
In spite of the popularity of small-step semantics, big-step semantics remain in use by many researchers. However, big-step semantics suffer from a serious duplication problem, which appears as soon as the semantics account for exceptions and/or divergence. In particular, many premises need to be copy-pasted across several evaluation rules. This duplication problem, which is particularly visible when scaling up to full-blown languages, results in formal definitions growing far bigger than necessary. Moreover, it leads to unsatisfactory redundancy in proofs. In this paper, we address the problem by introducing pretty-big-step semantics. Pretty-big-step semantics preserve the spirit of big-step semantics, in the sense that terms are directly related to their results, but they eliminate the duplication associated with big-step semantics.
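A loose Python analogue of the factoring (not the paper's formal rules; all names are invented): writing each evaluation rule as a chain of intermediate steps lets a single generic 'abort' case propagate errors, instead of duplicating an error premise in every rule.

```python
ERR = ('err',)

def step(outcome, k):
    # the single, generic abort rule: an error short-circuits every
    # intermediate evaluation step, so no rule repeats the premise
    return outcome if outcome == ERR else k(outcome[1])

def eval_(t):
    tag = t[0]
    if tag == 'lit':
        return ('val', t[1])
    if tag == 'add':
        # the intermediate forms of the pretty-big-step rules become
        # nested continuations; error propagation is written once
        return step(eval_(t[1]), lambda v1:
               step(eval_(t[2]), lambda v2: ('val', v1 + v2)))
    if tag == 'div':
        return step(eval_(t[1]), lambda v1:
               step(eval_(t[2]), lambda v2:
                    ERR if v2 == 0 else ('val', v1 // v2)))

assert eval_(('add', ('lit', 1), ('lit', 2))) == ('val', 3)
assert eval_(('div', ('lit', 1), ('lit', 0))) == ERR
assert eval_(('add', ('div', ('lit', 1), ('lit', 0)), ('lit', 2))) == ERR
```

In a plain big-step presentation, the `add` case would need separate rules for "left operand errors" and "right operand errors"; the pretty-big-step style removes that duplication by threading every rule through the one `step` combinator.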