Results 1–10 of 18
Information flow inference for ML
 ACM Trans. Program. Lang. Syst.
Abstract

Cited by 221 (4 self)
This paper presents a type-based information flow analysis for a call-by-value λ-calculus equipped with references, exceptions and let-polymorphism, which we refer to as Core ML. The type system is constraint-based and has decidable type inference. Its noninterference proof is reasonably lightweight, thanks to the use of a number of orthogonal techniques. First, a syntactic segregation between values and expressions allows a lighter formulation of the type system. Second, noninterference is reduced to subject reduction for a non-standard language extension. Lastly, a semi-syntactic approach to type soundness allows dealing with constraint-based polymorphism separately.
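The label-propagation idea behind such analyses can be roughly illustrated with a dynamic sketch (hypothetical names; the paper's analysis is a static, constraint-based type system, not runtime tracking):

```python
# Toy dynamic information-flow tracking: every value carries a security
# label, and operations propagate the join of their operands' labels.
# Noninterference then means low-labeled results never depend on
# high-labeled inputs. Names (Labeled, add) are illustrative only.

HIGH, LOW = "high", "low"

def join(l1, l2):
    # A combined value is as secret as its most secret input.
    return HIGH if HIGH in (l1, l2) else LOW

class Labeled:
    def __init__(self, value, label):
        self.value, self.label = value, label

def add(a, b):
    # Arithmetic propagates the join of the operands' labels.
    return Labeled(a.value + b.value, join(a.label, b.label))

secret = Labeled(42, HIGH)
public = Labeled(1, LOW)
result = add(secret, public)   # tainted: label is "high"
```

A static type system enforces the same discipline at compile time, with no runtime labels at all.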
Analysis and Caching of Dependencies
, 1996
Abstract

Cited by 70 (6 self)
We address the problem of dependency analysis and caching in the context of the λ-calculus. The dependencies of a λ-term are (roughly) the parts of the term that contribute to the result of evaluating it. We introduce a mechanism for keeping track of dependencies, and discuss how to use these dependencies in caching.
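The dependency-plus-caching idea can be sketched in a few lines; `Env` and `make_cached` are hypothetical names, and this mechanism is far simpler than the paper's labelled calculus:

```python
# Sketch: record which parts of the input an evaluation actually reads,
# and reuse the cached result whenever only unread parts have changed.

class Env(dict):
    """An environment that records which keys are read."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.reads = set()
    def read(self, key):
        self.reads.add(key)
        return self[key]

def make_cached(f):
    entries = []  # (dependencies, snapshot of their values, result)
    def g(env):
        for deps, snapshot, result in entries:
            if all(env[k] == snapshot[k] for k in deps):
                return result          # cache hit: dependencies unchanged
        env.reads = set()
        result = f(env)
        entries.append((frozenset(env.reads),
                        {k: env[k] for k in env.reads}, result))
        return result
    return g

prog = make_cached(lambda e: e.read("x") + 1)   # depends only on "x"
env = Env(x=1, y=100)
first = prog(env)        # evaluates: 2
env["y"] = 0             # irrelevant change to an unread part
second = prog(env)       # cache hit, still 2
```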
Adaptive Functional Programming
 In Proceedings of the 29th Annual ACM Symposium on Principles of Programming Languages
, 2001
Abstract

Cited by 64 (23 self)
An adaptive computation maintains the relationship between its input and output as the input changes. Although various techniques for adaptive computing have been proposed, they remain limited in their scope of applicability. We propose a general mechanism for adaptive computing that enables one to make any purely functional program adaptive. We show …
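The modifiable-reference mechanism at the heart of this approach can be caricatured in a few lines; `Mod` and its methods are hypothetical stand-ins, not the paper's ML interface, and real change propagation is driven by a dependence graph rather than eager callbacks:

```python
# Toy "modifiable reference": reads register a dependent computation,
# and a write reruns every registered reader (change propagation).

class Mod:
    def __init__(self, value):
        self.value = value
        self.readers = []            # computations to rerun on change
    def read(self, use):
        self.readers.append(use)
        use(self.value)              # run once with the current value
    def write(self, value):
        self.value = value
        for use in self.readers:     # propagate the change
            use(value)

out = []
m = Mod(3)
m.read(lambda v: out.append(v * 2))  # dependent computation
m.write(5)                           # propagation recomputes it
# out is now [6, 10]
```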
Selective Memoization
Abstract

Cited by 44 (19 self)
We present a framework for applying memoization selectively. The framework provides programmer control over equality, space usage, and identification of precise dependences so that memoization can be applied according to the needs of an application. Two key properties of the framework are that it is efficient and that it yields programs whose performance can be analyzed using standard techniques. We describe the framework in the context of a functional language and an implementation as an SML library. The language is based on a modal type system and allows the programmer to express programs that reveal their true data dependences when executed. The SML implementation cannot support this modal type system statically, but instead employs runtime checks to ensure correct usage of primitives.
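Programmer-controlled memoization can be approximated with an ordinary decorator that takes a user-supplied key function; this is only a sketch of the idea (the paper enforces correct usage through a modal type system, not by convention):

```python
# Selective memoization sketch: the caller supplies the key function,
# so equality and the precise dependences used for cache lookup are
# under programmer control. `memo` is a hypothetical helper.

def memo(key):
    def wrap(f):
        table = {}
        def g(*args):
            k = key(*args)
            if k not in table:
                g.misses += 1          # count actual evaluations
                table[k] = f(*args)
            return table[k]
        g.misses = 0
        return g
    return wrap

# Only the list's length matters here, so memoize on len(xs) alone:
@memo(key=lambda xs: len(xs))
def padded_width(xs):
    return len(xs) + 2
```

Two calls with different lists of the same length then share one cached evaluation.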
Deriving incremental programs
, 1993
Abstract

Cited by 39 (21 self)
A systematic approach is given for deriving incremental programs from non-incremental programs written in a standard functional programming language. We exploit a number of program analysis and transformation techniques and domain-specific knowledge, centered around effective utilization of caching, in order to provide a degree of incrementality not otherwise achievable by a generic incremental evaluator.
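A hand-worked instance of the derivation idea (names hypothetical; the paper derives such incremental versions systematically rather than by hand):

```python
# From a non-incremental function, derive an incremental version that
# reuses a cached previous result under a known input change, here the
# increment "append one element".

def total(xs):                  # non-incremental: O(n) every call
    return sum(xs)

def total_inc(prev_total, new_elem):
    # Incremental w.r.t. appending new_elem: O(1) using the cached result.
    return prev_total + new_elem

xs = [1, 2, 3]
cached = total(xs)              # computed once and cached: 6
xs.append(4)                    # the input increment
fast = total_inc(cached, 4)     # 10, without re-summing the list
```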
An experimental analysis of self-adjusting computation
 In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI)
, 2006
Abstract

Cited by 35 (19 self)
Self-adjusting computation uses a combination of dynamic dependence graphs and memoization to efficiently update the output of a program as the input changes incrementally or dynamically over time. Related work showed various theoretical results, indicating that the approach can be effective for a reasonably broad range of applications. In this article, we describe algorithms and implementation techniques to realize self-adjusting computation and present an experimental evaluation of the proposed approach on a variety of applications, ranging from simple list primitives to more sophisticated computational geometry algorithms. The results of the experiments show that the approach is effective in practice, often offering orders-of-magnitude speedups over recomputing the output from scratch. We believe this is the first experimental evidence that incremental computation of any type is effective in practice for a reasonably broad set of applications.
Imperative self-adjusting computation
 In POPL ’08: Proceedings of the 35th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages
, 2008
Abstract

Cited by 27 (16 self)
Recent work on self-adjusting computation showed how to systematically write programs that respond efficiently to incremental changes in their inputs. The idea is to represent changeable data using modifiable references, i.e., a special data structure that keeps track of dependencies between read and write operations, and to let computations construct traces that later, after changes have occurred, can drive a change propagation algorithm. The approach has been shown to be effective for a variety of algorithmic problems, including some for which ad hoc solutions had previously remained elusive. All previous work on self-adjusting computation, however, relied on a purely functional programming model. In this paper, we show that it is possible to remove this limitation and support modifiable references that can be written multiple times. We formalize this using a language AIL for which we define evaluation and change-propagation semantics. AIL closely resembles a traditional higher-order imperative programming language. For AIL we state and prove consistency, i.e., the property that although the semantics is inherently nondeterministic, different evaluation paths will still give observationally equivalent results. In the imperative setting, where pointer graphs in the store can form cycles, our previous proof techniques do not apply. Instead, we make use of a novel form of a step-indexed logical relation that handles modifiable references. We show that AIL can be realized efficiently by describing implementation strategies whose overhead is provably constant-time per primitive. When the number of reads and writes per modifiable is bounded by a constant, we can show that change propagation becomes as efficient as it was in the pure case. The general case incurs a slowdown that is logarithmic in the maximum number of such operations. We use DFS and related algorithms on graphs as our running examples and prove that they respond to insertions and deletions of edges efficiently.
Dynamic programming via static incrementalization
 In Proceedings of the 8th European Symposium on Programming
, 1999
Abstract

Cited by 26 (12 self)
Dynamic programming is an important algorithm design technique. It is used for solving problems whose solutions involve recursively solving subproblems that share sub-subproblems. While a straightforward recursive program solves common sub-subproblems repeatedly and often takes exponential time, a dynamic programming algorithm solves every sub-subproblem just once, saves the result, reuses it when the sub-subproblem is encountered again, and takes polynomial time. This paper describes a systematic method for transforming programs written as straightforward recursions into programs that use dynamic programming. The method extends the original program to cache all possibly computed values, incrementalizes the extended program with respect to an input increment to use and maintain all cached results, prunes out cached results that are not used in the incremental computation, and uses the resulting incremental program to form an optimized new program. Incrementalization statically exploits semantics of both control structures and data structures and maintains as invariants equalities characterizing cached results. The principle underlying incrementalization is general for achieving drastic program speedups. Compared with previous methods that perform memoization or tabulation, the method based on incrementalization is more powerful and systematic. It has been implemented and applied to numerous problems and succeeded on all of them.
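The exponential-to-polynomial payoff can be shown by hand on a classic shared-subproblem example (this is manual tabulation, not the paper's automatic static incrementalization):

```python
# A straightforward recursion re-solves shared sub-subproblems and is
# exponential; caching every computed value turns it into a linear-time
# dynamic program.

def fib_naive(n):
    # recomputes fib(n-2) inside both recursive branches
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_dp(n):
    # cache all possibly computed values; each subproblem solved once
    table = {0: 0, 1: 1}
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```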
A Fine-Grained Notation for Lambda Terms and Its Use in Intensional Operations
 Journal of Functional and Logic Programming
, 1996
Abstract

Cited by 25 (9 self)
We discuss issues relevant to the practical use of a previously proposed notation for lambda terms in contexts where the intensions of such terms have to be manipulated. This notation uses the 'nameless' scheme of de Bruijn, includes expressions for encoding terms together with substitutions to be performed on them and contains a mechanism for combining such substitutions so that they can be effected in a common structure traversal. The combination mechanism is a general one and consequently difficult to implement. We propose a simplification to it that retains its functionality in situations that occur commonly in β-reduction. We then describe a system for annotating terms to determine if they can be affected by substitutions generated by external β-contractions. These annotations can lead to a conservation of space and time in implementations of reduction by permitting substitutions to be performed trivially in certain situations. The use of the resulting notation in the reduction...
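The underlying de Bruijn ("nameless") representation can be sketched as follows; this minimal shift/substitute treatment omits entirely the suspension expressions and substitution-merging that the notation above is about:

```python
# De Bruijn terms: ("var", i) | ("lam", body) | ("app", f, a).
# Bound variables are indices counting enclosing lambdas, so no names
# and no alpha-renaming are needed.

def shift(t, d, cutoff=0):
    """Shift free indices >= cutoff by d."""
    tag = t[0]
    if tag == "var":
        return ("var", t[1] + d) if t[1] >= cutoff else t
    if tag == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Substitute s for index j in t."""
    tag = t[0]
    if tag == "var":
        return s if t[1] == j else t
    if tag == "lam":
        return ("lam", subst(t[1], j + 1, shift(s, 1)))
    return ("app", subst(t[1], j, s), subst(t[2], j, s))

def beta(redex):
    """One beta-contraction of ("app", ("lam", body), arg)."""
    (_, (_, body), arg) = redex
    return shift(subst(body, 0, shift(arg, 1)), -1)

# (λ. 0) y  →  y, with y encoded as the free index 5:
identity_redex = ("app", ("lam", ("var", 0)), ("var", 5))
```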
Discovering auxiliary information for incremental computation
 In Symp. on Princ. of Prog. Lang
, 1996
Abstract

Cited by 22 (12 self)
This paper presents program analyses and transformations that discover a general class of auxiliary information for any incremental computation problem. Combining these techniques with previous techniques for caching intermediate results, we obtain a systematic approach that transforms non-incremental programs into efficient incremental programs that use and maintain useful auxiliary information as well as useful intermediate results. The use of auxiliary information allows us to achieve a greater degree of incrementality than otherwise possible. Applications of the approach include strength reduction in optimizing compilers and finite differencing in transformational programming.
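A tiny hand-worked example of why auxiliary information matters (hypothetical names; the paper discovers such information automatically):

```python
# To incrementally maintain the average of a list under appends, the
# previous average alone is not enough; the auxiliary information
# (the element count) must be used and maintained alongside it.

def average(xs):                         # non-incremental: rescans the list
    return sum(xs) / len(xs)

def average_inc(prev_avg, prev_len, new_elem):
    # Incremental version using and maintaining the auxiliary count.
    new_len = prev_len + 1
    return (prev_avg * prev_len + new_elem) / new_len, new_len

xs = [2, 4]
avg, n = average(xs), len(xs)            # 3.0, with count 2
xs.append(9)                             # input change: one appended element
avg, n = average_inc(avg, n, 9)          # updated without rescanning xs
```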