Results 1 - 9 of 9
Abstract interpretation based formal methods and future challenges (invited paper)
 In Informatics — 10 Years Back, 10 Years Ahead, volume 2000 of Lecture Notes in Computer Science, 2001
Cited by 28 (6 self)
In order to contribute to the solution of the software reliability problem, tools have been designed to analyze statically the runtime behavior of programs. Because the correctness problem is undecidable, some form of approximation is needed. The purpose of abstract interpretation is to formalize this idea of approximation. We illustrate informally the application of abstraction to the semantics of programming languages as well as to static program analysis. The main point is that in order to reason or compute about a complex system, some information must be lost; that is, the observation of executions must be either partial or at a high level of abstraction. In the second part of the paper, we compare static program analysis with deductive methods, model checking, and type inference. Their foundational ideas are briefly reviewed, and the shortcomings of these four methods are discussed, including when they should be combined. Alternatively, since program debugging is still the main program verification …
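The idea of deliberately losing information to make reasoning tractable can be sketched with the textbook sign domain. This is an illustrative construction, not code from the paper; the function names are invented here.

```python
# Illustrative sketch (not from the paper): abstracting concrete integers
# to their signs, a classic abstract interpretation example.

def sign(n):
    """Abstraction function: map a concrete integer to its sign."""
    if n > 0:
        return "+"
    if n < 0:
        return "-"
    return "0"

def abstract_mul(a, b):
    """Abstract multiplication over the sign domain: no precision lost."""
    if a == "0" or b == "0":
        return "0"
    return "+" if a == b else "-"

def abstract_add(a, b):
    """Abstract addition: '+' plus '-' could be anything, so we must
    answer '?' (top) -- the price of approximation."""
    if a == b or b == "0":
        return a
    if a == "0":
        return b
    return "?"
```

Note how `abstract_add("+", "-")` returns `"?"`: the analysis stays sound by admitting it knows nothing, which is exactly the partial observation the abstract describes.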
Dynamic programming via static incrementalization
 In Proceedings of the 8th European Symposium on Programming, 1999
Cited by 26 (12 self)
Dynamic programming is an important algorithm design technique. It is used for solving problems whose solutions involve recursively solving subproblems that share subsubproblems. While a straightforward recursive program solves common subsubproblems repeatedly and often takes exponential time, a dynamic programming algorithm solves every subsubproblem just once, saves the result, reuses it when the subsubproblem is encountered again, and takes polynomial time. This paper describes a systematic method for transforming programs written as straightforward recursions into programs that use dynamic programming. The method extends the original program to cache all possibly computed values, incrementalizes the extended program with respect to an input increment to use and maintain all cached results, prunes out cached results that are not used in the incremental computation, and uses the resulting incremental program to form an optimized new program. Incrementalization statically exploits semantics of both control structures and data structures and maintains as invariants equalities characterizing cached results. The principle underlying incrementalization is general for achieving drastic program speedups. Compared with previous methods that perform memoization or tabulation, the method based on incrementalization is more powerful and systematic. It has been implemented and applied to numerous problems and succeeded on all of them.
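The exponential-vs-polynomial contrast in the abstract can be seen on the standard Fibonacci example. This sketches only the memoization baseline the paper improves on (the paper's contribution is a static transformation, not a runtime cache); the functions are illustrative.

```python
# Naive recursion vs. caching every subsubproblem result, the
# memoization baseline that the paper's static method systematizes.
from functools import lru_cache

def fib_naive(n):
    """Recomputes shared subsubproblems; exponential time."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n):
    """Solves each subsubproblem once, saves and reuses the result;
    linear number of recursive calls."""
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)
```

`fib_naive(35)` already takes noticeable time, while `fib_cached(35)` is instantaneous, because each `n` is computed exactly once.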
Towards automatic construction of staged compilers
 In Proceedings of the 29th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL '02), 2002
Strengthening invariants for efficient computation
 In Conference Record of the 23rd Annual ACM Symposium on Principles of Programming Languages, 2001
Cited by 6 (4 self)
This paper presents program analyses and transformations for strengthening invariants for the purpose of efficient computation. Finding the stronger invariants corresponds to discovering a general class of auxiliary information for any incremental computation problem. Combining the techniques with previous techniques for caching intermediate results, we obtain a systematic approach that transforms non-incremental programs into efficient incremental programs that use and maintain useful auxiliary information as well as useful intermediate results. The use of auxiliary information allows us to achieve a greater degree of incrementality than otherwise possible. Applications of the approach include strength reduction in optimizing compilers and finite differencing in transformational programming.
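Strength reduction, which the abstract cites as an application, can be shown with the classic multiplication-in-a-loop example. This is a standard compiler illustration, not code from the paper; the function names are invented.

```python
# Classic strength reduction: replace a multiplication recomputed on
# every iteration with incremental maintenance of the invariant
# acc == i * c.

def multiples(c, n):
    """Before: recomputes i * c at every iteration."""
    out = []
    for i in range(n):
        out.append(i * c)
    return out

def multiples_reduced(c, n):
    """After: maintains acc == i * c incrementally, trading each
    multiplication for an addition."""
    out = []
    acc = 0
    for _ in range(n):
        out.append(acc)
        acc += c  # maintain the invariant instead of recomputing
    return out
```

The invariant `acc == i * c` is exactly the kind of equality the paper's analyses discover and maintain as auxiliary information.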
Solving Regular Tree Grammar Based Constraints
 In Proceedings of the 8th International Static Analysis Symposium, 2000
Cited by 5 (4 self)
This paper describes the precise specification, design, analysis, implementation, and measurements of an efficient algorithm for solving regular tree grammar based constraints. The particular constraints are for dead-code elimination on recursive data, but the method used for the algorithm design and complexity analysis is general and applies to other program analysis problems as well. The method is centered around Paige's finite differencing, i.e., computing expensive set expressions incrementally, and allows the algorithm to be derived and analyzed formally and implemented easily. We study higher-level transformations that make the derived algorithm concise and allow its complexity to be analyzed accurately. Although a rough analysis shows that the worst-case time complexity is cubic in program size, an accurate analysis shows that it is linear in the number of live program points and in other parameters, including mainly the arity of data constructors and the number of selector applications into whose arguments the value constructed at a program point might flow. These parameters explain the performance of the analysis in practice. Our implementation also runs two to ten times as fast as a previous implementation of an informally designed algorithm.
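Paige-style finite differencing, the core of the method above, amounts to maintaining an expensive set expression incrementally under updates rather than re-evaluating it. A toy sketch, with invented names (the paper's actual constraints are over regular tree grammars, not even numbers):

```python
# Toy finite differencing: instead of recomputing an expensive set
# expression after each update, maintain its value in O(1) per change.

def count_even(s):
    """The expensive expression, recomputed from scratch: O(|s|)."""
    return sum(1 for x in s if x % 2 == 0)

class IncrementalEvenCount:
    """Differenced version: count_even is kept as an invariant and
    updated in constant time per insertion/deletion."""
    def __init__(self):
        self.s = set()
        self.count = 0

    def add(self, x):
        if x not in self.s:
            self.s.add(x)
            if x % 2 == 0:
                self.count += 1

    def remove(self, x):
        if x in self.s:
            self.s.remove(x)
            if x % 2 == 0:
                self.count -= 1
```

The same discipline applied to the work-set operations of a constraint solver is what turns a rough cubic bound into the near-linear behavior the abstract reports.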
Removing Redundant Arguments of Functions
 In 9th International Conference on Algebraic Methodology And Software Technology (AMAST 2002), H. Kirchner and C. Ringeissen, Eds., Lecture Notes in Computer Science, 2002
Cited by 3 (2 self)
The application of automatic transformation processes during the formal development and optimization of programs can introduce encumbrances in the generated code that programmers usually (or presumably) do not write. An example is the introduction of redundant arguments in the functions defined in the program. Redundancy of a parameter means that replacing it by any expression does not change the result. In this work, we provide a method for the analysis and elimination of redundant arguments in term rewriting systems as a model for the programs that can be written in more sophisticated languages.
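The notion of redundancy in the abstract (replacing the parameter by any expression leaves the result unchanged) can be illustrated with a small invented example; this is not from the paper, which works on term rewriting systems rather than Python.

```python
# A redundant argument in the sense of the abstract: 'junk' never
# influences the result, so any value may be passed, and the
# parameter can be eliminated.

def length_acc(xs, junk):
    """'junk' is dragged through the recursion but never observed."""
    if not xs:
        return 0
    return 1 + length_acc(xs[1:], junk * 2)

def length(xs):
    """The same function after eliminating the redundant argument."""
    if not xs:
        return 0
    return 1 + length(xs[1:])
```

Such dead parameters commonly arise when an automatic transformation specializes or fuses functions, which is the scenario motivating the paper.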
Program Specialization Based on Dynamic Slicing
 In Proceedings of the Workshop on Software Analysis and Development for Pervasive Systems (SONDA '04), 2004
Cited by 1 (1 self)
Within the imperative programming paradigm, program slicing has been widely used as a basis to solve many software engineering problems, like program understanding, debugging, testing, differencing, specialization, and merging. In this work, we present a lightweight approach to program slicing in lazy functional logic languages and discuss its potential applications in the context of pervasive systems where resources are limited. In particular, we show how program slicing can be used to achieve a form of program specialization that cannot be achieved with other, related techniques like partial evaluation.
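The general idea of backward slicing (keep only the statements that influence a chosen value) can be sketched over a straight-line program. The representation below (one `(defined_var, used_vars)` pair per statement) is invented for illustration; the paper's technique targets lazy functional logic programs, not this imperative toy.

```python
# Minimal backward slicing over a straight-line program: walk the
# statements in reverse, keeping those that define a variable the
# slicing criterion (transitively) depends on.

def backward_slice(stmts, criterion):
    """stmts: list of (defined_var, used_vars) in execution order.
    Returns the indices of statements influencing `criterion`."""
    needed = {criterion}
    kept = []
    for i in range(len(stmts) - 1, -1, -1):
        var, uses = stmts[i]
        if var in needed:
            kept.append(i)
            needed.discard(var)
            needed.update(uses)  # now we need whatever this statement read
    return sorted(kept)
```

For example, in `a = ...; b = f(a); c = ...; d = g(b)`, the slice on `d` keeps the definitions of `a`, `b`, and `d` and drops the unrelated `c`, which is the specialization effect the abstract describes.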
Dynamic Slicing of Lazy Functional Programs Based on Redex Trails
Cited by 1 (0 self)
Tracing computations is a widely used methodology for program debugging. Lazy languages, however, pose new demands on tracing techniques because following the actual trace of a computation is generally useless. Typically, tracers for lazy languages rely on the construction of a redex trail, a graph that stores the reductions performed in a computation. While tracing provides a significant help for locating bugs, the task still remains complex. A well-known debugging technique for imperative programs is based on dynamic slicing, a method for finding the program statements that influence the computation of a value for a specific program input. In this work, we introduce a novel technique for dynamic slicing in first-order lazy functional languages. Rather than starting from scratch, our technique relies on (a slight extension of) redex trails. We provide a notion of dynamic slice and introduce a method to compute it from the redex trail of a computation. We also sketch the extension of our technique to deal with a functional logic language. A clear advantage of our proposal is that one can enhance existing tracers with slicing capabilities with a modest implementation effort, since the same data structure (the redex trail) can be used for both tracing and slicing.
Global Optimization of Compositional Systems
 In Seventh International Conference on Formal Methods in Computer-Aided Design
Embedded systems typically consist of a composition of a set of hardware and software IP modules. Each module is heavily optimized by itself. However, when these modules are composed together, significant additional opportunities for optimizations are introduced because only a subset of the entire functionality is actually used. We propose COSE, a technique to jointly optimize such designs. We use symbolic execution to compute invariants in each component of the design. We propagate these invariants as constraints to other modules using global flow analysis of the composition of the design. This captures optimizations that go beyond, and are qualitatively different from, those achievable by compiler optimization techniques such as common subexpression elimination, which are localized. We again employ static analysis techniques to perform optimizations subject to these constraints. We implemented COSE in the Metropolis platform and achieved significant optimizations using reasonable computational resources.
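The cross-module effect the abstract describes can be sketched in miniature: an invariant established in one module, propagated as a constraint into another, makes a branch dead and removable. All names below are invented for illustration; this is not the COSE implementation, which operates on Metropolis designs.

```python
# Toy cross-module optimization: module A's invariant (mode == 0) is
# propagated into module B, where it makes the slow path unreachable.

def producer():
    """Module A: analysis of this module yields the invariant mode == 0."""
    return 0

def consumer(mode):
    """Module B before optimization: handles every mode, even though
    composed with producer() only mode == 0 ever arrives."""
    if mode == 0:
        return "fast path"
    return "slow path"  # dead under the propagated invariant

def consumer_optimized():
    """Module B specialized under the invariant mode == 0: the dead
    branch and the now-constant test are gone."""
    return "fast path"
```

A purely local optimizer looking at `consumer` alone cannot remove the slow path; only the composition-wide invariant justifies it, which is the point of the abstract.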