Results 1-10 of 15
Program Analysis and Specialization for the C Programming Language
, 1994
"... Software engineers are faced with a dilemma. They want to write general and wellstructured programs that are flexible and easy to maintain. On the other hand, generality has a price: efficiency. A specialized program solving a particular problem is often significantly faster than a general program. ..."
Abstract

Cited by 527 (0 self)
 Add to MetaCart
Software engineers are faced with a dilemma. They want to write general and well-structured programs that are flexible and easy to maintain. On the other hand, generality has a price: efficiency. A specialized program solving a particular problem is often significantly faster than a general program. However, the development of specialized software is time-consuming, and is likely to exceed the production capacity of today's programmers. New techniques are required to solve this so-called software crisis. Partial evaluation is a program specialization technique that reconciles the benefits of generality with efficiency. This thesis presents an automatic partial evaluator for the ANSI C programming language. The content of this thesis is analysis and transformation of C programs. We develop several analyses that support the transformation of a program into its generating extension. A generating extension is a program that produces specialized programs when executed on parts of the input. The thesis contains the following main results.
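The specialization idea behind generating extensions can be sketched far from the thesis's ANSI C setting; the following Python toy (all names illustrative, not from the thesis) specializes a general power function for a statically known exponent:

```python
# A minimal sketch of program specialization (illustrative only; the thesis
# targets ANSI C, not Python). A general function is specialized for a
# statically known input, trading generality for speed.

def power(x, n):
    """General program: works for any exponent n."""
    result = 1
    for _ in range(n):
        result *= x
    return result

def specialize_power(n):
    """A toy 'generating extension': given the static input n, emit a
    residual program specialized to that exponent (loop fully unrolled)."""
    body = " * ".join(["x"] * n) if n > 0 else "1"
    src = f"def power_{n}(x):\n    return {body}\n"
    namespace = {}
    exec(src, namespace)           # compile the residual program
    return namespace[f"power_{n}"]

power_3 = specialize_power(3)      # residual program: return x * x * x
print(power(2, 3), power_3(2))     # 8 8
```

The residual program does no looping at run time; a real partial evaluator performs this kind of unfolding and constant propagation over whole programs.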
The TXL Source Transformation Language
, 2005
"... TXL is a specialpurpose programming language designed for creating, manipulating and rapidly prototyping language descriptions, tools and applications. TXL is designed to allow explicit programmer control over the interpretation, application, order and backtracking of both parsing and rewriting rul ..."
Abstract

Cited by 70 (31 self)
 Add to MetaCart
TXL is a special-purpose programming language designed for creating, manipulating and rapidly prototyping language descriptions, tools and applications. TXL is designed to allow explicit programmer control over the interpretation, application, order and backtracking of both parsing and rewriting rules. Using first-order functional programming at the higher level and term rewriting at the lower level, TXL provides for flexible programming of traversals, guards, scope of application and parameterized context. This flexibility has allowed TXL users to express and experiment with both new ideas in parsing, such as robust, island and agile parsing, and new paradigms in rewriting, such as XML markup, rewriting strategies and contextualized rules, without any change to TXL itself. This paper outlines the history, evolution and concepts of TXL with emphasis on its distinctive style and philosophy, and gives examples of its use in expressing and applying recent new paradigms in language processing.
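TXL rules are written in TXL itself; purely to illustrate the exhaustive rewrite-rule idea in a common language, a toy rewriter over nested tuples (not TXL syntax, all names invented here) might look like:

```python
# Toy term rewriting in the spirit of (but not in) TXL: a rule is a partial
# function on terms, applied bottom-up and repeatedly over a nested-tuple
# tree until no rule applies anywhere (a fixed point).

def simplify_add_zero(term):
    """Rewrite rule: ('+', t, 0) -> t  and  ('+', 0, t) -> t."""
    if isinstance(term, tuple) and term[0] == '+':
        if term[2] == 0:
            return term[1]
        if term[1] == 0:
            return term[2]
    return None  # rule does not apply at this node

def rewrite(term, rule):
    """Bottom-up traversal: rewrite children first, then try the rule here."""
    if isinstance(term, tuple):
        term = (term[0],) + tuple(rewrite(t, rule) for t in term[1:])
    new = rule(term)
    return rewrite(new, rule) if new is not None else term

expr = ('+', ('+', 'x', 0), ('+', 0, ('+', 'y', 0)))
print(rewrite(expr, simplify_add_zero))   # ('+', 'x', 'y')
```

TXL's contribution is making the traversal strategy, guards and scope of application programmable rather than fixed, as the abstract describes.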
A Geometric Constraint Solver
, 1995
"... We report on the development of a twodimensional geometric constraint solver. The solver is a major component of a new generation of CAD systems that we are developing based on a highlevel geometry representation. The solver uses a graphreduction directed algebraic approach, and achieves interact ..."
Abstract

Cited by 60 (9 self)
 Add to MetaCart
We report on the development of a two-dimensional geometric constraint solver. The solver is a major component of a new generation of CAD systems that we are developing based on a high-level geometry representation. The solver uses a graph-reduction directed algebraic approach, and achieves interactive speed. We describe the architecture of the solver and its basic capabilities. Then, we discuss in detail how to extend the scope of the solver, with special emphasis placed on the theoretical and human factors involved in finding a solution, in an exponentially large search space, so that the solution is appropriate to the application and the way of finding it is intuitive to an untrained user.

1 Introduction
Solving a system of geometric constraints is a problem that has been considered by several communities, and using different approaches. For example, the symbolic computation community has considered the general problem, in the ...
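A basic construction step in graph-directed algebraic solvers of this kind is placing a point at given distances from two already-placed points, i.e. intersecting two circles. A minimal sketch (not the paper's solver; the function name and interface are invented for illustration):

```python
import math

# Sketch of one algebraic construction step used when merging solved
# clusters: place point c with |c - a| = d1 and |c - b| = d2, given that
# a and b are already placed. Two symmetric solutions exist; we return one.

def place_point(a, b, d1, d2):
    """Return one intersection of circle(a, d1) and circle(b, d2)."""
    ax, ay = a
    bx, by = b
    d = math.hypot(bx - ax, by - ay)
    # signed distance from a, along the line ab, to the chord of intersection
    t = (d1 * d1 - d2 * d2 + d * d) / (2 * d)
    h = math.sqrt(d1 * d1 - t * t)           # half-length of the chord
    ux, uy = (bx - ax) / d, (by - ay) / d    # unit vector from a to b
    return (ax + t * ux - h * uy, ay + t * uy + h * ux)

c = place_point((0.0, 0.0), (4.0, 0.0), 3.0, 3.0)
print(c)   # one of the two symmetric solutions: (2.0, sqrt(5))
```

The ambiguity between the two symmetric solutions is exactly the kind of choice the abstract's "appropriate to the application and intuitive to an untrained user" discussion addresses.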
Viewing A Program Transformation System At Work
 Joint 6th International Conference on Programming Language Implementation and Logic Programming (PLILP) and 4th International Conference on Algebraic and Logic Programming (ALP), volume 844 of Lecture Notes in Computer Science
, 1994
"... How to decrease labor and improve reliability in the development of efficient implementations of nonnumerical algorithms and labor intensive software is an increasingly important problem as the demand for computer technology shifts from easier applications to more complex algorithmic ones; e.g., opt ..."
Abstract

Cited by 27 (2 self)
 Add to MetaCart
How to decrease labor and improve reliability in the development of efficient implementations of non-numerical algorithms and labor-intensive software is an increasingly important problem as the demand for computer technology shifts from easier applications to more complex algorithmic ones; e.g., optimizing compilers for supercomputers, intricate data structures to implement efficient solutions to operations research problems, search and analysis algorithms in genetic engineering, complex software tools for workstations, design automation, etc. It is also a difficult problem that is not solved by current CASE tools and software management disciplines, which are oriented towards data processing and other applications, where the implementation and a prediction of its resource utilization follow more directly from the specification. Recently, Cai and Paige reported experiments suggesting a way to implement non-numerical algorithms in C at a programming rate (i.e., source lines per second) t...
Universal regular path queries
 Higher-Order and Symbolic Computation
, 2003
"... Given are a directed edgelabelled graph G with a distinguished node n0, and a regular expression P which may contain variables. We wish to compute all substitutions φ (of symbols for variables), together with all nodes n such that all paths n0 → n are in φ(P). We derive an algorithm for this proble ..."
Abstract

Cited by 12 (1 self)
 Add to MetaCart
Given are a directed edge-labelled graph G with a distinguished node n0, and a regular expression P which may contain variables. We wish to compute all substitutions φ (of symbols for variables), together with all nodes n such that all paths n0 → n are in φ(P). We derive an algorithm for this problem using relational algebra, and show how it may be implemented in Prolog. The motivation for the problem derives from a declarative framework for specifying compiler optimisations.

1 Bob Paige and IFIP WG 2.1
Bob Paige was a longstanding member of IFIP Working Group 2.1 on Algorithmic Languages and Calculi. In recent years, the main aim of this group has been to investigate the derivation of algorithms from specifications by program transformation. Already in the mid-eighties, Bob was way ahead of the pack: instead of applying transformational techniques to well-worn examples, he was applying his theories of program transformation to new problems, and discovering new algorithms [16, 48, 52]. The secret of his success lay partly in his insistence on the study of general algorithm design strategies (in particular
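The paper solves the harder *universal* variant, with variables, via relational algebra; as a much simpler warm-up, the *existential* regular path query (is there some path n0 → n whose labels match the pattern?) can be answered by searching the product of the graph and a DFA for the pattern. A hedged Python sketch (all names and the DFA encoding are illustrative):

```python
from collections import deque

# Existential regular path query: BFS over the product of a labelled graph
# and a DFA. graph maps node -> list of (label, successor); the DFA is
# given by its start state, a transition map (state, label) -> state, and
# a set of accepting states.

def reachable_matching(graph, n0, dfa_start, dfa_trans, dfa_accept):
    """Return all nodes n reachable from n0 by some path matching the DFA."""
    seen = {(n0, dfa_start)}
    queue = deque(seen)
    hits = set()
    while queue:
        node, state = queue.popleft()
        if state in dfa_accept:
            hits.add(node)
        for label, succ in graph.get(node, []):
            nxt = dfa_trans.get((state, label))
            if nxt is not None and (succ, nxt) not in seen:
                seen.add((succ, nxt))
                queue.append((succ, nxt))
    return hits

# pattern a b* as a DFA: 0 -a-> 1, 1 -b-> 1, accepting {1}
graph = {'s': [('a', 'u')], 'u': [('b', 'u'), ('b', 'v')]}
print(sorted(reachable_matching(graph, 's', 0, {(0, 'a'): 1, (1, 'b'): 1}, {1})))
# ['u', 'v']
```

The universal query ("all paths match") cannot be answered by this construction alone, which is what motivates the paper's relational-algebra derivation.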
Future Directions In Program Transformations
, 1997
"... This paper briefly surveys what transformational programming is about, and how to make progress in the field. A program transformation is a meaningpreserving mapping defined on a programming language. Transformational programming is a program development methodology in which an implementation I is ..."
Abstract

Cited by 11 (0 self)
 Add to MetaCart
This paper briefly surveys what transformational programming is about, and how to make progress in the field. A program transformation is a meaning-preserving mapping defined on a programming language. Transformational programming is a program development methodology in which an implementation I is obtained from a specification S by applying a sequence T_k ... T_1 of transformations to S. If S and each transformation T_i applied to successive implementations T_{i-1} ... T_1 S of S are proved correct for
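The pipeline T_k ... T_1 can be made concrete with a toy derivation (in Python, purely illustrative; the functions stand in for proved-correct transformation steps):

```python
# Sketch of a transformational derivation: each step is a meaning-preserving
# rewrite of the program, and correctness of the final implementation
# follows from correctness of every individual step.

# Specification S: sum of squares of 0..n-1, written naively.
def spec(n):
    return sum([i * i for i in range(n)])

# T1 S: eliminate the intermediate list (generator instead of list).
def t1(n):
    return sum(i * i for i in range(n))

# T2 T1 S: replace the loop by the closed form n(n-1)(2n-1)/6.
def t2(n):
    return n * (n - 1) * (2 * n - 1) // 6

# Spot-check that each step preserved meaning (a proof, in the methodology).
for n in range(10):
    assert spec(n) == t1(n) == t2(n)
print(t2(100))   # 328350
```

In the methodology surveyed by the paper, the asserts would be replaced by correctness proofs of each transformation, so the final O(1) program inherits correctness from the O(n) specification.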
High Level Reading and Data Structure Compilation
 In Conference Record of the 24th Annual ACM Symposium on Principles of Programming Languages
, 1997
"... In [Paige89], it was shown how to simulate a set machine in realtime on a RAM that provides cursor or even only pointer access to data. The underlying assumption was that the establishment of efficient data structures would be provided by some `client' program. In the current paper, we fill in the ..."
Abstract

Cited by 9 (2 self)
 Add to MetaCart
In [Paige89], it was shown how to simulate a set machine in real time on a RAM that provides cursor or even only pointer access to data. The underlying assumption was that the establishment of efficient data structures would be provided by some 'client' program. In the current paper, we fill in the gap by presenting a linear-time high-level reading method that translates external input in string form into data structures that facilitate such real-time simulation. The algorithm builds the data structures for a much richer type system than what appeared in [Paige94], and is powerful enough to support our reading algorithm itself in an efficient way. Consequently, it equips a high-level set-theoretic language with I/O, without the loss of computational transparency. This work alleviates the burden of creating low-level data structures manually, and builds the internal pointer-based inputs required by most pointer algorithms [Ben-Amram95] from the external string representation.
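The flavour of "high-level reading" can be suggested with a small sketch (not the paper's algorithm, and with a far simpler type system): a single left-to-right pass turns the external string form of a nested set into an internal structure, so later code works on real data structures rather than text.

```python
# Illustrative one-pass reader: parse strings like '{1 {2 3} 4}' into
# nested frozensets of ints, using a stack of partially built sets.
# This sketches the idea only; the paper handles a much richer type
# system and builds pointer-based structures for real-time simulation.

def read_set(s):
    stack = [[]]          # stack of partially built sets; bottom collects the result
    token = ""
    for ch in s + " ":    # trailing space flushes a final numeric token
        if ch.isdigit():
            token += ch
            continue
        if token:
            stack[-1].append(int(token))
            token = ""
        if ch == "{":
            stack.append([])
        elif ch == "}":
            inner = frozenset(stack.pop())
            stack[-1].append(inner)
    return stack[0][0]

print(read_set("{1 {2 3} 4}") == frozenset({1, 4, frozenset({2, 3})}))   # True
```

Each character is handled once, so the pass is linear in the input length, matching the linear-time requirement stated in the abstract.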
A New Solution to the Hidden Copy Problem
 Proc. 5th International Static Analysis Symposium, number 1503 in LNCS
, 1998
"... . We consider the wellknown problem of avoiding unnecessary costly copying that arises in languages with copy/value semantics and large aggregate structures such as arrays, sets, or files. The origins of many recent studies focusing on avoiding copies of flat arrays in functional languages may ..."
Abstract

Cited by 8 (3 self)
 Add to MetaCart
We consider the well-known problem of avoiding unnecessary costly copying that arises in languages with copy/value semantics and large aggregate structures such as arrays, sets, or files. The origins of many recent studies focusing on avoiding copies of flat arrays in functional languages may be traced back to SETL copy optimization [Schwartz 75]. The problem is hard, and progress is slow, but a successful solution is crucial to achieving a pointer-free style of programming envisioned by [Hoare 75]. We give a new solution to copy optimization that uses dynamic reference counts and lazy copying to implement updates efficiently in an imperative language with arbitrarily nested finite sets and maps (which can easily model arrays, records and other aggregate datatypes). Big-step operational semantics and abstract interpretations are used to prove the soundness of the analysis and the correctness of the transformation. An efficient algorithm to implement the analysis is pres...
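The run-time half of this scheme, dynamic reference counts with lazy copying (copy-on-write), can be sketched in Python; the static analysis that the paper pairs it with is omitted, and the class below is an invented illustration, not the paper's implementation:

```python
# Copy-on-write set with a dynamic reference count: assignment shares the
# representation in O(1); the hidden copy is deferred until an update hits
# a shared value, preserving copy/value semantics.

class CowSet:
    def __init__(self, items=()):
        self._data = set(items)
        self._refs = [1]               # shared reference-count cell

    def assign(self):
        """Copy-assignment: share the representation, bump the count."""
        alias = CowSet.__new__(CowSet)
        alias._data, alias._refs = self._data, self._refs
        self._refs[0] += 1
        return alias

    def add(self, x):
        if self._refs[0] > 1:          # shared: do the deferred copy now
            self._refs[0] -= 1         # detach from the shared cell
            self._data = set(self._data)
            self._refs = [1]
        self._data.add(x)

a = CowSet({1, 2})
b = a.assign()        # O(1): no copy yet
b.add(3)              # copy happens here; a is unaffected
print(sorted(a._data), sorted(b._data))   # [1, 2] [1, 2, 3]
```

The point of the paper's static analysis is to prove, where possible, that a value is unshared at an update site so that even the refcount check can be removed.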
Customization of Java library classes using type constraints and profile information
 In Proceedings of the European Conference on Object-Oriented Programming (ECOOP)
"... Abstract. The use of class libraries increases programmer productivity by allowing programmers to focus on the functionality unique to their application. However, library classes are generally designed with some typical usage pattern in mind, and performance may be suboptimal if the actual usage dif ..."
Abstract

Cited by 8 (3 self)
 Add to MetaCart
The use of class libraries increases programmer productivity by allowing programmers to focus on the functionality unique to their application. However, library classes are generally designed with some typical usage pattern in mind, and performance may be suboptimal if the actual usage differs. We present an approach for rewriting applications to use customized versions of library classes that are generated using a combination of static analysis and profile information. Type constraints are used to determine where customized classes may be used, and profile information is used to determine where customization is likely to be profitable. We applied this approach to a number of Java applications by customizing various standard container classes and the omnipresent StringBuffer class, and measured speedups up to 78% and memory footprint reductions up to 46%. The increase in application size due to the added custom classes is limited to 12% for all but the smallest programs.
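As a loose analogy (in Python rather than Java, with invented class names, and without the type-constraint analysis the paper relies on), profile-guided customization means measuring how a general class is actually used and substituting a leaner specialized variant where the profile says it pays off:

```python
# Illustrative profile-guided substitution: a general growable buffer
# (stand-in for a class like StringBuffer) versus a customized variant
# for the profiled common case of a single append.

class Buffer:
    """General case: arbitrary number of appends, joined at the end."""
    def __init__(self):
        self.parts = []
    def append(self, s):
        self.parts.append(s)
    def build(self):
        return "".join(self.parts)

class OneShotBuffer:
    """Customized variant: smaller footprint when appends are rare."""
    __slots__ = ("part",)
    def __init__(self):
        self.part = ""
    def append(self, s):
        self.part += s
    def build(self):
        return self.part

def make_buffer(profile_avg_appends):
    # "customization" decision: pick the specialized class when the
    # profile indicates the common case; a real system would also need
    # a type-safety argument that the substitution is legal everywhere.
    return OneShotBuffer() if profile_avg_appends <= 1 else Buffer()

buf = make_buffer(profile_avg_appends=1)
buf.append("hello")
print(buf.build())   # hello
```

In the paper's setting the substitution is decided statically per allocation site, with type constraints guaranteeing that the customized class is a safe replacement.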