Results 1–10 of 136
Symbolic Analysis for Parallelizing Compilers
, 1994
Abstract

Cited by 111 (4 self)
Symbolic Domain. The objects in our abstract symbolic domain are canonical symbolic expressions. A canonical symbolic expression is a lexicographically ordered sequence of symbolic terms. Each symbolic term is in turn a pair of an integer coefficient and a sequence of pairs of pointers to program variables in the program symbol table and their exponents; the latter sequence is also lexicographically ordered. For example, in an environment where i is bound to (1, ((↑i, 1))), j is bound to (1, ((↑j, 1))), and k is bound to (1, ((↑k, 1))) (↑x denoting the symbol-table pointer for x), the abstract value of the symbolic expression 2ij + 3jk is ((2, ((↑i, 1), (↑j, 1))), (3, ((↑j, 1), (↑k, 1)))). In our framework, an environment is the abstract analogue of the state concept: an environment is a function from program variables to abstract symbolic values. Each environment e associates a canonical symbolic value e(x) with each variable x ∈ V; we say that x is bound to e(x). An environment might be represented by...
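The canonical-expression representation described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: plain variable names stand in for symbol-table pointers, and terms are kept canonical by combining like terms and sorting.

```python
from collections import defaultdict

def canon(terms):
    """Combine like terms and sort, yielding a canonical symbolic
    expression: a sorted tuple of (coefficient, ((var, exp), ...))
    pairs, with each exponent sequence itself sorted."""
    acc = defaultdict(int)
    for coeff, powers in terms:
        acc[tuple(sorted(powers))] += coeff
    return tuple(sorted((c, k) for k, c in acc.items() if c != 0))

def mul(e1, e2):
    """Product of two canonical expressions: multiply coefficients,
    merge exponent sequences, then re-canonicalise."""
    out = []
    for c1, p1 in e1:
        for c2, p2 in e2:
            pw = defaultdict(int)
            for v, n in p1 + p2:
                pw[v] += n
            out.append((c1 * c2, tuple(sorted(pw.items()))))
    return canon(out)

def add(e1, e2):
    return canon(list(e1) + list(e2))

# Environment binding each variable to its abstract value:
i = canon([(1, (("i", 1),))])
j = canon([(1, (("j", 1),))])
k = canon([(1, (("k", 1),))])
two = canon([(2, ())])
three = canon([(3, ())])

# 2ij + 3jk  ->  ((2, ((i,1),(j,1))), (3, ((j,1),(k,1))))
expr = add(mul(two, mul(i, j)), mul(three, mul(j, k)))
print(expr)
```

Note that this sketch orders terms by coefficient first, which is a simplification of the paper's purely lexicographic term order.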
Parallel Execution of Prolog Programs: A Survey
Abstract

Cited by 79 (25 self)
Since the early days of logic programming, researchers in the field realized the potential for exploitation of parallelism present in the execution of logic programs. Their high-level nature, the presence of nondeterminism, and their referential transparency, among other characteristics, make logic programs interesting candidates for obtaining speedups through parallel execution. At the same time, the fact that the typical applications of logic programming frequently involve irregular computations, make heavy use of dynamic data structures with logical variables, and involve search and speculation, makes the techniques used in the corresponding parallelizing compilers and runtime systems potentially interesting even outside the field. The objective of this paper is to provide a comprehensive survey of the issues arising in parallel execution of logic programming languages along with the most relevant approaches explored to date in the field. Focus is mostly given to the challenges emerging from the parallel execution of Prolog programs. The paper describes the major techniques used for shared memory implementation of Or-parallelism, And-parallelism, and combinations of the two. We also explore some related issues, such as memory...
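The core of Or-parallelism mentioned above is exploring a predicate's alternative clauses concurrently. A toy sketch, not from the survey: each Python function stands in for one clause of a nondeterministic predicate, and a thread pool plays the role of the Or-parallel workers (a real engine would also manage binding environments and pruning).

```python
from concurrent.futures import ThreadPoolExecutor

# Three alternative "clauses" for one toy goal; names illustrative.
def clause1(x): return ("double", x * 2) if x % 2 == 0 else None
def clause2(x): return ("triple", x * 3) if x % 3 == 0 else None
def clause3(x): return ("identity", x)

def solve_or_parallel(goal):
    """Run every clause on its own worker and collect all solutions;
    None models a clause whose head unification fails."""
    clauses = [clause1, clause2, clause3]
    with ThreadPoolExecutor(max_workers=len(clauses)) as pool:
        results = pool.map(lambda c: c(goal), clauses)
    return [r for r in results if r is not None]

print(solve_or_parallel(6))  # all three alternatives succeed for 6
```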
Cost analysis of Java bytecode
 16th European Symposium on Programming, ESOP’07, Lecture Notes in Computer Science
, 2007
Abstract

Cited by 78 (35 self)
Abstract. Cost analysis of Java bytecode is complicated by its unstructured control flow, the use of an operand stack, and its object-oriented programming features (like dynamic dispatching). This paper addresses these problems and develops a generic framework for the automatic cost analysis of sequential Java bytecode. Our method generates cost relations which define at compile time the cost of programs as a function of their input data size. To the best of our knowledge, this is the first approach to the automatic cost analysis of Java bytecode.
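The "cost relation" idea above can be made concrete with a toy example (illustrative, not from the paper): for a loop that counts down from n with unit cost per iteration, the analysis would emit a recurrence whose closed form bounds the cost in terms of input size.

```python
# Cost relation a cost analysis might emit for  while (i < n) i++;
# with unit cost per iteration:
#
#   C(n) = 0           if n <= 0
#   C(n) = 1 + C(n-1)  if n > 0

def C(n):
    """Evaluate the cost relation by direct recursion."""
    return 0 if n <= 0 else 1 + C(n - 1)

# Its closed form is C(n) = max(n, 0):
assert all(C(n) == max(n, 0) for n in range(-2, 50))
print(C(10))  # 10
```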
A Methodology for Granularity Based Control of Parallelism in Logic Programs
 Journal of Symbolic Computation, Special Issue on Parallel Symbolic Computation
, 1996
Lower Bound Cost Estimation for Logic Programs
 In 1997 International Logic Programming Symposium
, 1997
Abstract

Cited by 61 (35 self)
It is generally recognized that information about the runtime cost of computations can be useful for a variety of applications, including program transformation, granularity control during parallel execution, and query optimization in deductive databases. Most of the work to date on compile-time cost estimation of logic programs has focused on the estimation of upper bounds on costs. However, in many applications, such as parallel implementations on distributed-memory machines, one would prefer to work with lower bounds instead. The problem with estimating lower bounds is that, in general, it is necessary to account for the possibility of failure of head unification, leading to a trivial lower bound of 0. In this paper, we show how, given type and mode information about procedures in a logic program, it is possible to (semi-automatically) derive non-trivial lower bounds on their computational costs. We also discuss the cost analysis for the special and frequent case of divide...
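The role of type and mode information in the abstract above can be illustrated with a toy lower-bound function (illustrative names, not the paper's analysis): without knowing the argument's type, head unification may fail immediately, so only 0 is safe; knowing the argument is a proper list of length n lets us count guaranteed resolution steps.

```python
# Toy lower bound on resolution steps for a naive reverse/2 call.
def lb_naive_reverse(arg):
    """Lower bound on resolution steps (illustrative)."""
    if not isinstance(arg, list):
        return 0             # head unification may fail at once
    return len(arg) + 1      # one step per element plus the base case

assert lb_naive_reverse("not a list") == 0
assert lb_naive_reverse([1, 2, 3]) == 4
print(lb_naive_reverse(list(range(10))))  # 11
```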
Controlling generalisation and polyvariance in partial deduction of normal logic programs
, 1996
Abstract

Cited by 60 (40 self)
In this paper, we further elaborate global control for partial deduction: for which atoms, among possibly infinitely many, should partial deductions be produced, meanwhile guaranteeing correctness as well as termination, and providing ample opportunities for fine-grained polyvariance? Our solution is based on two ingredients. First, we use the well-known concept of a characteristic tree to guide abstraction (or generalisation) and polyvariance, and aim for producing one specialised procedure per characteristic tree generated. Previous work along this line failed to provide abstraction correctly dealing with characteristic trees. We show how this can be rectified in an elegant way. Secondly, we structure combinations of atoms and associated characteristic trees in global trees registering "causal" relationships among such pairs. This will allow us to spot looming non-termination and consequently perform proper generalisation in order to avert the danger, without having to impose a depth bound on characteristic trees. Leaving unspecified the specific local control one may wish to plug in, the resulting global control strategy enables partial deduction that always terminates in an elegant, non ad hoc way, while providing excellent specialisation as well as fine-grained (but reasonable) polyvariance.
Inferring Argument Size Relationships with CLP(R)
, 1996
Abstract

Cited by 56 (11 self)
Argument size relationships are useful in termination analysis which, in turn, is important in program synthesis and goal-replacement transformations. We show how a precise analysis for inter-argument size relationships, formulated in terms of abstract interpretation, can be implemented straightforwardly in a language with constraint support like CLP(R) or SICStus version 3. The analysis is based on polyhedral approximations and uses a simple relaxation technique to calculate least upper bounds and a delay method to improve the precision of widening. To the best of our knowledge, and despite its simplicity, the analysis derives relationships to an accuracy that is comparable to or better than any existing technique. Termination analysis is important in program synthesis and goal-replacement transformations, and is also likely to be useful in offline partial deduction. Termination analysis is usually necessary in synthesis since synthesis often only guarantees semanti...
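An inter-argument size relationship of the kind inferred above can be illustrated for append/3 with list length as the size measure. This is only a concrete check of the relation size(Z) = size(X) + size(Y), not the polyhedral analysis itself, which works on linear constraints in CLP(R).

```python
# The analysis result for append(X, Y, Z) under the list-length
# size measure is the linear relation  size(Z) = size(X) + size(Y).
def append(xs, ys):
    return xs + ys

def size_relation_holds(xs, ys):
    """Check the inferred relation on one concrete call."""
    return len(append(xs, ys)) == len(xs) + len(ys)

assert all(size_relation_holds(list(range(m)), list(range(n)))
           for m in range(5) for n in range(5))
print("size(Z) = size(X) + size(Y) holds on all samples")
```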
FiniteTree Analysis for Constraint LogicBased Languages: The Complete Unabridged Version
, 2001
Abstract

Cited by 44 (16 self)
Logic languages based on the theory of rational, possibly infinite, trees have much appeal in that rational trees allow for faster unification (due to the safe omission of the occurs-check) and increased expressivity (cyclic terms can provide very efficient representations of grammars and other useful objects). Unfortunately, the use of infinite rational trees has problems. For instance, many of the built-in and library predicates are ill-defined for such trees and need to be supplemented by runtime checks whose cost may be significant. Moreover, some widely used program analysis and manipulation techniques are correct only for those parts of programs working over finite trees. It is thus important to obtain, automatically, a knowledge of the program variables (the finite variables) that, at the program points of interest, will always be bound to finite terms. For these reasons, we propose here a new dataflow analysis, based on abstract interpretation, that captures such information. We present a parametric domain where a simple component for recording finite variables is coupled, in the style of the open product construction of Cortesi et al., with a generic domain (the parameter of the construction) providing sharing information. The sharing domain is abstractly specified so as to guarantee the correctness of the combined domain and the generality of the approach. This finite-tree analysis domain is further enhanced by coupling it with a domain of Boolean functions, called finite-tree dependencies, that precisely captures how the finiteness of some variables influences the finiteness of other variables. We also summarize our experimental results showing how finite-tree analysis, enhanced with finite-tree dependencies, is a practical means of obtaining precise finitenes...
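The finite-tree dependencies mentioned above can be sketched as Boolean implications: "if these variables are bound to finite terms, so is this one". A minimal sketch, assuming implications come from unifications such as X = f(Y, Z) (variable names and the closure procedure are illustrative, not the paper's Boolean-function domain):

```python
def close_finite(finite, implications):
    """Close a set of known-finite variables under dependency
    implications, given as (premises, conclusion) pairs."""
    finite = set(finite)
    changed = True
    while changed:
        changed = False
        for premises, concl in implications:
            if concl not in finite and premises <= finite:
                finite.add(concl)
                changed = True
    return finite

# From X = f(Y, Z): X is finite iff both Y and Z are finite.
deps = [({"Y", "Z"}, "X"), ({"X"}, "Y"), ({"X"}, "Z")]
assert close_finite({"Y", "Z"}, deps) == {"X", "Y", "Z"}
assert close_finite({"Y"}, deps) == {"Y"}   # Z unknown: X stays unknown
print("ok")
```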
Automatic Inference of Upper Bounds for Recurrence Relations in Cost Analysis
 In SAS, LNCS
Abstract

Cited by 43 (12 self)
Abstract. The classical approach to automatic cost analysis consists of two phases. Given a program and some measure of cost, we first produce recurrence relations (RRs) which capture the cost of our program in terms of the size of its input data. Second, we convert such RRs into closed form (i.e., without recurrences). Whereas the first phase has received considerable attention, with a number of cost analyses available for a variety of programming languages, the second phase has received comparatively little attention. In this paper we first study the features of RRs generated by automatic cost analysis and discuss why existing computer algebra systems are appropriate neither for automatically obtaining closed-form solutions nor for obtaining upper bounds on them. Then we present, to our knowledge, the first practical framework for the fully automatic generation of reasonably accurate upper bounds of RRs originating from cost analysis of a wide range of programs. It is based on the inference of ranking functions and loop invariants and on partial evaluation.
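The two-phase picture above can be illustrated on a toy recurrence (illustrative, not one of the paper's benchmarks): the analysis phase yields C(n) = C(n/2) + 1, and the bounding phase must produce a closed-form upper bound, here floor(log2(n)) + 1.

```python
import math

def C(n):
    """The recurrence a cost analysis might emit for binary search."""
    return 1 if n <= 1 else C(n // 2) + 1

def upper_bound(n):
    """Closed-form upper bound the second phase should derive."""
    return math.floor(math.log2(n)) + 1

# The bound is sound (and here tight) over a sample of sizes:
assert all(C(n) <= upper_bound(n) for n in range(1, 1000))
print(C(256), upper_bound(256))  # 9 9
```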
Multivariate Amortized Resource Analysis
, 2010
Abstract

Cited by 43 (5 self)
We study the problem of automatically analyzing the worst-case resource usage of procedures with several arguments. Existing automatic analyses based on amortization or sized types bound the resource usage or result size of such a procedure by a sum of unary functions of the sizes of the arguments. In this paper we generalize this to arbitrary multivariate polynomial functions, thus allowing bounds of the form m·n which had to be grossly overestimated by m² + n² before. Our framework even encompasses bounds like Σ_{i,j≤n} m_i·m_j, where the m_i are the sizes of the entries of a list of length n. This allows us for the first time to derive useful resource bounds for operations on matrices that are represented as lists of lists and to considerably improve bounds on other superlinear operations on lists such as longest common subsequence and removal of duplicates from lists of lists. Furthermore, resource bounds are now closed under composition, which improves accuracy of the analysis of composed programs when some or all of the components exhibit superlinear resource or size behavior. The analysis is based on a novel multivariate amortized resource analysis. We present it in form of a type system for a simple first-order functional language with lists and trees, prove soundness, and describe automatic type inference based on linear programming. We have experimentally validated the automatic analysis on a wide range of examples from functional programming with lists and trees. The obtained bounds were compared with actual resource consumption. All bounds were asymptotically tight, and the constants were close or even identical to the optimal ones.
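The "removal of duplicates from lists of lists" example above shows why multivariate bounds help. In the sketch below (illustrative, not the paper's type system), comparing the i-th and j-th inner lists costs at most min(m_i, m_j) ≤ m_i·m_j element comparisons, so the total cost is bounded by a sum of m_i·m_j over pairs, which no sum of unary functions of the m_i can express tightly.

```python
def eq_cost(xs, ys):
    """Short-circuit list equality plus its element-comparison count."""
    cost = 0
    for x, y in zip(xs, ys):
        cost += 1
        if x != y:
            return False, cost
    return len(xs) == len(ys), cost

def dedup(lists):
    """Naive duplicate removal, counting total comparisons made."""
    out, total = [], 0
    for xs in lists:
        dup = False
        for ys in out:
            same, c = eq_cost(xs, ys)
            total += c
            if same:
                dup = True
                break
        if not dup:
            out.append(xs)
    return out, total

data = [[1, 2, 3], [1, 2, 3], [4, 5], [1, 2, 3, 4]]
sizes = [len(l) for l in data]
# Multivariate bound: sum of m_i * m_j over pairs i > j.
bound = sum(sizes[i] * sizes[j]
            for i in range(len(sizes)) for j in range(i))
result, cost = dedup(data)
assert cost <= bound
print(result, cost, bound)
```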