Results 11–20 of 519
Symbolic Analysis for Parallelizing Compilers
, 1994
Abstract

Cited by 111 (4 self)
Symbolic Domain The objects in our abstract symbolic domain are canonical symbolic expressions. A canonical symbolic expression is a lexicographically ordered sequence of symbolic terms. Each symbolic term is in turn a pair of an integer coefficient and a sequence of pairs, each pairing a pointer to a program variable in the program symbol table with its exponent. The latter sequence is also lexicographically ordered. For example, the abstract value of the symbolic expression 2ij + 3jk, in an environment in which i is bound to (1, ((↑i, 1))), j is bound to (1, ((↑j, 1))), and k is bound to (1, ((↑k, 1))), is ((2, ((↑i, 1), (↑j, 1))), (3, ((↑j, 1), (↑k, 1)))). In our framework, an environment is the abstract analogue of the concept of state; an environment is a function from program variables to abstract symbolic values. Each environment e associates a canonical symbolic value e(x) with each variable x ∈ V; we say that x is bound to e(x). An environment might be represented by...
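A minimal sketch (ours, not the paper's implementation) of the canonical symbolic-expression domain described above: an expression is a sorted tuple of terms, each term a pair of an integer coefficient and a sorted tuple of (variable, exponent) pairs. Variable names stand in for the symbol-table pointers.

```python
def make_term(coeff, factors):
    # Keep the (variable, exponent) list lexicographically ordered.
    return (coeff, tuple(sorted(factors)))

def make_expr(*terms):
    # Keep the term list lexicographically ordered by its factor sequence,
    # so equal expressions always have the same canonical form.
    return tuple(sorted((make_term(c, f) for c, f in terms),
                        key=lambda t: t[1]))

# Abstract value of 2ij + 3jk:
expr = make_expr((2, [("i", 1), ("j", 1)]),
                 (3, [("j", 1), ("k", 1)]))
print(expr)
# ((2, (('i', 1), ('j', 1))), (3, (('j', 1), ('k', 1))))
```

Because both levels are kept sorted, structurally equal expressions compare equal regardless of the order in which their terms were supplied.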
Nonlinear Array Dependence Analysis
, 1991
Abstract

Cited by 93 (6 self)
Standard array data dependence techniques can only reason about linear constraints. There has also been work on analyzing some dependences involving polynomial constraints. Analyzing array data dependences in real-world programs requires handling many "unanalyzable" terms: subscript arrays, run-time tests, function calls. The standard approach to analyzing such programs has been to omit and ignore any constraints that cannot be reasoned about. This is unsound when reasoning about value-based dependences and whether privatization is legal. It also prevents us from determining the conditions that must be true to disprove the dependence. These conditions could be checked by a run-time test, verified by a programmer, or established by aggressive, demand-driven interprocedural analysis. We describe a solution to these problems. Our solution makes our system sound and more accurate for analyzing value-based dependences, and derives conditions that can be used to disprove dependences. We also give some p...
Counting Solutions to Presburger Formulas: How and Why
, 1994
Abstract

Cited by 89 (2 self)
We describe methods that are able to count the number of integer solutions to selected free variables of a Presburger formula, or sum a polynomial over all integer solutions of selected free variables of a Presburger formula. This answer is given symbolically, in terms of symbolic constants (the remaining free variables in the Presburger formula). For example...
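A toy illustration (ours, not the paper's algorithm) of what counting solutions symbolically means: the number of integer pairs (i, j) satisfying 1 ≤ i ≤ j ≤ n is the closed form n(n+1)/2, a polynomial in the remaining free variable n, which we can check against brute-force enumeration for concrete values.

```python
def count_by_enumeration(n):
    # Directly count integer pairs (i, j) with 1 <= i <= j <= n.
    return sum(1 for i in range(1, n + 1) for j in range(i, n + 1))

def count_symbolic(n):
    # Closed form in the symbolic constant n.
    return n * (n + 1) // 2

for n in (0, 1, 5, 10):
    assert count_by_enumeration(n) == count_symbolic(n)
```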
Introduction to set constraint-based program analysis
 Science of Computer Programming
, 1999
An Exact Method for Analysis of Value-based Array Data Dependences
 In Sixth Annual Workshop on Programming Languages and Compilers for Parallel Computing
, 1993
Abstract

Cited by 88 (14 self)
Standard array data dependence testing algorithms give information about the aliasing of array references. If statement 1 writes a[5], and statement 2 later reads a[5], standard techniques describe this as a flow dependence, even if there was an intervening write. We call a dependence between two references to the same memory location a memory-based dependence. In contrast, if there are no intervening writes, the references touch the same value and we call the dependence a value-based dependence. There has been a surge of recent work on value-based array data dependence analysis (also referred to as computation of array dataflow dependence information). In this paper, we describe a technique that is exact over programs without control flow (other than loops) and nonlinear references. We compare our proposal with the technique proposed by Paul Feautrier, which is the other technique that is complete over the same domain as ours. We also compare our work with that of Tu and Padua, a ...
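A small illustration (ours, not from the paper) of the distinction drawn above. The read below has a memory-based flow dependence on both writes to a[5], but a value-based flow dependence only on the second write, because the intervening write kills the first value.

```python
a = [0] * 10
a[5] = 1      # S1: first write to a[5]
a[5] = 2      # S2: intervening write kills S1's value
x = a[5]      # S3: reads the value produced by S2, never S1's
assert x == 2
```

Privatization and array dataflow analysis must reason about the value-based dependence S2 → S3; treating S1 → S3 as a live flow dependence, as a purely memory-based test would, is overly conservative.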
Precise Miss Analysis for Program Transformations with Caches of Arbitrary Associativity
 In Proceedings of the Eighth International Conference on Architectural Support for Programming Languages and Operating Systems
, 1998
Abstract

Cited by 88 (1 self)
Analyzing and optimizing program memory performance is a pressing problem in high-performance computer architectures. Currently, software solutions addressing the processor-memory performance gap include compiler- or programmer-applied optimizations like data structure padding, matrix blocking, and other program transformations. Compiler optimization can be effective, but the lack of precise analysis and optimization frameworks makes it impossible to confidently make optimal, rather than heuristic-based, program transformations. Imprecision is most problematic in situations where hard-to-predict cache conflicts foil heuristic approaches. Furthermore, the lack of a general framework for compiler memory performance analysis makes it impossible to understand the combined effects of several program transformations. The Cache Miss Equation (CME) framework discussed in this paper addresses these issues. We express memory reference and cache conflict behavior in terms of sets of equations. The ...
Semi-automatic composition of loop transformations for deep parallelism and memory hierarchies
 Intl J. of Parallel Programming
, 2006
Abstract

Cited by 82 (26 self)
Modern compilers are responsible for translating the idealistic operational semantics of the source program into a form that makes efficient use of a highly complex heterogeneous machine. Since optimization problems are associated with huge and unstructured search spaces, this combinatorial task is poorly achieved in general, resulting in weak scalability and disappointing sustained performance. We address this challenge by working on the program representation itself, using a semi-automatic optimization approach to demonstrate that current compilers often suffer from unnecessary constraints and intricacies that can be avoided in a semantically richer transformation framework. Technically, the purpose of this paper is threefold: (1) to show that syntactic code representations close to the operational semantics lead to rigid phase ordering and cumbersome expression of architecture-aware loop transformations, (2) to illustrate how complex transformation sequences may be needed to achieve significant performance benefits, (3) to facilitate the automatic search for program transformation sequences, improving on classical polyhedral representations to better support operations research strategies in a simpler, structured search space. The proposed framework relies on a unified polyhedral representation of loops and statements, using normalization rules to allow flexible and expressive transformation sequencing. This representation allows us to extend the scalability of polyhedral dependence analysis, and to delay the (automatic) legality checks until the end of a transformation sequence. Our work builds on algorithmic advances in polyhedral code generation and has been implemented in a modern research compiler.
Code Generation for Multiple Mappings
 In Frontiers '95: The 5th Symposium on the Frontiers of Massively Parallel Computation
, 1994
Abstract

Cited by 81 (2 self)
There has been a great amount of recent work toward unifying iteration reordering transformations. Many of these approaches represent transformations as affine mappings from the original iteration space to a new iteration space. These approaches show a great deal of promise, but they all rely on the ability to generate code that iterates over the points in these new iteration spaces in the appropriate order. This problem has been fairly well-studied in the case where all statements use the same mapping. We have developed an algorithm for the less well-studied case where each statement uses a potentially different mapping. Unlike many other approaches, our algorithm can also generate code from mappings corresponding to loop blocking. We address the important tradeoff between reducing control overhead and duplicating code.
ARCHER: Using Symbolic, Path-sensitive Analysis to Detect Memory Access Errors
 SIGSOFT Softw. Eng. Notes
, 2003
Abstract

Cited by 80 (0 self)
Memory corruption errors lead to non-deterministic, elusive crashes. This paper describes ARCHER (ARray CHeckER), a static, effective memory access checker. ARCHER uses path-sensitive, interprocedural symbolic analysis to bound the values of both variables and memory sizes. It evaluates known values using a constraint solver at every array access, pointer dereference, or call to a function that expects a size parameter. Accesses that violate constraints are flagged as errors. Those that are exploitable by malicious attackers are marked as security holes.
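A highly simplified sketch (ours, not ARCHER itself) of the core idea: the analysis tracks symbolic bounds for each index-valued expression and flags an array access unless the whole index range provably fits within the buffer.

```python
def check_access(index_range, size):
    # index_range is the symbolic (lo, hi) interval derived for the index;
    # the access is safe iff the entire interval lies in [0, size).
    lo, hi = index_range
    return 0 <= lo and hi < size

# For a loop "for i in range(n)", an analysis might derive i in [0, n-1]:
# accessing buf[i] with len(buf) == n is then safe, but buf[i + 1] is not.
n = 8
assert check_access((0, n - 1), n)        # buf[i]: in bounds
assert not check_access((1, n), n)        # buf[i + 1]: overruns by one
```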
The WorstCase Execution Time Problem – Overview of Methods and Survey of Tools
 ACM Transactions on Embedded Computing Systems
, 2008
Abstract

Cited by 76 (16 self)
ATRs (AVACS Technical Reports) are freely downloadable from www.avacs.org. Copyright © April 2007 by the author(s)