Results 11-20 of 29
An algorithm for deciding BAPA: Boolean Algebra with Presburger Arithmetic
 In 20th International Conference on Automated Deduction, CADE-20
, 2005
Abstract

Cited by 26 (13 self)
Abstract. We describe an algorithm for deciding the first-order multisorted theory BAPA, which combines 1) Boolean algebras of sets of uninterpreted elements (BA) and 2) Presburger arithmetic operations (PA). BAPA can express the relationship between integer variables and cardinalities of a priori unbounded finite sets, and supports arbitrary quantification over sets and integers. Our motivation for BAPA is deciding verification conditions that arise in the static analysis of data structure consistency properties. Data structures often use an integer variable to keep track of the number of elements they store; an invariant of such a data structure is that the value of the integer variable is equal to the number of elements stored in the data structure. When the data structure content is represented by a set, the resulting constraints can be captured in BAPA. BAPA formulas with quantifier alternations arise when verifying programs with annotations containing quantifiers, or when proving simulation relation conditions for refinement and equivalence of program fragments. Furthermore, BAPA constraints can be used for proving the termination of programs that manipulate data structures, and have applications in constraint databases. We give a formal description of a decision procedure for BAPA, which implies the decidability of BAPA. We analyze our algorithm and obtain an elementary upper bound on the running time, thereby giving the first complexity bound for BAPA. Because it works by a reduction to PA, our algorithm yields the decidability of a combination of sets of uninterpreted elements with any decidable extension of PA. Our algorithm can also be used to yield an optimal decision procedure for BA through a reduction to PA with bounded quantifiers. We have implemented our algorithm and used it to discharge verification conditions in the Jahob system for data structure consistency checking of Java programs; our experience with the algorithm is promising.
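The size-tracking invariant this abstract describes can be illustrated with a minimal sketch: an integer field shadows the cardinality of a set, and the verification condition is that the two agree. (Hypothetical container for illustration only; this is the kind of constraint BAPA expresses, not the paper's decision procedure.)

```python
class SizedSet:
    """Toy container whose invariant is BAPA-expressible: size = |content|."""

    def __init__(self):
        self.content = set()   # BA part: a set of uninterpreted elements
        self.size = 0          # PA part: an integer variable

    def add(self, x):
        if x not in self.content:
            self.content.add(x)
            self.size += 1

    def remove(self, x):
        if x in self.content:
            self.content.remove(x)
            self.size -= 1

    def invariant(self):
        # The verification condition: the integer tracks the cardinality.
        return self.size == len(self.content)


s = SizedSet()
for x in [1, 2, 2, 3]:   # duplicate insert must not bump the counter
    s.add(x)
s.remove(2)
assert s.invariant()
print(s.size)  # 2
```

Each mutation must preserve the invariant; a static analyzer emits one BAPA verification condition per operation.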
Experience with the SETL optimizer
 ACM Transactions on Programming Languages and Systems
, 1983
Abstract

Cited by 24 (0 self)
The structure of an existing optimizer for the very high-level, set-theoretically oriented programming language SETL is described, and its capabilities are illustrated. The use of novel techniques (supported by state-of-the-art interprocedural program analysis methods) enables the optimizer to accomplish various sophisticated optimizations, the most significant of which are the automatic selection of data representations and the systematic elimination of superfluous copying operations. These techniques allow quite sophisticated data-structure choices to be made automatically. Categories and Subject Descriptors: D.3.2 [Programming Languages]: Language Classifications: very high-level languages; SETL; D.3.4 [Programming Languages]: Processors: compilers; optimization; I.2.2 [Artificial Intelligence]: Automatic Programming: automatic analysis of algorithms; program modification; program transformation
Efficient Translation of External Input in a Dynamically Typed Language
 Technology and Foundations: 13th World Computer Congress '94, IFIP Transactions A-51
, 1994
Abstract

Cited by 16 (4 self)
the related reading problem in SETL [3, 6]. Those algorithms used hashing even for deeply nested data to detect duplicate values. If we assume that hashing unit-space data takes unit expected time and linear worst-case time, then for arbitrary data their algorithm would require linear expected time and quadratic worst-case time in the number of symbols in C.
Keyword Codes: F.3.2, I.1.1, I.1.2
Keywords: Semantics of Programming Languages; Expressions and Their Representation; Algorithms
1. The Reading Problem
Consider an external read operation read v that inputs external data into program variable v. Our framework for understanding what this operation means depends in part on ascribing to v a type t, which represents a set of abstract values val(t). This framework also includes two maps. The first map front_end t
Chameleon: Adaptive Selection of Collections
Abstract

Cited by 12 (1 self)
Languages such as Java and C#, as well as scripting languages like Python and Ruby, make extensive use of Collection classes. A collection implementation represents a fixed choice in the dimensions of operation time, space utilization, and synchronization. Using the collection in a manner not consistent with this fixed choice can cause significant performance degradation. In this paper, we present CHAMELEON, a low-overhead automatic tool that assists the programmer in choosing the appropriate collection implementation for her application. During program execution, CHAMELEON computes elaborate trace and heap-based metrics on collection behavior. These metrics are consumed on the fly by a rules engine which outputs a list of suggested collection adaptation strategies. The tool can apply these corrective strategies automatically or present them to the programmer. We have implemented CHAMELEON on top of IBM's J9 production JVM, and evaluated it over a small set of benchmarks. We show that for some applications, using CHAMELEON leads to a significant improvement of the memory footprint of the application.
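The profile-then-advise loop can be sketched in a few lines: wrap a collection, count the operation mix, and apply a rule that maps the observed mix to a better implementation. (Hypothetical toy heuristic and names; the real CHAMELEON consumes JVM trace and heap metrics through a rules engine.)

```python
from collections import Counter


class ProfiledList:
    """A list wrapper that records its operation mix during execution."""

    def __init__(self):
        self._data = []
        self.ops = Counter()

    def append(self, x):
        self.ops["append"] += 1
        self._data.append(x)

    def __contains__(self, x):
        self.ops["contains"] += 1
        return x in self._data

    def suggestion(self):
        # Toy rule: if membership tests dominate appends, a hash-based
        # set would turn O(n) lookups into expected O(1).
        if self.ops["contains"] > 2 * self.ops["append"]:
            return "set"
        return "list"


p = ProfiledList()
p.append(1)
for _ in range(10):
    _ = 5 in p          # lookup-heavy workload
print(p.suggestion())   # set
```

The fixed-choice mismatch the abstract describes is exactly this case: a list is the wrong representation once membership tests dominate.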
Assessing test data adequacy through program inference
 ACM Transactions on Programming Languages and Systems
, 1983
Abstract

Cited by 12 (0 self)
Despite the almost universal reliance on testing as the means of locating software errors and its long history of use, few criteria have been proposed for deciding when software has been thoroughly tested. As a basis for the development of usable notions of test data adequacy, an abstract definition is proposed and examined, and approximations to this definition are considered.
Data representation synthesis
 In PLDI
, 2011
Abstract

Cited by 9 (4 self)
We consider the problem of specifying combinations of data structures with complex sharing in a manner that is both declarative and results in provably correct code. In our approach, abstract data types are specified using relational algebra and functional dependencies. We describe a language of decompositions that permit the user to specify different concrete representations for relations, and show that operations on concrete representations soundly implement their relational specification. It is easy to incorporate data representations synthesized by our compiler into existing systems, leading to code that is simpler, correct by construction, and comparable in performance to the code it replaces.
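The core idea can be sketched concretely: a relation with a functional dependency admits a concrete decomposition (here, a hash map keyed by the dependency's left-hand side), and relational queries become operations on that representation. (A hypothetical toy; the paper's decomposition language supports far richer shared structures and proves the implementations sound.)

```python
# Abstract view: a relation, i.e. a set of (key, value) tuples,
# with the functional dependency key -> value.
relation = {("alice", 3), ("bob", 7)}

# One concrete decomposition: a dict keyed by the FD's left-hand side.
concrete = {k: v for (k, v) in relation}


def lookup(rep, key):
    """A relational query (select by key) on the concrete decomposition."""
    return rep.get(key)


assert lookup(concrete, "alice") == 3
# Soundness check: the concrete representation denotes the same relation.
assert set(concrete.items()) == relation
print("decomposition is sound for this relation")
```

"Correct by construction" means this soundness relationship is established once for the decomposition language, not re-proved per program.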
Customization of Java library classes using type constraints and profile information
 In Proceedings of the European Conference on Object-Oriented Programming (ECOOP)
Abstract

Cited by 8 (3 self)
Abstract. The use of class libraries increases programmer productivity by allowing programmers to focus on the functionality unique to their application. However, library classes are generally designed with some typical usage pattern in mind, and performance may be suboptimal if the actual usage differs. We present an approach for rewriting applications to use customized versions of library classes that are generated using a combination of static analysis and profile information. Type constraints are used to determine where customized classes may be used, and profile information is used to determine where customization is likely to be profitable. We applied this approach to a number of Java applications by customizing various standard container classes and the omnipresent StringBuffer class, and measured speedups up to 78% and memory footprint reductions up to 46%. The increase in application size due to the added custom classes is limited to 12% for all but the smallest programs.
Concurrent Data Representation Synthesis
Abstract

Cited by 5 (2 self)
We describe an approach for synthesizing data representations for concurrent programs. Our compiler takes as input a program written using concurrent relations and synthesizes a representation of the relations as sets of cooperating data structures as well as the placement and acquisition of locks to synchronize concurrent access to those data structures. The resulting code is correct by construction: individual relational operations are implemented correctly and the aggregate set of operations is serializable and deadlock-free. The relational specification also permits a high-level optimizer to choose the best performing of many possible legal data representations and locking strategies, which we demonstrate with an experiment autotuning a graph benchmark. Categories and Subject Descriptors: D.3.3 [Programming Languages]: Language Constructs and Features: Abstract data types
Collections, Cardinalities, and Relations
Abstract

Cited by 4 (2 self)
Abstract. Logics that involve collections (sets, multisets) and cardinality constraints are useful for reasoning about unbounded data structures and concurrent processes. To make such logics more useful in verification, this paper extends them with the ability to compute direct and inverse relation and function images. We establish decidability and complexity bounds for the extended logics.
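The direct and inverse image operators the abstract adds can be shown over finite relations in a few lines. (An illustration of the operators themselves, under the assumption of finite relations; it is not the paper's decision procedure, which handles the unbounded case symbolically.)

```python
def image(R, S):
    """Direct image R[S] = { y | there is x in S with (x, y) in R }."""
    return {y for (x, y) in R if x in S}


def preimage(R, T):
    """Inverse image R^-1[T] = { x | there is y in T with (x, y) in R }."""
    return {x for (x, y) in R if y in T}


R = {(1, "a"), (1, "b"), (2, "b"), (3, "c")}
print(sorted(image(R, {1, 2})))    # ['a', 'b']
print(sorted(preimage(R, {"b"})))  # [1, 2]

# A cardinality constraint over an image, the kind of mixed formula
# the extended logic can decide:
assert len(image(R, {1, 2})) == 2
```

The extension is what lets a single formula mix set algebra, image computation, and cardinality, e.g. constraining |R[S]| relative to |S|.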
Sets with Cardinality Constraints in Satisfiability Modulo Theories
Abstract

Cited by 4 (0 self)
Abstract. Boolean Algebra with Presburger Arithmetic (BAPA) is a decidable logic that can express constraints on sets of elements and their cardinalities. Problems from verification of complex properties of software often contain fragments that belong to quantifier-free BAPA (QFBAPA). In contrast to many other NP-complete problems (such as quantifier-free first-order logic or linear arithmetic), the application of QFBAPA to a broader set of problems has so far been hindered by the lack of an efficient implementation that can be used alongside other efficient decision procedures. We overcome these limitations by extending the efficient SMT solver Z3 with the ability to reason about cardinality (QFBAPA) constraints. Our implementation uses the DPLL(T) mechanism of Z3 to reason about the top-level propositional structure of a QFBAPA formula, improving the efficiency compared to previous implementations. Moreover, we present a new algorithm for automatically decomposing QFBAPA formulas. Our algorithm alleviates the exponential explosion of considering all Venn regions, significantly improving the tractability of formulas with many set variables. Because it is implemented as a theory plugin, our implementation enables Z3 to prove formulas that use QFBAPA constructs with constructs from other theories that Z3 supports, as well as with quantifiers. We have applied our implementation to the verification of functional programs; we show it can automatically prove formulas that no automated approach was reported to be able to prove before.
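The Venn-region encoding the abstract refers to can be sketched directly: each set expression's cardinality is a sum of nonnegative integer variables, one per Venn region, and a cardinality formula is valid iff the resulting integer identity holds for every region assignment. (An illustration only, checked here by sampling; the paper's contribution is precisely avoiding the enumeration of all regions.)

```python
from itertools import product


def venn_check(a_only, b_only, both):
    """Cardinalities of A and B derived from Venn-region sizes; returns
    whether the inclusion-exclusion identity holds for this assignment."""
    card_a = a_only + both
    card_b = b_only + both
    card_union = a_only + b_only + both
    card_inter = both
    # |A ∪ B| = |A| + |B| - |A ∩ B| as a pure integer (Presburger) fact
    return card_union == card_a + card_b - card_inter

# The identity holds for every nonnegative region assignment, so the
# QFBAPA formula asserting it is valid.
assert all(venn_check(a, b, c) for a, b, c in product(range(4), repeat=3))
print("identity holds on all sampled region assignments")
```

With n set variables there are 2^n regions, which is the exponential explosion the paper's decomposition algorithm mitigates.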