Results 1–10 of 12
Effective static deadlock detection
In 31st International Conference on Software Engineering (ICSE’09). IEEE, 2009
Abstract

Cited by 16 (2 self)
We present an effective static deadlock detection algorithm for Java. Our algorithm uses a novel combination of static analyses, each of which approximates a different necessary condition for a deadlock. We have implemented the algorithm and report upon our experience applying it to a suite of multithreaded Java programs. While neither sound nor complete, our approach is effective in practice, finding all known deadlocks as well as discovering previously unknown ones in our benchmarks with few false alarms.
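One necessary condition such an analysis can approximate is a cycle in the lock-order graph. The sketch below illustrates that single check on observed (held-locks, acquired-lock) pairs; it is a simplified illustration of the condition, not the paper's algorithm, and all names are ours:

```python
def lock_order_graph(acquisitions):
    """Build edges h -> a whenever lock a is acquired while h is held."""
    g = {}
    for held, acquired in acquisitions:
        for h in held:
            g.setdefault(h, set()).add(acquired)
    return g

def has_cycle(g):
    """A cycle in the lock-order graph is a necessary condition for deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def dfs(u):
        color[u] = GRAY
        for v in g.get(u, ()):
            c = color.get(v, WHITE)
            if c == GRAY or (c == WHITE and dfs(v)):
                return True
        color[u] = BLACK
        return False
    return any(dfs(u) for u in list(g) if color.get(u, WHITE) == WHITE)
```

A thread taking A then B while another takes B then A yields edges A→B and B→A, a cycle; the real analysis must additionally establish that the two acquisitions can run in parallel, which is what the paper's other component analyses approximate.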
Geometric Encoding: Forging the High Performance Context Sensitive Points-to Analysis for Java
Abstract

Cited by 6 (1 self)
Context-sensitive points-to analysis suffers from a scalability problem. We present the geometric encoding to capture the redundancy in the points-to analysis. Compared to BDD and EPA, the state of the art, the geometric encoding is much more efficient in processing the encoded facts, especially for high-order context sensitivity with heap cloning. We also developed two precision-preserving techniques, constraints distillation and 1CFA SCC modeling, to further improve the efficiency, in addition to the precision-performance tradeoff scheme. We evaluate our points-to algorithm with two variants of the geometric encoding, Geom and HeapIns, on 15 widely cited Java benchmarks. The evaluation shows that the Geom-based algorithm is 11x and 68x faster than the worklist- and BDD-based 1-object-sensitive analysis in Paddle, and the speedup steeply goes up to 24x and 111x if the HeapIns algorithm is used. Meanwhile, while being very efficient in time, the precision is still equal to and sometimes better than the 1-object-sensitive analysis.
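The paper's actual encoding maps contexts onto geometric figures; as a rough one-dimensional illustration of the redundancy it exploits, consecutive context numbers sharing a points-to fact can be stored as a single interval rather than enumerated one by one. This sketch is our own illustration of that idea, not the paper's data structure:

```python
def add_interval(intervals, lo, hi):
    """Insert the half-open context range [lo, hi) into a sorted, disjoint
    interval list, merging any ranges that overlap or touch it."""
    merged = []
    for a, b in intervals:
        if b < lo or hi < a:          # disjoint and not adjacent: keep as is
            merged.append((a, b))
        else:                         # overlapping or adjacent: absorb it
            lo, hi = min(lo, a), max(hi, b)
    merged.append((lo, hi))
    merged.sort()
    return merged

# one points-to fact shared by 1000 contexts collapses to a single interval
contexts = add_interval(add_interval([], 1, 501), 501, 1001)
```

Processing one interval instead of a thousand enumerated contexts is the kind of saving that makes high-order context sensitivity with heap cloning tractable.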
Learning Minimal Abstractions
Abstract

Cited by 5 (2 self)
Static analyses are generally parametrized by an abstraction which is chosen from a family of abstractions. We are interested in flexible families of abstractions with many parameters, as these families can allow one to increase precision in ways tailored to the client without sacrificing scalability. For example, we consider k-limited points-to analyses where each call site and allocation site in a program can have a different k value. We then ask a natural question in this paper: what is the minimal (coarsest) abstraction in a given family which is able to prove a set of client queries? In addressing this question, we make the following two contributions: (i) we introduce two machine learning algorithms for efficiently finding a minimal abstraction; and (ii) for a static race detector backed by a k-limited points-to analysis, we show empirically that minimal abstractions are actually quite coarse: it suffices to provide context/object sensitivity to a very small fraction (0.4–2.3%) of the sites to yield results as precise as providing context/object sensitivity uniformly to all sites.
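The paper's contribution is the machine-learning search; a naive greedy baseline, assuming a monotone oracle `proves` (a hypothetical stand-in for running the analysis under a given per-site setting), already conveys the question being asked:

```python
def minimal_refined_sites(sites, proves):
    """Greedy coarsening: start with every site refined and try to demote each
    one to the coarse setting; keep the demotion if the query still proves.
    `proves` must be monotone: refining more sites never loses a proof."""
    refined = set(sites)
    assert proves(refined), "query must hold under the finest abstraction"
    for s in sorted(sites):              # deterministic order for the sketch
        if proves(refined - {s}):
            refined -= {s}
    return refined

# toy client: the query holds iff sites 'a' and 'c' stay refined
proves = lambda refined: {'a', 'c'} <= refined
minimal = minimal_refined_sites({'a', 'b', 'c', 'd'}, proves)
```

On this toy client the greedy pass returns just {'a', 'c'}, mirroring the empirical finding that few sites need refinement; the paper's learning algorithms avoid the one-oracle-call-per-site cost this baseline pays.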
Scaling Abstraction Refinement via Pruning
Abstract

Cited by 4 (1 self)
Many static analyses do not scale as they are made more precise. For example, increasing the amount of context sensitivity in a k-limited pointer analysis causes the number of contexts to grow exponentially with k. Iterative refinement techniques can mitigate this growth by starting with a coarse abstraction and only refining parts of the abstraction that are deemed relevant with respect to a given client. In this paper, we introduce a new technique called pruning that uses client feedback in a different way. The basic idea is to use coarse abstractions to prune away parts of the program analysis deemed irrelevant for proving a client query, and then to use finer abstractions on the sliced program analysis. For a k-limited pointer analysis, this approach amounts to adaptively refining and pruning a set of prefix patterns representing the contexts relevant for the client. By pruning, we are able to scale up to much more expensive abstractions than before. We also prove that the pruned analysis is both sound and complete; that is, it yields the same results as an analysis that uses the more expensive abstraction directly without pruning.
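The prune-then-refine idea can be sketched on a toy reachability client: a cheap coarse pass keeps only the part of the problem that can possibly matter for the query, and the expensive fine analysis runs on that slice alone. This is a generic illustration of the pattern, not the paper's prefix-pattern algorithm:

```python
from collections import deque

def reach(adj, start):
    """Plain BFS reachability: the cheap, coarse analysis."""
    seen, q = {start}, deque([start])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def prune_then_refine(edges, src, sink, fine_analysis):
    """Keep only edges on some src-to-sink path, then run the expensive
    fine_analysis on the pruned slice instead of the whole problem."""
    fwd, bwd = {}, {}
    for u, v in edges:
        fwd.setdefault(u, []).append(v)
        bwd.setdefault(v, []).append(u)
    keep = reach(fwd, src) & reach(bwd, sink)          # coarse prune
    pruned = [(u, v) for u, v in edges if u in keep and v in keep]
    return fine_analysis(pruned, src, sink), pruned
```

Edges that cannot lie on any source-to-sink path never reach the fine analysis; showing that such pruning loses no results is exactly the soundness-and-completeness theorem the paper proves for its k-limited setting.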
Sound Predictive Race Detection in Polynomial Time
Abstract

Cited by 4 (1 self)
Data races are among the most reliable indicators of programming errors in concurrent software. For at least two decades, Lamport’s happens-before (HB) relation has served as the standard test for detecting races; other techniques, such as lockset-based approaches, fail to be sound, as they may falsely warn of races. This work introduces a new relation, causally-precedes (CP), which generalizes happens-before to observe more races without sacrificing soundness. Intuitively, CP tries to capture those happens-before-ordered events that must occur in the observed order for the program to observe the same values. What distinguishes CP from past predictive race detection approaches (which also generalize an observed execution to detect races in other plausible executions) is that CP-based race detection is both sound and of polynomial complexity. We demonstrate that the unique aspects of CP result in practical benefit. Applying CP to real-world programs, we successfully analyze server-level applications (e.g., Apache FtpServer) and show that traces longer than in past predictive race analyses can be analyzed in mere seconds to a few minutes. For these programs, CP race detection uncovers races that are hard to detect by repeated execution and HB race detection: a single run of CP race detection produces several races not discovered by 10 separate rounds of happens-before race detection.
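As background for what CP generalizes, the standard happens-before detector can be sketched with vector clocks; this is the textbook HB technique the paper compares against, not the CP algorithm itself, and the trace format is our own:

```python
from collections import defaultdict

def hb_races(trace, nthreads):
    """Flag pairs of conflicting accesses not ordered by happens-before.
    trace: list of (tid, op, target) with op in {'acq', 'rel', 'rd', 'wr'}."""
    C = [[1 if i == t else 0 for i in range(nthreads)] for t in range(nthreads)]
    L = defaultdict(lambda: [0] * nthreads)   # clock published at last release
    hist = defaultdict(list)                  # var -> [(idx, tid, clock, op)]
    races = []
    for i, (t, op, x) in enumerate(trace):
        if op == 'acq':                       # acquire: join the lock's clock
            C[t] = [max(a, b) for a, b in zip(C[t], L[x])]
        elif op == 'rel':                     # release: publish, then tick
            L[x] = C[t][:]
            C[t][t] += 1
        else:                                 # read or write of variable x
            for j, u, clk, prev in hist[x]:
                conflict = (op == 'wr' or prev == 'wr') and u != t
                if conflict and clk[u] > C[t][u]:   # prior access unordered
                    races.append((j, i))
            hist[x].append((i, t, C[t][:], op))
            C[t][t] += 1
    return races
```

Two unsynchronized writes to the same variable are flagged, while writes separated by a release/acquire of a common lock are ordered and pass; CP's contribution is to soundly report more of the former without the exponential cost of full predictive analysis.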
Implementing Sparse Flow-Sensitive Andersen Analysis
, 2009
Abstract

Cited by 2 (0 self)
Andersen’s analysis is the most influential pointer analysis known so far. This paper, which contains parts of the author’s upcoming PhD thesis, for the first time presents a flow-sensitive version of that analysis. We prove that the flow-sensitive version still has the same cubic complexity. Thus, the higher precision comes without loss of asymptotic scalability. This contradicts the common wisdom that flow-sensitivity is substantially more expensive. Compared to other flow-sensitive pointer analyses, we have no expensive dataflow problem on the CFG. Instead, we simply propagate pointer targets along dataflow relations which we determine during the analysis. Our analysis in fact combines the computation of the interprocedural SSA dataflow representation and the uncovering of pointer targets. It also integrates the computation of control-flow relations. The analysis thus presents a new, sparse approach for the flow-sensitive solution of the central problems for dataflow-based program analyses. This paper also presents two extensions for higher precision. The first extension shows how the analysis can detect strong updates without increasing the complexity. The second extension describes a context-sensitive version which excludes unrealizable paths. Together this yields the first analysis of that precision which only has a complexity of O(n^4). This is a substantial improvement over the previous O(n^6) bound found by Landi. Thus, in summary, this report describes several theoretical advances in the field of flow-sensitive pointer analysis. It also provides details on the algorithms used for incremental SSA construction and context-sensitive pointer propagation.
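For reference, the classic flow-insensitive Andersen analysis that this paper makes flow-sensitive can be written as a small inclusion-constraint worklist solver. This is the textbook baseline, not the paper's sparse SSA-based algorithm:

```python
from collections import defaultdict, deque

def andersen(constraints):
    """Flow-insensitive inclusion-based points-to analysis.
    Constraint forms: ('addr', p, o)   p = &o
                      ('copy', p, q)   p = q
                      ('load', p, q)   p = *q
                      ('store', p, q)  *p = q"""
    pts = defaultdict(set)
    succ = defaultdict(set)            # copy edges: pts(src) subset of pts(dst)
    loads, stores = defaultdict(list), defaultdict(list)
    work = deque()
    for c in constraints:
        if c[0] == 'addr':
            pts[c[1]].add(c[2])
            work.append(c[1])
        elif c[0] == 'copy':
            succ[c[2]].add(c[1])
        elif c[0] == 'load':
            loads[c[2]].append(c[1])
        else:
            stores[c[1]].append(c[2])
    def edge(src, dst):
        if dst not in succ[src]:
            succ[src].add(dst)
            if pts[src] - pts[dst]:
                pts[dst] |= pts[src]
                work.append(dst)
    while work:
        n = work.popleft()
        for o in list(pts[n]):
            for p in loads[n]:         # p = *n: pts(o) flows into p
                edge(o, p)
            for q in stores[n]:        # *n = q: pts(q) flows into o
                edge(q, o)
        for d in list(succ[n]):
            if pts[n] - pts[d]:
                pts[d] |= pts[n]
                work.append(d)
    return pts
```

The paper's version additionally orders statements, so a points-to fact only holds after the assignment that creates it, which enables strong updates; the cubic bound of this baseline is what the paper shows survives that extension.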
Large Program Trace Analysis and Compression with
Abstract
Prior work has shown that reduced, ordered binary decision diagrams (BDDs) can be a powerful tool for program trace analysis and visualization. Unfortunately, it can take hours or days to encode large traces as BDDs. Further, techniques used to improve BDD performance are inapplicable to large dynamic program traces. This paper explores the use of ZDDs for compressing dynamic trace data. Prior work has shown that ZDDs can represent sparse data sets with less memory compared to BDDs. This paper demonstrates that (1) ZDDs do indeed provide greater compression for sets of dynamic traces (25% smaller than BDDs on average), (2) with proper tuning, ZDDs encode sets of dynamic trace data over 9× faster than BDDs, and (3) ZDDs can be used for all prior applications of BDDs for trace analysis and visualization.
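ZDDs get their compression from a single reduction rule: a node whose "variable present" branch is the empty family is dropped entirely, which is why sparse sets store so compactly. A minimal ZDD with union and counting, assuming integer variables with the smallest at the root (an illustrative toy; real trace work would use a tuned package such as CUDD):

```python
class ZDD:
    """Minimal zero-suppressed decision diagram over integer variables.
    Terminals: 0 = empty family, 1 = family containing only the empty set.
    Internal nodes are tuples (var, lo, hi), smallest variable at the root."""
    def __init__(self):
        self.table = {}                  # unique table for node sharing
    def node(self, v, lo, hi):
        if hi == 0:                      # zero-suppression rule
            return lo
        return self.table.setdefault((v, lo, hi), (v, lo, hi))
    def family(self, *sets):             # encode a family of explicit sets
        r = 0
        for s in sets:
            z = 1
            for v in sorted(s, reverse=True):
                z = self.node(v, 0, z)
            r = self.union(r, z)
        return r
    def union(self, a, b):
        if a == 0 or a == b:
            return b
        if b == 0:
            return a
        if a == 1:
            a, b = b, a                  # keep the non-terminal first
        va, la, ha = a
        if b == 1:                       # add the empty set via the lo branch
            return self.node(va, self.union(la, 1), ha)
        vb, lb, hb = b
        if va == vb:
            return self.node(va, self.union(la, lb), self.union(ha, hb))
        if va < vb:
            return self.node(va, self.union(la, b), ha)
        return self.node(vb, self.union(lb, a), hb)
    def count(self, z):                  # number of sets in the family
        if z in (0, 1):
            return z
        _, lo, hi = z
        return self.count(lo) + self.count(hi)
```

Encoding a set of traces as a family of sets this way makes duplicate and overlapping traces share structure automatically, which is the effect the paper measures at scale.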
Computing the Least Fixpoint Semantics of Definite Logic Programs Using
, 2009
Abstract
Abstract: We present the semantic foundations for computing the least fixpoint semantics of definite logic programs using only standard operations over boolean functions. More precisely, we propose a representation of sets of first-order terms by boolean functions and a provably sound formulation of intersection, union, and projection (an operation similar to restriction in relational databases) using conjunction, disjunction, and existential quantification. We report on a prototype implementation of a logic solver using Binary Decision Diagrams (BDDs) to represent boolean functions and compute the above-mentioned three operations. This work paves the way for efficient solvers for particular classes of logic programs, e.g., static program analyses, which leverage BDD technologies to factorise similarities in the solution space. Keywords: semantics, binary decision diagrams, logic programs
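The least fixpoint in question is that of the immediate-consequence operator T_P. An explicit-set version of that iteration, where a BDD-based solver would instead encode each relation as a boolean function, can be sketched as:

```python
def least_fixpoint(facts, rules):
    """Naive bottom-up evaluation: apply the immediate-consequence operator
    T_P until no new facts appear. Facts are ground atoms (name, args);
    each rule maps the current database to the atoms it derives."""
    db = set(facts)
    while True:
        derived = set()
        for rule in rules:
            derived |= rule(db)
        if derived <= db:          # nothing new: least fixpoint reached
            return db
        db |= derived

# transitive closure: path(X,Y) :- edge(X,Y).  path(X,Z) :- path(X,Y), edge(Y,Z).
edges = {('edge', ('a', 'b')), ('edge', ('b', 'c')), ('edge', ('c', 'd'))}
rules = [
    lambda db: {('path', args) for name, args in db if name == 'edge'},
    lambda db: {('path', (x, w))
                for n1, (x, y) in db if n1 == 'path'
                for n2, (z, w) in db if n2 == 'edge' and z == y},
]
```

The rule bodies here are the explicit-set analogues of the three operations the paper formulates over boolean functions: the join is an intersection plus projection, and accumulating `derived` into `db` is the union.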
Scaling context-sensitive points-to . . .
TRANSACTIONS ON PROGRAMMING LANGUAGES AND SYSTEMS
"... ..."