Results 1–10 of 94
Dynamine: Finding Common Error Patterns by Mining Software Revision Histories
In ESEC/FSE, 2005
Abstract

Cited by 165 (12 self)
A great deal of attention has lately been given to addressing software bugs such as errors in operating system drivers or security bugs. However, there are many other lesser-known errors specific to individual applications or APIs, and these violations of application-specific coding rules are responsible for a multitude of errors. In this paper we propose DynaMine, a tool that analyzes source code check-ins to find highly correlated method calls as well as common bug fixes in order to automatically discover application-specific coding patterns. Potential patterns discovered through mining are passed to a dynamic analysis tool for validation; finally, the results of dynamic analysis are presented to the user. The combination of revision history mining and dynamic analysis techniques leveraged in DynaMine proves effective both for discovering new application-specific patterns and for finding errors when applied to very large applications with many man-years of development and debugging effort behind them. We have analyzed Eclipse and jEdit, two widely used, mature, highly extensible applications consisting of more than 3,600,000 lines of code combined. By mining revision histories, we have discovered 56 previously unknown, highly application-specific patterns. Of these, 21 were dynamically confirmed as very likely valid patterns, and a total of 263 pattern violations were found.
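The mining step the abstract describes, finding method calls that are frequently added together in the same check-in, can be sketched as a simple co-occurrence count. This is a hypothetical simplification (the method names and revision data below are invented); DynaMine's actual ranking uses support and confidence measures over real revision histories:

```python
from collections import Counter
from itertools import combinations

def mine_call_pairs(checkins, min_support=2):
    """Count how often two method calls are added in the same check-in.

    checkins: list of sets of method names added per revision.
    Returns pairs seen at least `min_support` times, most frequent first.
    """
    pair_counts = Counter()
    for added_calls in checkins:
        for pair in combinations(sorted(added_calls), 2):
            pair_counts[pair] += 1
    return [(pair, n) for pair, n in pair_counts.most_common()
            if n >= min_support]

# Hypothetical revision data: each set is the calls added by one check-in.
history = [
    {"beginOp", "endOp", "log"},
    {"beginOp", "endOp"},
    {"beginOp", "endOp", "refresh"},
    {"log"},
]
print(mine_call_pairs(history))
# ("beginOp", "endOp") co-occur in 3 check-ins, suggesting a pairing pattern
```

A pair that survives the support threshold becomes a candidate usage pattern; in DynaMine, such candidates are then handed to the dynamic analysis for validation.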
Scalable error detection using Boolean satisfiability
In Proc. 32nd POPL, ACM, 2005
Abstract

Cited by 118 (10 self)
We describe a software error-detection tool that exploits recent advances in Boolean satisfiability (SAT) solvers. Our analysis is path sensitive, precise down to the bit level, and models pointers and heap data. Our approach is also highly scalable, which we achieve using two techniques. First, for each program function, several optimizations compress the size of the Boolean formulas that model the control flow, data flow, and heap locations accessed by a function. Second, summaries in the spirit of type signatures are computed for each function, allowing interprocedural analysis without a dramatic increase in the size of the Boolean constraints to be solved. We demonstrate the effectiveness of our approach by constructing a lock interface inference and checking tool. In an interprocedural analysis of more than 23,000 lock-related functions in the latest Linux kernel, the checker generated 300 warnings, of which 179 were unique locking errors, a false positive rate of only 40%.
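The path-sensitive flavor of such a lock checker can be illustrated with a toy that enumerates paths explicitly rather than encoding them symbolically for a SAT solver. This is a drastically simplified, hypothetical stand-in (the real tool compresses paths into Boolean formulas; it does not enumerate them), but it shows the property being checked, that no single path acquires a lock it already holds:

```python
from itertools import product

def double_lock_paths(branches):
    """Enumerate paths through a function with binary branches and flag
    any path that acquires an already-held lock.

    branches: list of (ops_if_true, ops_if_false) pairs, where each ops
    list contains "lock" / "unlock" operations.
    Returns the operation sequences of the erroneous paths.
    """
    errors = []
    for choices in product([True, False], repeat=len(branches)):
        held = False
        path = []
        for (t_ops, f_ops), take in zip(branches, choices):
            for op in (t_ops if take else f_ops):
                path.append(op)
                if op == "lock":
                    if held:              # second acquire on the same path
                        errors.append(path[:])
                    held = True
                elif op == "unlock":
                    held = False
    return errors

# Hypothetical function: branch 1 may lock; branch 2 always locks.
bad = double_lock_paths([(["lock"], []), (["lock"], ["lock"])])
print(len(bad))  # 2: both paths that take branch 1's "lock" then lock again
```

The SAT-based approach gets the same per-path precision without the exponential enumeration, by asking the solver whether any satisfying assignment (i.e., any feasible path) reaches the error condition.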
Abstraction refinement for termination
In Proceedings of the 12th International Static Analysis Symposium
Abstract

Cited by 65 (14 self)
Abstract. Abstraction can often lead to spurious counterexamples. Counterexample-guided abstraction refinement is a method of strengthening abstractions based on the analysis of these spurious counterexamples. For invariance properties, a counterexample is a finite trace that violates the invariant; it is spurious if it is possible in the abstraction but not in the original system. When proving termination or other liveness properties of infinite-state systems, a useful notion of spurious counterexamples has remained an open problem. For this reason, no counterexample-guided abstraction refinement algorithm was known for termination. In this paper, we address this problem and present the first known automatic counterexample-guided abstraction refinement algorithm for termination proofs. We exploit recent results on transition invariants and transition predicate abstraction. We identify two reasons for spuriousness: abstractions that are too coarse, and candidate transition invariants that are too strong. Our counterexample-guided abstraction refinement algorithm successively weakens candidate transition invariants and refines the abstraction.
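One core idea here is that a candidate transition invariant is "too strong" when some actual transition of the program falls outside it, and that violating transition drives the refinement. This can be illustrated on a tiny integer loop. The toy below works over explicitly sampled transitions; the paper's algorithm works symbolically over infinite-state systems, so this is only a hypothetical illustration of the check, not of the method:

```python
def check_candidate(transitions, candidate):
    """Return the (sampled) transitions that violate a candidate transition
    invariant. An empty result supports the candidate; a non-empty one
    plays the role of the counterexample that triggers weakening.

    transitions: list of (state, next_state) pairs.
    candidate:   predicate over (state, next_state).
    """
    return [(s, s2) for (s, s2) in transitions if not candidate(s, s2)]

# Toy loop "while x > 0: x -= 1", sampled on small states.
transitions = [(x, x - 1) for x in range(1, 6)]

too_strong = lambda s, s2: s2 == 0   # claims every step lands on 0
ok = lambda s, s2: s2 < s            # x strictly decreases (well-founded)

print(check_candidate(transitions, too_strong))  # violations: weaken it
print(check_candidate(transitions, ok))          # empty: candidate holds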
SATURN: A Scalable Framework for Error Detection Using Boolean Satisfiability
Abstract

Cited by 49 (0 self)
This article presents SATURN, a general framework for building precise and scalable static error detection systems. SATURN exploits recent advances in Boolean satisfiability (SAT) solvers and is path sensitive, precise down to the bit level, and models pointers and heap data. Our approach is also highly scalable, which we achieve using two techniques. First, for each program function, several optimizations compress the size of the Boolean formulas that model the control flow and data flow and the heap locations accessed by a function. Second, summaries in the spirit of type signatures are computed for each function, allowing interprocedural analysis without a dramatic increase in the size of the Boolean constraints to be solved. We have experimentally validated our approach by conducting two case studies involving a Linux lock checker and a memory leak checker. Results from the experiments show that our system scales well, parallelizes well, and finds more errors with fewer false positives than previous static …
A Survey of Automated Techniques for Formal Software Verification
In IEEE Transactions on CAD, 2008
Abstract

Cited by 49 (5 self)
The software in an electronic system is often the greatest concern with respect to quality and design flaws. Formal verification tools can provide a guarantee that a design is free of specific flaws. We survey algorithms that perform automatic, static analysis of software to detect programming errors or prove their absence. The three techniques we consider are static analysis with abstract domains, model checking, and bounded model checking. We provide a short tutorial on these techniques, highlighting their differences when applied to practical problems. We also survey the available tools implementing these techniques and describe their merits and shortcomings.
Cogent: Accurate theorem proving for program verification
In Proceedings of CAV 2005, volume 3576 of Lecture Notes in Computer Science, 2005
Abstract

Cited by 41 (11 self)
Abstract. Many symbolic software verification engines such as Slam and ESC/Java rely on automatic theorem provers. The existing theorem provers, such as Simplify, lack precise support for important programming language constructs such as pointers, structures, and unions. This paper describes a theorem prover, Cogent, that accurately supports all ANSI-C expressions. The prover's implementation is based on a machine-level interpretation of expressions into propositional logic, and supports finite machine-level variables, bit operations, structures, unions, references, pointers, and pointer arithmetic. When used by Slam during the model checking of over 300 benchmarks, Cogent's improved accuracy reduced the number of Slam timeouts by half, increased the number of true errors found, and decreased the number of false errors.
Predicate Abstraction with AdjustableBlock Encoding
Abstract

Cited by 32 (21 self)
Abstract. Several successful software model checkers are based on a technique called single-block encoding (SBE), which computes costly predicate abstractions after every single program operation. Large-block encoding (LBE) computes abstractions only after a large number of operations, and it was shown that this significantly improves verification performance. In this work, we present adjustable-block encoding (ABE), a unifying framework that can express both previous approaches. In addition, it provides the flexibility to specify any block size between SBE and LBE, and also beyond LBE, through the adjustment of a single parameter. Such a unification of different concepts makes it easier to understand the fundamental properties of the analysis and makes the differences between the variants more explicit. We evaluate different configurations on example C programs and identify one that is currently the best.
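The "single parameter" idea, one knob that slides the analysis from SBE through LBE and beyond, can be sketched by grouping a straight-line operation sequence into the blocks after which an abstraction would be computed. This is a hypothetical illustration of the knob only (real ABE operates on control-flow graphs, not flat sequences, and the operation strings below are invented):

```python
def encode_blocks(ops, block_size):
    """Group a straight-line sequence of operations into blocks; an
    abstraction would be computed after each block.

    block_size=1 mimics SBE (abstraction after every operation);
    block_size=len(ops) mimics an LBE-like encoding (one abstraction
    for the whole sequence).
    """
    return [ops[i:i + block_size] for i in range(0, len(ops), block_size)]

ops = ["x=0", "y=x+1", "assume(y>0)", "z=y*2"]
print(encode_blocks(ops, 1))   # SBE-like: 4 blocks, 4 abstraction steps
print(encode_blocks(ops, 4))   # LBE-like: 1 block, 1 abstraction step
```

Intermediate values of `block_size` trade fewer, larger formulas against more frequent, cheaper abstraction computations, which is the trade-off ABE makes tunable.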
Consistency analysis of authorization hook placement in the Linux security modules framework
In ACM Transactions on Information and System Security (TISSEC), 2004
Abstract

Cited by 22 (8 self)
We present a consistency analysis approach to assist the Linux community in verifying the correctness of authorization hook placement in the Linux Security Modules (LSM) framework. The LSM framework consists of a set of authorization hooks inserted into the Linux kernel to enable additional authorizations to be performed (e.g., for mandatory access control). When compared to system call interposition, authorization within the kernel has both security and performance advantages, but it is more difficult to verify that placement of the LSM hooks ensures that all the kernel's security-sensitive operations are authorized. Static analysis has been used previously to verify mediation (i.e., that some hook mediates access to a security-sensitive operation), but that work did not determine whether the necessary set of authorizations was checked. In this paper, we develop an approach to test the consistency of the relationships between security-sensitive operations and LSM hooks. The idea is that whenever a security-sensitive operation is performed as part of a specifiable event, a particular set of LSM hooks must have mediated that operation. This work demonstrates that the number of events that impact consistency is manageable and that the notion of consistency is useful for verifying correctness. We describe our consistency approach for performing verification, the implementation of runtime tools that implement this approach, the …
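The consistency idea, that the same security-sensitive operation should always be mediated by the same set of hooks, can be sketched as a check over event traces. This is a simplified, hypothetical model (the event and hook names are invented, and the paper's runtime tools work on real kernel instrumentation, not toy traces):

```python
def consistent_hooks(traces):
    """For each security-sensitive operation, collect the set of LSM hooks
    that preceded it in each trace, and report operations whose mediating
    hook sets differ between occurrences.

    Each trace is a list of ("hook", name) / ("op", name) events; the
    hooks seen since the last operation are taken to mediate the next one
    (a simplifying modeling assumption).
    """
    seen = {}  # op name -> list of frozensets of mediating hooks
    for trace in traces:
        hooks = set()
        for kind, name in trace:
            if kind == "hook":
                hooks.add(name)
            else:  # security-sensitive operation
                seen.setdefault(name, []).append(frozenset(hooks))
                hooks = set()
    return {op: sets for op, sets in seen.items() if len(set(sets)) > 1}

traces = [
    [("hook", "file_permission"), ("op", "read_inode")],
    [("op", "read_inode")],  # same op reached with no mediating hook
]
print(consistent_hooks(traces))
# flags "read_inode": mediated by {file_permission} once, by nothing once
```

An operation flagged this way is exactly the kind of anomaly the analysis surfaces: either the unmediated path is a missing-hook bug, or the extra hook on the other path is redundant, and a human decides which.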
Symbolic model checking for asynchronous boolean programs
In SPIN, 2005
Abstract

Cited by 21 (7 self)
Abstract. Software model checking problems generally contain two different types of nondeterminism: 1) nondeterministically chosen values; 2) the choice of interleaving among threads. Most modern software model checkers can handle only one source of nondeterminism efficiently, but not both. This paper describes a SAT-based model checker for asynchronous Boolean programs that handles both sources effectively. We address the first type of nondeterminism with a form of symbolic execution and fixpoint detection. We address the second source of nondeterminism using a symbolic and dynamic partial-order reduction, which is implemented inside the SAT solver's case-splitting algorithm. Preliminary experimental results show that the new algorithm outperforms existing software model checkers on large benchmarks.
A summary of intrinsic partitioning verification
In Proceedings of the Fifth International Workshop on the ACL2 Theorem Prover and Its Applications (ACL2), 2004
Abstract

Cited by 20 (4 self)
Successful formal methods applications have four characteristics: intrinsically important applications, concise correctness theorems, validated models, and proof automation. We describe a recently completed verification of a microprocessor's intrinsic partitioning mechanism in those terms. What Makes for a Good Application of Formal Methods? Formal methods is the application of mathematical reasoning to establish properties about digital systems. Formal methods can be applied in many different ways with many different notations and tools. They can deal with system models that describe the lowest level of implementation or the most abstract requirements, with properties to be proved that may be comprehensive descriptions of "correctness" or minor aspects that indicate good system development. Despite the wide range of formal methods applications, we observe that successful formal methods projects share four characteristics. 1. The target being analyzed is intrinsically important. Formal methods can provide a high level of certainty about a target, but the extra assurance must be worth the effort that formal verification usually entails. Three applications of formal methods that we consider successful are Microsoft's SLAM project [Ball2004], AMD's floating-point verification [Russinoff2000], and Rockwell Collins' requirements validation [Miller2004]. The SLAM project aims to reduce crashes of Microsoft's Windows OS by proving important device driver behaviors. AMD's floating-point work seeks to eliminate errors in the floating-point units of AMD's x86 microprocessors. Rockwell Collins is applying model checking to help validate requirements for safety-critical systems. Each of these applications of formal methods is solving a problem that is important enough to justify an extra effort. 2. The target's desired behavior has a concise and understandable formalization.
An important indicator of successful formal methods application is the degree to which the description of the needed property is compelling. A proved theorem only increases assurance about a target of evaluation if we trust the formalization of the desired …