Lazy Satisfiability Modulo Theories
Journal on Satisfiability, Boolean Modeling and Computation 3 (2007) 141–224
Abstract

Cited by 161 (45 self)
Satisfiability Modulo Theories (SMT) is the problem of deciding the satisfiability of a first-order formula with respect to some decidable first-order theory T (SMT(T)). These problems are typically not handled adequately by standard automated theorem provers. SMT is being recognized as increasingly important due to its applications in many domains in different communities, in particular in formal verification. A large number of papers with novel and very efficient techniques for SMT has been published in recent years, and some very efficient SMT tools are now available. Typical SMT(T) problems require testing the satisfiability of formulas which are Boolean combinations of atomic propositions and atomic expressions in T, so that heavy Boolean reasoning must be efficiently combined with expressive theory-specific reasoning. The dominating approach to SMT(T), called the lazy approach, is based on the integration of a SAT solver and of a decision procedure able to handle sets of atomic constraints in T (T-solver), handling respectively the Boolean and the theory-specific components of reasoning. Unfortunately, neither the problem of building an efficient SMT solver, nor even that ...
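The lazy architecture the abstract describes — a Boolean engine proposing truth assignments whose implied atom sets are checked by a T-solver — can be caricatured in a few lines. In the illustrative sketch below, the "SAT solver" is brute-force enumeration, the theory is difference logic (atoms of the form x - y <= c) decided by Bellman-Ford negative-cycle detection, and only atoms assigned true are sent to the theory solver (a common simplification). All names are invented; real lazy solvers add clause learning, early pruning, and theory propagation.

```python
from itertools import product

def t_solver(constraints, vars_):
    """Return True iff the difference constraints x - y <= c are consistent.
    Each constraint (x, y, c) is an edge y -> x with weight c; the set is
    inconsistent exactly when the graph has a negative cycle."""
    dist = {v: 0 for v in vars_}            # virtual source at distance 0
    for _ in range(len(vars_)):
        for (x, y, c) in constraints:
            if dist[y] + c < dist[x]:
                dist[x] = dist[y] + c
    # any remaining possible relaxation means a negative cycle
    return all(dist[y] + c >= dist[x] for (x, y, c) in constraints)

def lazy_smt(clauses, atoms, vars_):
    """clauses: CNF over 1-based atom indices (negative int = negated atom).
    atoms: difference-logic atoms (x, y, c) meaning x - y <= c."""
    n = len(atoms)
    for bits in product([False, True], repeat=n):   # stand-in for a SAT solver
        boolean_ok = all(any(bits[abs(l) - 1] == (l > 0) for l in cl)
                         for cl in clauses)
        if not boolean_ok:
            continue
        # theory-check the set of atoms assigned true
        implied = [atoms[i] for i in range(n) if bits[i]]
        if t_solver(implied, vars_):
            return True
    return False
```

For example, with atoms x - y <= -1 and y - x <= -1 (i.e. x < y and y < x), the CNF [[1, 2]] is satisfiable (pick one atom) while [[1], [2]] is not (both atoms together have a negative cycle).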
Back to the Future: Revisiting Precise Program Verification using SMT Solvers
POPL '08, 2008
Abstract

Cited by 73 (15 self)
This paper takes a fresh look at the problem of precise verification of heap-manipulating programs using first-order Satisfiability-Modulo-Theories (SMT) solvers. We augment the specification logic of such solvers by introducing the Logic of Interpreted Sets and Bounded Quantification for specifying properties of heap-manipulating programs. Our logic is expressive, closed under weakest preconditions, and efficiently implementable on top of existing SMT solvers. We have created a prototype implementation of our logic over the solvers SIMPLIFY and Z3 and used our prototype to verify many programs. Our preliminary experience is encouraging; the completeness and the efficiency of the decision procedure are clearly evident in practice and have greatly improved the user experience of the verifier.
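Closure under weakest preconditions, which the abstract highlights, means that for every statement S and postcondition Q in the logic, wp(S, Q) is again expressible in the logic. The generic backward wp computation can be sketched over a toy straight-line language (this is an illustrative semantics, not the paper's LISBQ logic); predicates are modeled as Python callables over a state dictionary:

```python
def wp(program, post):
    """Weakest precondition of a straight-line program w.r.t. `post`.
    Statements: ("assign", var, expr_fn) or ("assert", cond_fn)."""
    pred = post
    for stmt in reversed(program):
        if stmt[0] == "assign":
            _, var, expr = stmt
            # wp(x := e, Q) = Q[e/x]  (substitute e for x in Q)
            pred = (lambda q, v, e: lambda s: q({**s, v: e(s)}))(pred, var, expr)
        elif stmt[0] == "assert":
            _, cond = stmt
            # wp(assert C, Q) = C and Q
            pred = (lambda q, c: lambda s: c(s) and q(s))(pred, cond)
    return pred

# usage: for the program  x := x + 1; assert x > 0  and post x > 1,
# the computed precondition is equivalent to x > 0
prog = [("assign", "x", lambda s: s["x"] + 1),
        ("assert", lambda s: s["x"] > 0)]
pre = wp(prog, lambda s: s["x"] > 1)
```

A state with x = 1 satisfies the computed precondition, while x = 0 does not, matching the manual wp calculation.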
Relational Inductive Shape Analysis
In ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, 2008
Abstract

Cited by 69 (11 self)
Shape analyses are concerned with precise abstractions of the heap to capture detailed structural properties. To do so, they need to build and decompose summaries of disjoint memory regions. Unfortunately, many data structure invariants require that relations be tracked across disjoint regions, such as intricate numerical data invariants or structural invariants concerning back and cross pointers. In this paper, we identify issues inherent to analyzing relational structures and propose an abstract domain parameterized both by an abstract domain for pure data properties and by user-supplied specifications of the data structure invariants to check. Particularly, it supports hybrid invariants about shape and data and features a generic mechanism for materializing summaries at the beginning, middle, or end of inductive structures. Around this domain, we build a shape analysis whose interesting components include a pre-analysis on the user-supplied specifications that guides the abstract interpretation and a widening operator over the combined shape and data domain. We then demonstrate our techniques on the proof of preservation of the red-black tree invariants during insertion.
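The red-black preservation proof mentioned at the end is a classic hybrid shape-and-data property: it relates a structural fact (tree shape) to data carried at each node (colors). For concreteness, here is a small checker for two of those invariants on explicit trees — no red node has a red child, and every root-to-leaf path passes the same number of black nodes. The tuple encoding is invented for this sketch, and BST key ordering is deliberately not checked:

```python
def check_rb(node, is_root=True):
    """node is None (leaf) or (color, left, key, right) with color "R"/"B".
    Returns the black height of the subtree, or None if an invariant fails."""
    if node is None:
        return 1                         # leaves count as black
    color, left, _key, right = node
    if is_root and color != "B":
        return None                      # root must be black
    lh = check_rb(left, False)
    rh = check_rb(right, False)
    if lh is None or rh is None or lh != rh:
        return None                      # unequal black heights below
    if color == "R":
        for child in (left, right):
            if child is not None and child[0] == "R":
                return None              # red node with a red child
        return lh
    return lh + 1                        # black node adds to black height
```

A verifier in the paper's setting establishes this invariant symbolically across insertion; the checker above only tests it on concrete trees.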
Unifying Type Checking and Property Checking for Low-Level Code
, 2009
Abstract

Cited by 42 (14 self)
We present a unified approach to type checking and property checking for low-level code. Type checking for low-level code is challenging because type safety often depends on complex, program-specific invariants that are difficult for traditional type checkers to express. Conversely, property checking for low-level code is challenging because it is difficult to write concise specifications that distinguish between locations in an untyped program’s heap. We address both problems simultaneously by implementing a type checker for low-level code as part of our property checker. We present a low-level formalization of a C program’s heap and its types that can be checked with an SMT solver, and we provide a decision procedure for checking type safety. Our type system is flexible enough to support a combination of nominal and structural subtyping for C, on a per-structure basis. We discuss several case studies that demonstrate the ability of this tool to express and check complex type invariants in low-level C code, including several small Windows device drivers.
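The per-structure choice between nominal and structural subtyping can be illustrated with a toy rule: a structural struct is a subtype of any structural struct whose field list is a prefix of its own (the common C idiom of embedding a "header" struct first), while a nominal struct matches only itself by name. The struct table and prefix rule below are invented for illustration and are not the paper's actual decision procedure:

```python
# Hypothetical struct declarations; "mode" picks the subtyping discipline
# for that structure, mirroring the per-structure flexibility in the abstract.
structs = {
    "Header": {"mode": "structural",
               "fields": [("kind", "int"), ("size", "int")]},
    "Packet": {"mode": "structural",
               "fields": [("kind", "int"), ("size", "int"), ("payload", "ptr")]},
    "Opaque": {"mode": "nominal",
               "fields": [("kind", "int"), ("size", "int")]},
}

def subtype(sub, sup):
    """Is `sub` usable where `sup` is expected, under the toy rules?"""
    if sub == sup:
        return True                      # reflexivity, nominal or structural
    a, b = structs[sub], structs[sup]
    if a["mode"] == "nominal" or b["mode"] == "nominal":
        return False                     # nominal types match by name only
    # structural rule: sup's field list must be a prefix of sub's
    return a["fields"][:len(b["fields"])] == b["fields"]
```

So Packet can be passed where Header is expected, but Opaque cannot, even though its fields are layout-identical to Header's.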
Specification and Verification: The Spec# Experience
, 2009
Abstract

Cited by 35 (6 self)
Spec# is a programming system that puts specifications in the hands of programmers and includes tools that use them. The system includes an object-oriented programming language with specification constructs, a compiler that emits executable code and runtime checks for specifications, a programming methodology that gives rules for structuring programs and for using specifications, and a static program verifier that attempts to mathematically prove the correctness of programs. This paper reflects on the six-year experience of building and using Spec#, the scientific contributions of the project, remaining challenges for tools that seek to establish program correctness, and prospects of incorporating program verification into everyday software engineering.
This is Boogie 2
, 2008
Abstract

Cited by 32 (5 self)
Boogie is an intermediate verification language, designed to make the prescription of verification conditions natural and convenient. It serves as a common intermediate representation for static program verifiers of various source languages, and it abstracts over the interfaces to various theorem provers. Boogie has also been used as a basis for abstract interpretation and predicate abstraction. As a language, Boogie has both mathematical and imperative components. The imperative components of Boogie specify sets of execution traces, the states of which are described and constrained by the mathematical components. The language includes features like parametric polymorphism, partial orders, logical quantifications, nondeterminism, total expressions, partial statements, and flexible control flow. The Boogie language was previously known as BoogiePL. This paper is a reference manual for Boogie version 2.
A Polymorphic Intermediate Verification Language: Design and Logical Encoding
Abstract

Cited by 32 (3 self)
Intermediate languages are a paradigm to separate concerns in software verification systems when bridging the gap between programming languages and the logics understood by theorem provers. While such intermediate languages traditionally only offer rather simple type systems, this paper argues that it is both advantageous and feasible to integrate richer type systems with features like (higher-ranked) polymorphism and quantification over types. As a concrete solution, the paper presents the type system of Boogie 2, an intermediate verification language that is used in several program verifiers. The paper gives two encodings of types and formulae in simply typed logic such that SMT solvers and other theorem provers can be used to discharge verification conditions.
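One standard way to make polymorphism digestible for a simply typed or untyped prover — and the general spirit of encodings like the paper's, though not its actual translation — is to reify types as ground first-order terms and pass them as extra arguments, so an application f&lt;T&gt;(a) becomes f(rep(T), a). A minimal sketch of that idea, with an invented term representation:

```python
def ty(name, *args):
    """A type representation is a ground term: a name plus argument reps,
    e.g. ty("list", ty("int")) for list<int>."""
    return (name,) + args

def encode_app(f, tyargs, args):
    """Encode the polymorphic application f<T1..Tn>(a1..am) as the untyped
    first-order application f(rep(T1), ..., rep(Tn), a1, ..., am)."""
    return (f,) + tuple(tyargs) + tuple(args)

# example: Cons<int>(1, Nil<int>())
int_t = ty("int")
nil = encode_app("Nil", [int_t], [])
cell = encode_app("Cons", [int_t], [1, nil])
```

Because the type reps are ordinary terms, a polymorphic axiom such as head(Cons&lt;T&gt;(x, l)) = x becomes a single quantified first-order axiom over the extra type argument, rather than one axiom per instantiation.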
Automating Separation Logic Using SMT
Abstract

Cited by 19 (0 self)
 Add to MetaCart
(Show Context)
Separation logic (SL) has gained widespread popularity because of its ability to succinctly express complex invariants of a program’s heap configurations. Several specialized provers have been developed for decidable SL fragments. However, these provers cannot be easily extended or combined with solvers for other theories that are important in program verification, e.g., linear arithmetic. In this paper, we present a reduction of decidable SL fragments to a decidable first-order theory that fits well into the satisfiability modulo theories (SMT) framework. We show how to use this reduction to automate satisfiability, entailment, frame inference, and abduction problems for separation logic using SMT solvers. Our approach provides a simple method of integrating separation logic into existing verification tools that provide SMT backends, and an elegant way of combining SL fragments with other decidable first-order theories. We implemented this approach in a verification tool and applied it to heap-manipulating programs whose verification involves reasoning in theory combinations.
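The two connectives at the heart of any such reduction are points-to and separating conjunction: a heap is a finite partial map, a |-> v denotes the exact one-cell heap, and P * Q requires splitting the heap into two disjoint parts satisfying P and Q respectively. The explicit-heap checker below is a toy semantic model standing in for the paper's symbolic SMT encoding; the term encoding is invented for illustration:

```python
from itertools import combinations

def pts(a, v):
    return ("pts", a, v)      # points-to assertion a |-> v

def sep(*parts):
    return ("sep", parts)     # separating conjunction; sep() is emp

def holds(heap, phi):
    """Does the concrete heap (a dict) satisfy the assertion `phi`?"""
    if phi[0] == "pts":
        _, a, v = phi
        return heap == {a: v}            # exactly one cell, nothing else
    _, parts = phi
    if not parts:
        return heap == {}                # emp: the empty heap
    head, rest = parts[0], parts[1:]
    keys = list(heap)
    # try every way to carve out a disjoint subheap satisfying `head`
    for r in range(len(keys) + 1):
        for ks in combinations(keys, r):
            sub = {k: heap[k] for k in ks}
            remainder = {k: heap[k] for k in keys if k not in ks}
            if holds(sub, head) and holds(remainder, ("sep", rest)):
                return True
    return False
```

Note how disjointness falls out of the split: pts(1, 2) * pts(1, 2) is unsatisfiable because the single cell cannot appear in both parts, which is exactly the kind of constraint the SMT reduction must express symbolically.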
A scalable memory model for low-level code
In Conf. on Verification, Model Checking and Abstract Interpretation (VMCAI), 2009
Abstract

Cited by 16 (2 self)
Because of its critical importance underlying all other software, low-level system software is among the most important targets for formal verification. Low-level systems software must sometimes make type-unsafe memory accesses, but because of the vast size of available heap memory in today’s computer systems, faithfully representing each memory allocation and access does not scale when analyzing large programs. Instead, verification tools rely on abstract memory models to represent the program heap. This paper reports on two related investigations to develop an accurate (i.e., providing a useful level of soundness and precision) and scalable memory model: First, we compare a recently introduced memory model, specifically designed to more accurately model low-level memory accesses in systems code, to an older, widely adopted memory model. Unfortunately, we find that the newer memory model scales poorly compared to the earlier, less accurate model. Next, we investigate how to improve the soundness of the less accurate model. A direct approach is to add assertions to the code that each memory access does not break the assumptions of the memory model, but this causes verification complexity to blow up. Instead, we develop a novel, extremely lightweight static analysis that quickly and conservatively guarantees that most memory accesses safely respect the assumptions of the memory model, thereby eliminating almost all of these extra type-checking assertions. Furthermore, this analysis allows us to automatically create memory models that flexibly use the more scalable memory model for most of memory, while resorting to a more accurate model for memory accesses that might need it.
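The trade-off the abstract describes can be caricatured in a few lines: a single flat map over addresses remains accurate even under type-unsafe accesses, while per-field ("Burstall-style") maps scale much better for provers but silently assume no address is reached through two different fields — exactly the assumption the paper's lightweight analysis discharges. Class names and the encoding below are invented for illustration:

```python
class ByteModel:
    """One flat map from addresses to values: accurate for type-unsafe
    code, but every store can potentially alias every load, which is
    what makes verification conditions over it scale poorly."""
    def __init__(self):
        self.mem = {}
    def store(self, addr, val):
        self.mem[addr] = val
    def load(self, addr):
        return self.mem.get(addr)

class FieldModel:
    """One map per field name: stores to field f provably never affect
    loads of field g, which is cheap for provers but sound only if no
    address is accessed through two different fields."""
    def __init__(self):
        self.mem = {}
    def store(self, field, addr, val):
        self.mem.setdefault(field, {})[addr] = val
    def load(self, field, addr):
        return self.mem.get(field, {}).get(addr)

# a type-unsafe alias: write through field "f", read through field "g"
# at the same address; the byte model sees it, the field model misses it
byte_mem = ByteModel()
byte_mem.store(100, 7)
field_mem = FieldModel()
field_mem.store("f", 100, 7)
```

A static analysis that proves most accesses never mix fields at one address lets a verifier use the cheap FieldModel almost everywhere and fall back to the ByteModel only for the remaining accesses, which is the hybrid the abstract ends on.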