Results 1–10 of 11
Random Testing in Isabelle/HOL
Software Engineering and Formal Methods (SEFM 2004), 2004
Cited by 42 (2 self)
When developing non-trivial formalizations in a theorem prover, a considerable amount of time is devoted to “debugging” specifications and conjectures by failed proof attempts. To detect such problems early in the proof and save development time, we have extended the Isabelle theorem prover with a tool for testing specifications by evaluating propositions under an assignment of random values to free variables. Distribution of the test data is optimized via mutation testing. The technical contributions are an extension of earlier work with inductive definitions and a generic method for randomly generating elements of recursive datatypes.
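One of the paper's named contributions is a generic method for randomly generating elements of recursive datatypes. A minimal sketch of the size-bounded idea in Python (the datatype and all names here are our own illustration, not the authors' Isabelle/ML code):

```python
import random

# Hypothetical sketch: size-bounded random generation for a recursive
# datatype (binary trees), so that generation always terminates.

class Leaf:
    pass

class Node:
    def __init__(self, value, left, right):
        self.value, self.left, self.right = value, left, right

def random_tree(size, rng):
    """Pick a constructor at random; recursive calls get a smaller size bound."""
    if size == 0 or rng.random() < 0.3:
        return Leaf()
    return Node(rng.randint(0, 100),
                random_tree(size - 1, rng),
                random_tree(size - 1, rng))

def depth(t):
    if isinstance(t, Leaf):
        return 0
    return 1 + max(depth(t.left), depth(t.right))

rng = random.Random(0)
trees = [random_tree(5, rng) for _ in range(100)]
print(all(depth(t) <= 5 for t in trees))  # the size bound caps the depth
```

Decrementing the size bound on every recursive call is what guarantees termination even for datatypes whose naive random generator would recurse forever.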
Functional Testing in the Focal Environment
Cited by 8 (3 self)
This article presents test case generation and execution within the Focal framework. In the Focal programming language, all properties of a program are written within its source code; these properties are considered here as the program specification. We are interested in testing the code against these properties. Testing a property is split into two stages. First, the property is decomposed into several elementary properties, where an elementary property is a tuple of preconditions and a conclusion. Then each elementary property is tested separately: the preconditions are used to generate and select the test cases at random, and the conclusion is used to compute the verdict. The whole testing process is automated.
Random Testing in PVS
In: Workshop on Automated Formal Methods (AFM), 2006
Cited by 5 (1 self)
Formulas are difficult to formulate and to prove, and are often invalid during specification development. Testing formulas prior to attempting any proofs could potentially save a lot of effort. Here we describe an implementation of random testing in the PVS verification system.
Verifying Haskell Programs by Combining Testing and Proving
 In Proceedings of the Third International Conference on Quality Software
Cited by 4 (0 self)
We propose a method for improving confidence in the correctness of Haskell programs by combining testing and proving. Testing is used for debugging programs and specifications before a costly proof attempt. During proof development, testing also quickly eliminates wrong conjectures. Proving helps us to decompose a testing task in a way that is guaranteed to be correct. To demonstrate the method we have extended the Agda/Alfa proof assistant for dependent type theory with a tool for random testing. As an example we show how the correctness of a BDD algorithm written in Haskell is verified by testing properties of its component functions. We also discuss faithful translations from Haskell to type theory.
Constraint Reasoning in FocalTest
Cited by 3 (2 self)
Program testing implies selecting test data from the program's input space. In many cases, test data satisfying user-specified properties or preconditions (a.k.a. positive test data) are required. However, current automatic test data generation techniques adopt direct generate-and-test approaches for this task. In FocalTest, the testing tool of the Focal correct-by-construction environment for developing certified object-oriented functional programs, test data are generated at random and rejected when they do not satisfy the preconditions. In this paper, we improve FocalTest with a test-and-generate approach based on constraint reasoning. A particular difficulty is the handling of function calls in the preconditions, as they require constraint reasoning on conditionals and pattern matching, which introduce disjunctions into the constraint systems. Our experimental results show that a non-naive implementation of constraint reasoning on these constructions outperforms traditional generation techniques when used to find test data for testing properties.
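The contrast the paper draws can be illustrated with a toy precondition, "the input list is sorted" (our own Python sketch, not FocalTest's constraint solver): rejection sampling wastes almost all of its draws, while a generator that takes the constraint into account satisfies the precondition by construction.

```python
import random

def is_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

def rejection_sample(rng, tries=10000):
    """Generate-and-test: draw random lists, count the rare ones that
    happen to satisfy the precondition."""
    hits = 0
    for _ in range(tries):
        xs = [rng.randint(0, 100) for _ in range(6)]
        if is_sorted(xs):
            hits += 1
    return hits

def constrained_sample(rng):
    """Constraint-aware generation: build data satisfying the
    precondition directly, so no draw is wasted."""
    return sorted(rng.randint(0, 100) for _ in range(6))

rng = random.Random(1)
print(rejection_sample(rng))                # only a handful of 10000 random lists are sorted
print(is_sorted(constrained_sample(rng)))   # True by construction
```

A random length-6 list is sorted with probability roughly 1/6!, so rejection discards over 99.8% of the generated data; this is the inefficiency the constraint-reasoning approach removes.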
Automatic Proof and Disproof in Isabelle/HOL
2011
Cited by 3 (1 self)
Isabelle/HOL is a popular interactive theorem prover based on higher-order logic. It owes its success to its ease of use and powerful automation. Much of the automation is performed by external tools: the metaprover Sledgehammer relies on resolution provers and SMT solvers for its proof search, the counterexample generator Quickcheck uses the ML compiler as a fast evaluator for ground formulas, and its rival Nitpick is based on the model finder Kodkod, which performs a reduction to SAT. Together with the Isar structured proof format and a new asynchronous user interface, these tools have radically transformed the Isabelle user experience. This paper provides an overview of the main automatic proof and disproof tools.
Testing Noninterference, Quickly
Cited by 2 (2 self)
Information-flow control mechanisms are difficult to design and labor-intensive to prove correct. To reduce the time wasted on doomed proofs for broken definitions, we advocate modern random testing techniques for finding counterexamples during the design process. We show how to use QuickCheck, a property-based random-testing tool, to guide the design of a simple information-flow abstract machine. We find that both sophisticated strategies for generating well-distributed random programs and readily falsifiable formulations of noninterference properties are critically important. We propose several approaches and evaluate their effectiveness on a collection of injected bugs of varying subtlety. We also present an effective technique for shrinking large counterexamples to minimal, easily comprehensible ones. Taken together, our best methods enable us to quickly and automatically generate simple counterexamples for all these bugs.
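The shrinking step mentioned above can be sketched as a greedy search (our own minimal Python version, not the paper's machinery): repeatedly replace a failing input with a smaller variant that still falsifies the property, until no smaller variant fails.

```python
def shrink_list(xs):
    """Candidate smaller inputs: drop one element, or decrement one element."""
    for i in range(len(xs)):
        yield xs[:i] + xs[i+1:]
    for i, x in enumerate(xs):
        if x > 0:
            yield xs[:i] + [x - 1] + xs[i+1:]

def minimize(prop, counterexample):
    """Greedily walk toward a minimal input that still falsifies prop."""
    xs = counterexample
    progress = True
    while progress:
        progress = False
        for candidate in shrink_list(xs):
            if not prop(candidate):       # candidate still falsifies the property
                xs = candidate
                progress = True
                break
    return xs

# Buggy property: "no list contains an element >= 10".
prop = lambda xs: all(x < 10 for x in xs)
big_failure = [3, 17, 0, 25, 8]
print(minimize(prop, big_failure))  # → [10], a minimal, comprehensible counterexample
```

Real shrinkers for machine states are more structured, but the principle is the same: the developer debugs the one-element list `[10]`, not the original noisy failure.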
Supporting Dependently Typed Functional Programming with Testing and User-Assisted Proof Automation
Cited by 1 (1 self)
Developing dependently typed functional programs can be difficult because the user may be required to write proofs, and program errors are often hard to identify and fix. We describe a framework, implemented in Coq, that combines testing with user-assisted proof automation to make development easier. Testing occurs within Coq and is used to give the user feedback on program errors and faulty conjectures, as well as to guide automated proof search. Dependently typed functional programming languages, such as Epigram [1] and ATS [2], offer an approach for developing verified software. These languages use dependent types to assign more accurate types to terms than simple typing does, thereby enabling program properties to be verified at compile time. However, programming with dependent types can be difficult: the user can be expected to write proofs for the proof obligations that arise, and program errors can be hard to identify and fix. In this paper, we describe techniques that combine testing and user-assisted proof automation to make dependently typed functional programming easier. These techniques are generic enough to support user-defined types and functions. We have implemented our ideas in the Coq theorem prover [3], which can be used as a dependently typed programming language. The contributions of this paper are:
– A description of how to provide useful counterexample-based feedback on program errors (see Section 2).
– A description of the important role of testing in our user-assisted proof automation (see Section 3).
– A small-scale usability study examining the utility of our counterexample-based error feedback (see Section 4).
The New Quickcheck for Isabelle: Random, Exhaustive and Symbolic Testing Living Under One Roof
Cited by 1 (0 self)
The new Quickcheck is a counterexample generator for Isabelle/HOL that uncovers faulty specifications and invalid conjectures using various testing strategies. The previous Quickcheck only tested conjectures by random testing. The new Quickcheck extends the previous one and integrates two novel testing strategies: exhaustive testing with concrete values, and symbolic testing, which evaluates conjectures with a narrowing strategy. Orthogonally to the strategies, we address two general issues: first, we extend the class of executable conjectures and specifications; second, we present techniques to deal with conditional conjectures, i.e., conjectures with restrictive premises. We evaluate the testing strategies and techniques on a number of specifications, functional data structures, and a hotel key card system.
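Exhaustive testing with concrete values can be sketched as plain enumeration up to a size bound (an illustrative Python toy, not Isabelle's generated test code): unlike random sampling, it deterministically finds every counterexample within the bound.

```python
from itertools import product

def exhaustive_check(prop, bound):
    """Test a two-variable integer conjecture on every pair in [-bound, bound]^2."""
    values = range(-bound, bound + 1)
    for x, y in product(values, values):
        if not prop(x, y):
            return (x, y)          # counterexample found
    return None                    # no counterexample within the bound

# Invalid conjecture: subtraction is commutative.
print(exhaustive_check(lambda x, y: x - y == y - x, 2))  # → (-2, -1), the first refuting pair
```

The enumeration order makes the strategy deterministic and biased toward small counterexamples, which is why it complements rather than replaces random testing.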
Testing First-Order Logic Axioms in Program Verification
Program verification systems based on automated theorem provers rely on user-provided axioms in order to verify domain-specific properties of code. However, formulating axioms correctly (that is, formalizing properties of an intended mathematical interpretation) is non-trivial in practice, and avoiding or even detecting unsoundness can be difficult. Moreover, judging the soundness of axioms from the output of the provers themselves is not easy, since they do not typically give counterexamples. We adopt the idea of model-based testing to aid axiom authors in discovering errors in axiomatizations. To test the validity of axioms, users define a computational model of the axiomatized logic by giving interpretations to the function symbols and constants in a simple declarative programming language. We have developed an axiom testing framework that helps automate model definition and test generation using off-the-shelf tools for metaprogramming, property-based random testing, and constraint solving. We have experimented with our tool to test the axioms used in AutoCert, a program verification system that has been applied to verify aerospace flight code using a first-order axiomatization of navigational concepts, and were able to find counterexamples for a number of axioms.
Key words: model-based testing, program verification, automated theorem proving, property-based testing, constraint solving
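The model-based workflow can be sketched in Python (the interpretation and the deliberately unsound axiom below are our own toy example, not AutoCert's axioms): give the function symbols an executable interpretation, then test the universally quantified formula on random ground instances.

```python
import random

# Interpretation of the function symbols over concrete 2-D points.
def norm2(p):
    x, y = p
    return x * x + y * y

def scale(k, p):
    x, y = p
    return (k * x, k * y)

# Candidate axiom: norm2(scale(k, p)) == k * norm2(p).  Unsound as stated:
# the right-hand side should read k * k * norm2(p).
def axiom(k, p):
    return norm2(scale(k, p)) == k * norm2(p)

def find_counterexample(trials=1000):
    """Test the quantified formula on random ground instances of k and p."""
    rng = random.Random(7)
    for _ in range(trials):
        k = rng.randint(-5, 5)
        p = (rng.randint(-5, 5), rng.randint(-5, 5))
        if not axiom(k, p):
            return (k, p)
    return None

print(find_counterexample())  # the tester exposes the missing square in the axiom
```

A theorem prover fed this axiom would silently become unsound; the executable model surfaces a concrete refuting instance instead.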