Results 1 – 6 of 6
NORA/HAMMR: Making Deduction-Based Software Component Retrieval Practical
, 1997
Cited by 40 (4 self)
Deduction-based software component retrieval uses pre- and postconditions as indexes and search keys and an automated theorem prover (ATP) to check whether a component matches. This idea is very simple, but the vast number of arising proof tasks makes a practical implementation very hard. We thus pass the components through a chain of filters of increasing deductive power. In this chain, rejection filters based on signature matching and model checking techniques are used to rule out non-matches as early as possible and to prevent the subsequent ATP from "drowning." Hence, intermediate results of reasonable precision are available at (almost) any time of the retrieval process. The final ATP step then works as a confirmation filter to lift the precision of the answer set. We implemented a chain which runs fully automatically and uses MACE for model checking and the automated prover SETHEO as confirmation filter. We evaluated the system over a medium-sized collection of components. The resul...
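The filter-chain architecture the abstract describes can be sketched in a few lines. Everything below is hypothetical: the component library, the filter names, and the sample-based checks that merely stand in for MACE-style model checking and the final SETHEO proof step.

```python
# Toy sketch of a filter chain of increasing deductive power:
# cheap rejection filters run first, an expensive "prover" confirms last.

def signature_filter(query_sig, components):
    """Rejection filter: rule out components whose signature cannot match."""
    return [c for c in components if c["sig"] == query_sig]

def model_check_filter(query_pred, components):
    """Rejection filter: test the match condition on a few sample inputs
    (a cheap stand-in for model checking)."""
    samples = [0, 1, -1, 7]
    return [c for c in components
            if all(query_pred(x, c["impl"](x)) for x in samples)]

def prover_filter(query_pred, components):
    """Confirmation filter: exhaustive check over a bounded domain
    (a stand-in for the ATP confirmation step)."""
    return [c for c in components
            if all(query_pred(x, c["impl"](x)) for x in range(-50, 50))]

def retrieve(query_sig, query_pred, library):
    candidates = signature_filter(query_sig, library)
    candidates = model_check_filter(query_pred, candidates)
    return prover_filter(query_pred, candidates)

library = [
    {"name": "abs",  "sig": "int->int", "impl": abs},
    {"name": "neg",  "sig": "int->int", "impl": lambda x: -x},
    {"name": "show", "sig": "int->str", "impl": str},
]

# Query: a component returning a non-negative value of the same magnitude.
matches = retrieve("int->int", lambda x, y: y >= 0 and abs(x) == y, library)
print([c["name"] for c in matches])  # only "abs" survives all three filters
```

Note how "show" falls out at the cheapest filter and "neg" at the model-checking stage, so the expensive final check only sees one candidate, which is exactly the point of the chain.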
Synthesizing certified code
 Proc. Intl. Symp. Formal Methods Europe 2002: Formal Methods—Getting IT Right, LNCS 2391
, 2002
Cited by 30 (16 self)
Abstract. Code certification is a lightweight approach for formally demonstrating software quality. Its basic idea is to require code producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates that can be checked independently. Since code certification uses the same underlying technology as program verification, it requires detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. Given a high-level specification, our approach simultaneously generates code and all annotations required to certify the generated code. We describe a certification extension of AutoBayes, a synthesis tool for automatically generating data analysis programs. Based on built-in domain knowledge, proof annotations are added and used to generate proof obligations that are discharged by the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator and memory safety on a data-classification program. For this program, our approach was faster and more precise than PolySpace, a commercial static analysis tool.
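The core pipeline, annotations generated with the code, turned into proof obligations, then discharged, can be illustrated with a toy verification-condition generator. All names (`Access`, `vc_for`, `discharge`) and the concrete-binding "prover" are hypothetical stand-ins, not AutoBayes or E-SETHEO machinery.

```python
# Hypothetical sketch: the synthesizer emits code *and* annotations;
# a VC generator turns each annotated array access into a proof obligation.
from dataclasses import dataclass

@dataclass
class Access:
    index_expr: str   # e.g. "i", the index used in the generated code
    bound_expr: str   # e.g. "n", the array length
    invariant: str    # annotation supplied at synthesis time, not by hand

def vc_for(access):
    """Obligation: the synthesized invariant implies the access is in bounds."""
    return f"({access.invariant}) -> (0 <= {access.index_expr} < {access.bound_expr})"

# Annotation generated together with the code.
acc = Access(index_expr="i", bound_expr="n", invariant="0 <= i and i < n")
obligation = vc_for(acc)
print(obligation)

def discharge(ob, bindings):
    """Toy 'prover': check the implication on concrete variable bindings."""
    inv, concl = ob.split(" -> ")
    env = dict(bindings)
    return (not eval(inv, {}, env)) or eval(concl, {}, env)

# The obligation holds for every binding we try, in and out of the loop range.
assert all(discharge(obligation, {"i": i, "n": 5}) for i in range(-3, 8))
```

A real certification chain would hand the obligation to a theorem prover rather than testing bindings, but the division of labor is the same: the synthesizer supplies the invariant, so no human has to annotate the code.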
A Taxonomy of Parallel Strategies for Deduction
 Annals of Mathematics and Artificial Intelligence
, 1999
Cited by 17 (1 self)
This paper presents a taxonomy of parallel theorem-proving methods based on the control of search (e.g., master-slaves versus peer processes), the granularity of parallelism (e.g., fine, medium and coarse grain) and the nature of the method (e.g., ordering-based versus subgoal-reduction). We analyze how the different approaches to parallelization affect the control of search: while fine- and medium-grain methods, as well as master-slaves methods, generally do not modify the sequential search plan, parallel-search methods may combine sequential search plans (multi-search) or extend the search plan with the capability of subdividing the search space (distributed search). Precisely because the search plan is modified, the latter methods may produce radically different searches than their sequential base, as exemplified by the first distributed proof of the Robbins theorem generated by the Modified Clause-Diffusion prover Peers-mcd. An overview of the state of the field and directions for future research conclude the paper.
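The multi-search idea, several sequential search plans attacking the same problem in parallel with the first result winning, can be sketched with standard-library threading. The two "plans" below are hypothetical stand-ins that just traverse the same search space in different orders; a real prover would run genuinely different inference strategies.

```python
# Hypothetical sketch of multi-search: competing search plans run in
# parallel and the first plan to complete supplies the answer.
from concurrent.futures import ThreadPoolExecutor, as_completed

def plan_breadth(goal):
    # stand-in for one sequential search plan
    return sum(range(goal))

def plan_depth(goal):
    # stand-in for a second plan with a different traversal order
    return sum(reversed(range(goal)))

def multi_search(goal, plans):
    with ThreadPoolExecutor(max_workers=len(plans)) as pool:
        futures = {pool.submit(p, goal): p.__name__ for p in plans}
        for fut in as_completed(futures):
            # first completed plan wins; the rest are abandoned
            return futures[fut], fut.result()

winner, proof = multi_search(10, [plan_breadth, plan_depth])
print(winner, proof)  # either plan may finish first; both reach the same result
```

Since no plan's internal search order is changed, this is the "combine sequential search plans" case of the taxonomy; distributed search, by contrast, would subdivide the space between the workers.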
Generating Lemmas for Tableau-based Proof Search Using Genetic Programming
Top-down or analytical provers based on the connection tableau calculus are rather powerful, yet have notable shortcomings regarding redundancy control. A well-known and successful technique for alleviating these shortcomings is the use of lemmas. We propose to use genetic programming to evolve useful lemmas through an interleaved process of top-down goal decomposition and bottom-up lemma generation. Experimental studies show that our method compares very favorably with existing methods, improving on run time and on the number of solvable problems.
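The evolutionary loop behind such lemma generation can be sketched in miniature. Everything here is a hypothetical toy: lemmas are integers, a lemma "closes" a subgoal it divides, and fitness counts closed subgoals; real GP would evolve logical formulas and measure usefulness inside the tableau search.

```python
# Hypothetical GP-style loop: candidate lemmas are mutated and selected by
# how many open subgoals they help close.
import random
random.seed(0)  # make the sketch reproducible

SUBGOALS = {3, 5, 6, 10, 12, 20}  # toy stand-in for open tableau subgoals

def fitness(lemma):
    """A lemma 'closes' every subgoal it divides (toy proxy for usefulness)."""
    return sum(1 for g in SUBGOALS if g % lemma == 0)

def mutate(lemma):
    return max(2, lemma + random.choice([-1, 1]))

def evolve(pop_size=8, generations=30):
    pop = [random.randint(2, 25) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the fittest half
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

The interleaving described in the abstract would alternate rounds of this loop with top-down goal decomposition, so the subgoal set itself changes as lemmas are adopted.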
Lemma Generation for Model Elimination by Combining Top-Down and Bottom-Up Inference
A very promising approach for integrating top-down and bottom-up proof search is the use of bottom-up generated lemmas in top-down provers. When generating lemmas, however, the currently used lemma generation procedures suffer from the well-known problems of forward reasoning methods, e.g., the proof goal is ignored. In order to overcome these problems we propose two relevancy-based lemma generation methods for top-down provers. The first approach employs a bottom-up level saturation procedure controlled by top-down generated patterns which represent promising subgoals. The second approach uses evolutionary search and provides a self-adaptive control of lemma generation and goal decomposition.
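The first approach, bottom-up saturation pruned by top-down goal patterns, can be illustrated with a tiny Horn-clause example. The facts, the hard-coded ancestor rules, and the `goal_pattern` relevancy test are all hypothetical; the point is only that the saturation loop keeps a derived lemma when the goal-side pattern accepts it.

```python
# Hypothetical sketch: bottom-up saturation with the rules
#   anc(X,Y) :- parent(X,Y)
#   anc(X,Z) :- parent(X,Y), anc(Y,Z)
# where only lemmas matching a top-down goal pattern are retained.

FACTS = {("parent", "a", "b"), ("parent", "b", "c")}

def saturate(facts, relevant):
    """Naive level-saturation loop; 'relevant' prunes goal-irrelevant lemmas."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (p, x, y) in derived:
            if p == "parent":
                new.add(("anc", x, y))
            for (q, y2, z) in derived:
                if p == "parent" and q == "anc" and y == y2:
                    new.add(("anc", x, z))
        for lemma in new - derived:
            if relevant(lemma):  # relevancy check from the top-down side
                derived.add(lemma)
                changed = True
    return derived

# Top-down decomposition of a goal anc(a, ?) might yield this pattern:
goal_pattern = lambda f: f[0] != "anc" or f[1] in ("a", "b")

lemmas = saturate(FACTS, goal_pattern)
print(sorted(lemmas))
```

Without the relevancy check the loop would happily derive lemmas about every constant in the database; the pattern restricts saturation to facts that can still contribute to the goal, which is the forward-reasoning problem the abstract targets.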