### Table 3: Summary of Propositional Logic Theorem Provers

1996

"... In PAGE 30: ...1.1.5 Summary of Propositional logic theorem proving. The following Table 3 is a summary of the theorem provers just described. The information is based on readings from [17, 19, 16, 5, 12]... ..."

### Table 1: Consistency checking for theorem provers and model builders

1999

"... In PAGE 10: ... Model building offers a partial solution to this problem: as well as calling the theorem prover with input ¬φ, simultaneously call the model builder with input φ. In practice, this should successfully deal with many of the formulas the theorem prover can't handle, as is shown in Table 1. Here the top row lists possible responses from the theorem prover to ¬φ, while the left-hand column lists possible responses of the model builder to φ.... ..."

Cited by 16
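The parallel prover/model-builder scheme quoted above can be sketched with a toy propositional checker. This is a minimal illustration, not the paper's system: the function names are hypothetical, and exhaustive truth-table enumeration stands in for both a real theorem prover (on ¬φ) and a real model builder (on φ), which in the first-order setting may each fail to terminate.

```python
from itertools import product

def find_model(formula, atoms):
    """Toy model builder: enumerate all truth assignments over `atoms`
    and return the first one satisfying `formula`, or None."""
    for values in product([False, True], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if formula(assignment):
            return assignment
    return None

def consistency(formula, atoms):
    """Combine the two responses as in the cited Table 1: a model of phi
    witnesses consistency; exhaustive failure to find one plays the role
    of the theorem prover succeeding on not-phi."""
    model = find_model(formula, atoms)   # model builder called on phi
    if model is not None:
        return "consistent", model
    return "inconsistent", None          # prover would prove not-phi
```

In the propositional toy both routes are decidable, so one function suffices; the point of the quoted design is that in first-order logic the two tools cover each other's non-terminating cases.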

### Table 4: Performance of the overall best theorem prover on individual classes of the RTE dataset.

2005

"... In PAGE 6: ... We report the raw accuracy and the confidence weighted score (CWS) in Table 3. Table 4 shows the performance of the theorem prover split by RTE example class (as illustrated... In PAGE 7: ...3%.) Interestingly, the performance varies heavily by class (see Table 4), possibly indicating that some classes are inherently more difficult. The baseline accuracy is close to random guessing, and the difference between our system performance and the baseline performance on the test set is statistically significant (p < 0.02).... In PAGE 7: ... Since our logical formulae essentially restate the information in the dependency graph, our abductive inference and learning algorithms are not tied to the logical representation; in particular, the inference algorithm can be modified to work with these graph-based representations, where it can be interpreted as a graph-matching procedure that prefers globally consistent matchings. Table 4 shows that certain classes require more effort in linguistic modeling, and improvements in those classes can lead to great overall gains in performance. The current representation fails to capture some important interactions in its dependencies (e.... ..."

Cited by 14

### Table 9: Theoretical Criteria and Automated Reasoning Tools (columns: Criterion, Theorem Prover, Model Generator)

1998

"... In PAGE 9: ... In this paper we outlined a theory building methodology that is based on the use of standard first-order logic, and of existing automated reasoning tools. The logic provides us with a number of criteria that can be tested for using computational tools, such as consistency, soundness, falsifiability, and contingency (see Table 9). In principle, each criterion can be tested for by both theorem proving and model generation strategies; for example, a theorem is also sound if it holds in all models of the premise set, or a theory is consistent if the deductive closure of the premise set does not contain a contradiction.... ..."

Cited by 4
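The dual-strategy claim in the excerpt above (each criterion testable by either theorem proving or model generation) can be illustrated for validity on a toy propositional fragment. The function names are hypothetical, and truth-table enumeration stands in for both tools:

```python
from itertools import product

def assignments(atoms):
    """All truth assignments over the given atoms."""
    for values in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, values))

def valid_by_proving(formula, atoms):
    """'Theorem prover' route: phi is valid iff it holds under every
    assignment (exhaustive checking stands in for proof search)."""
    return all(formula(a) for a in assignments(atoms))

def valid_by_model_generation(formula, atoms):
    """'Model generator' route: phi is valid iff not-phi has no model."""
    return not any(not formula(a) for a in assignments(atoms))
```

Both routes agree by construction here; the methodological point in the cited paper is that for first-order theories the two strategies are independent tools that can cross-check each other.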

### Table 1: How Theorem Provers Store Theorems

This paper proposes a method by which a theorem prover can use digital signatures to detect any modifications made to a theorem while outside the system. Using this method it is no longer necessary to store theorems together with their proofs in order to ensure the security of the theorem prover. Furthermore it is perfectly safe to store theorems in a documented format. These two features make the method an ideal basis for exchanging results between different proof tools.

1996

"... In PAGE 3: ... Furthermore, this technique cannot be used to share results with other systems, since proofs can usually be checked only by the system in which they were developed. Table 1 presents a summary of the methods by which various theorem provers store and reuse results.... ..."
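The scheme described in the abstract above can be sketched as follows. This is an assumption-laden sketch, not the paper's construction: it uses an HMAC as a stand-in for the paper's digital signatures, and `SECRET_KEY`, `export_theorem`, and `import_theorem` are hypothetical names.

```python
import hashlib
import hmac

# Hypothetical key known only to the theorem prover itself.
SECRET_KEY = b"prover-internal-key"

def export_theorem(statement: str) -> tuple[str, str]:
    """Attach an authentication tag before the theorem leaves the system,
    so the proof itself need not be stored alongside it."""
    tag = hmac.new(SECRET_KEY, statement.encode(), hashlib.sha256).hexdigest()
    return statement, tag

def import_theorem(statement: str, tag: str) -> bool:
    """Re-accept a theorem only if its tag still matches, i.e. the
    statement was not modified while outside the system."""
    expected = hmac.new(SECRET_KEY, statement.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

A shared-secret MAC only works within one system; for the cross-tool exchange the abstract mentions, a public-key signature (so other tools can verify without the signing key) would be the natural fit.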

### Table 1: Some sample proof times for areas of different sizes with square tiles. Absolute times could probably be improved by using a propositional theorem prover.

Cited by 32

### Table 2. Race-checking benchmarks. LOC is lines of code. The number of outer iterations is the number of times the environment assumptions are reset to false. The number of inner iterations is the number of times the environment assumptions are updated after the last time they are reset to false. Theorem prover queries is the total number of theorem prover queries. Time is the total running time for the tool in seconds on a 700 MHz Linux PC with 1 GB RAM.

2003

"... In PAGE 12: ... In each case, our tool was able to detect the absence or presence of races correctly in a few seconds. Table 2 shows the results of running Blast on the three benchmarks. Device drivers.... ..."

Cited by 40