### Table 3: Summary of Propositional Logic Theorem Provers

1996

"... In PAGE 30: ... 1.1.5 Summary of propositional logic theorem proving. The following Table 3 is a summary of the theorem provers just described. The information is based on readings from [17, 19, 16, 5, 12]... ..."

### Table 9: Theoretical Criteria And Automated Reasoning Tools. Criterion Theorem Prover Model Generator

1998

"... In PAGE 9: ... In this paper we outlined a theory building methodology that is based on the use of standard first-order logic, and of existing automated reasoning tools. The logic provides us with a number of criteria that can be tested for using computational tools, such as consistency, soundness, falsifiability, and contingency (see Table 9). In principle, each criterion can be tested for by both theorem proving and model generation strategies, for example, a theorem is also sound if it holds in all models of the premise set, or a theory is consistent if the deductive closure of the premise set does not contain a contradiction.... ..."

Cited by 4
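The excerpt's two testing strategies (theorem proving vs. model generation) can be illustrated with a hedged propositional sketch; the paper itself works in full first-order logic, and all names below are illustrative, not the authors' tools.

```python
from itertools import product

def models(atoms, premises):
    """Model generation: enumerate assignments satisfying every premise."""
    for values in product([False, True], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if all(p(assignment) for p in premises):
            yield assignment

def is_consistent(atoms, premises):
    """Consistent iff the premise set has at least one model."""
    return next(models(atoms, premises), None) is not None

def is_theorem(atoms, premises, conjecture):
    """Theorem-proving view: the conjecture holds in all models of the premises."""
    return all(conjecture(m) for m in models(atoms, premises))

# Premises: p and (p -> q); conjecture: q.
premises = [lambda m: m["p"], lambda m: (not m["p"]) or m["q"]]
print(is_consistent(["p", "q"], premises))                 # True
print(is_theorem(["p", "q"], premises, lambda m: m["q"]))  # True
```

As the excerpt notes, both routes answer the same question: consistency is witnessed by finding a model, and theoremhood by the absence of a countermodel.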

### Table 4: Performance of the overall best theorem prover on individual classes of the RTE dataset.

2005

"... In PAGE 6: ... We report the raw accuracy and the confidence weighted score (CWS) in Table 3. Table 4 shows the performance of the theorem prover split by RTE example class (as illustrated... In PAGE 7: ...3%.) Interestingly, the performance varies heavily by class (see Table 4), possibly indicating that some classes are inherently more difficult. The baseline accuracy is close to random guessing, and the difference between our system performance and the baseline performance on the test set is statistically significant (p < 0.02).... In PAGE 7: ... Since our logical formulae essentially restate the information in the dependency graph, our abductive inference and learning algorithms are not tied to the logical representation; in particular, the inference algorithm can be modified to work with these graph-based representations, where it can be interpreted as a graph-matching procedure that prefers globally consistent matchings. Table 4 shows that certain classes require more effort in linguistic modeling, and improvements in those classes can lead to great overall gains in performance. The current representation fails to capture some important interactions in its dependencies (e.... ..."

Cited by 14

### Table 1: Consistency checking for theorem provers and model builders

1999

"... In PAGE 10: ... Model building offers a partial solution to this problem: as well as calling the theorem prover with input ¬φ, simultaneously call the model builder with input φ. In practice, this should successfully deal with many of the formulas the theorem prover can't handle, as is shown in Table 1. Here the top row lists possible responses from the theorem prover to ¬φ, while the left hand column lists possible responses of the model builder to φ.... ..."

Cited by 16
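A hypothetical reconstruction of the combination scheme behind this table: the theorem prover is run on ¬φ while the model builder is run on φ, and their two responses are merged into one verdict. The response strings and the function name below are illustrative assumptions, not the paper's actual table entries.

```python
def combine(prover_on_not_phi, builder_on_phi):
    """Merge the two tools' responses into one consistency verdict (sketch)."""
    if prover_on_not_phi == "proved":
        return "inconsistent"   # ¬φ is a theorem, so φ has no model
    if builder_on_phi == "model found":
        return "consistent"     # a concrete model of φ witnesses consistency
    return "unknown"            # both tools gave up or timed out

print(combine("gave up", "model found"))  # consistent
print(combine("proved", "gave up"))       # inconsistent
```

The point of the scheme is complementarity: a prover that times out on ¬φ can still be overruled by a model builder that terminates on φ, and vice versa.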

### Table 1: Example of Resolution Principle.

"... In PAGE 3: ... The last property can be extended to first-order formulas by substituting appropriate terms for the variables of the first-order predicates, so that a literal and its complement can be derived, and then the ground resolution rule can be applied [4]. Table 1 presents a simple example of the application of the resolution principle. 3 Classification of Automated Theorem Provers There is one classification schema for automated theorem provers, which is based on the existence or not of interaction with the user [3].... ..."
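The ground resolution rule the excerpt describes can be sketched in a few lines: from clauses C1 ∪ {L} and C2 ∪ {¬L}, derive the resolvent C1 ∪ C2. The representation below (clauses as frozensets of `(atom, polarity)` literals) is an illustrative choice, not the paper's.

```python
def resolve(c1, c2):
    """Return every resolvent obtainable from the two clauses."""
    resolvents = []
    for (atom, polarity) in c1:
        if (atom, not polarity) in c2:
            # Drop the complementary pair and union the remainders.
            r = (c1 - {(atom, polarity)}) | (c2 - {(atom, not polarity)})
            resolvents.append(frozenset(r))
    return resolvents

# {p, q} and {¬p, r} resolve on p, yielding {q, r}.
c1 = frozenset({("p", True), ("q", True)})
c2 = frozenset({("p", False), ("r", True)})
assert resolve(c1, c2) == [frozenset({("q", True), ("r", True)})]

# {p} and {¬p} resolve to the empty clause, signalling unsatisfiability.
assert resolve(frozenset({("p", True)}), frozenset({("p", False)})) == [frozenset()]
```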

### Table 1: How Theorem Provers Store Theorems

This paper proposes a method by which a theorem prover can use digital signatures to detect any modifications made to a theorem while outside the system. Using this method it is no longer necessary to store theorems together with their proofs in order to ensure the security of the theorem prover. Furthermore it is perfectly safe to store theorems in a documented format. These two features make the method an ideal basis for exchanging results between different proof tools.

1996

"... In PAGE 3: ... Furthermore, this technique cannot be used to share results with other systems, since proofs can usually be checked only by the system in which they were developed. Table 1 presents a summary of the methods by which various theorem provers store and reuse results.... ..."
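The abstract's idea can be sketched with an HMAC standing in for a real digital signature: the prover tags each theorem it exports so that any modification made outside the system is detected on re-import. The key and function names below are illustrative assumptions, not the paper's API.

```python
import hashlib
import hmac

SYSTEM_KEY = b"prover-secret-key"  # hypothetical key held only by the prover

def export_theorem(statement):
    """Return the theorem text plus an authentication tag."""
    tag = hmac.new(SYSTEM_KEY, statement.encode(), hashlib.sha256).hexdigest()
    return statement, tag

def import_theorem(statement, tag):
    """Accept the theorem only if its tag still matches the text."""
    expected = hmac.new(SYSTEM_KEY, statement.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

thm, tag = export_theorem("|- p -> p")
print(import_theorem(thm, tag))          # True
print(import_theorem("|- p -> q", tag))  # False: tampering is detected
```

Because the tag depends only on the statement, the proof itself need not travel with the theorem, which is exactly the storage saving the abstract claims.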

### Table 4. Experiments with cooperating theorem provers

"... In PAGE 7: ...s in Section 3.2. All in all, we tackled 81 provable problems. Results can be found in Table 4. Results of SPASS, SETHEO using the weighted depth bound (SETHEO wd), and SETHEO using the depth bound (SETHEO d) are displayed in columns 2–4.... In PAGE 7: ... Table 4 reveals the high potential of cooperation. The number of solved problems could be increased, and additionally the runtimes could be decreased.... ..."

### Table 3. An exercise using an external theorem prover

2003

Cited by 6