Results 1–10 of 51
A Survey on Knowledge Compilation
, 1998
"... this paper we survey recent results in knowledge compilation of propositional knowledge bases. We first define and limit the scope of such a technique, then we survey exact and approximate knowledge compilation methods. We include a discussion of compilation for nonmonotonic knowledge bases. Keywor ..."
Abstract

Cited by 100 (3 self)
In this paper we survey recent results in knowledge compilation of propositional knowledge bases. We first define and limit the scope of such a technique, then we survey exact and approximate knowledge compilation methods. We include a discussion of compilation for nonmonotonic knowledge bases. Keywords: Knowledge Representation, Efficiency of Reasoning
Controlled Integrations of the Cut Rule into Connection Tableau Calculi
"... In this paper techniques are developed and compared which increase the inferential power of tableau systems for classical firstorder logic. The mechanisms are formulated in the framework of connection tableaux, which is an amalgamation of the connection method and the tableau calculus, and a genera ..."
Abstract

Cited by 62 (3 self)
In this paper techniques are developed and compared which increase the inferential power of tableau systems for classical first-order logic. The mechanisms are formulated in the framework of connection tableaux, which is an amalgamation of the connection method and the tableau calculus, and a generalization of model elimination. Since connection tableau calculi are among the weakest proof systems with respect to proof compactness, and the (backward) cut rule is not suitable for the first-order case, we study alternative methods for shortening proofs. The techniques we investigate are the folding up and the folding down operations. Folding up represents an efficient way of supporting the basic calculus, which is top-down oriented, with lemmata derived in a bottom-up manner. It is shown that both techniques can also be viewed as controlled integrations of the cut rule. In order to remedy the additional redundancy imported into tableau proof procedures by the new inference rules, we develop and apply an extension of the regularity condition on tableaux and the mechanism of anti-lemmata, which realizes a subsumption concept on tableaux. Using the framework of the theorem prover SETHEO, we have implemented three new proof procedures which overcome the deductive weakness of cut-free tableau systems. Experimental results demonstrate the superiority of the systems with folding up over the cut-free variant and the one with folding down.
First-order proof tactics in higher-order logic theorem provers
 Design and Application of Strategies/Tactics in Higher Order Logics, number NASA/CP-2003-212448 in NASA Technical Reports
, 2003
"... Abstract. In this paper we evaluate the effectiveness of firstorder proof procedures when used as tactics for proving subgoals in a higherorder logic interactive theorem prover. We first motivate why such firstorder proof tactics are useful, and then describe the core integrating technology: an ‘ ..."
Abstract

Cited by 51 (4 self)
In this paper we evaluate the effectiveness of first-order proof procedures when used as tactics for proving subgoals in a higher-order logic interactive theorem prover. We first motivate why such first-order proof tactics are useful, and then describe the core integrating technology: an ‘LCF-style’ logical kernel for clausal first-order logic. This allows the choice of different logical mappings between higher-order logic and first-order logic to be used depending on the subgoal, and also enables several different first-order proof procedures to cooperate on constructing the proof. This work was carried out using the HOL4 theorem prover; we comment on the ease of transferring the technology to other higher-order logic theorem provers.
Distributed Prefetch-buffer/Cache Design for High Performance Memory Systems
"... Microprocessor execution speeds are improving at a rate of 50%80% per year while DRAM access times are improving at a much lower rate of 5%10% per year. Computer systems are rapidly approaching the point at which overall system performance is determined not by the speed of the CPU but by the memor ..."
Abstract

Cited by 30 (1 self)
Microprocessor execution speeds are improving at a rate of 50%–80% per year while DRAM access times are improving at a much lower rate of 5%–10% per year. Computer systems are rapidly approaching the point at which overall system performance is determined not by the speed of the CPU but by the memory system speed. We present a high performance memory system architecture that overcomes the growing speed disparity between high performance microprocessors and current generation DRAMs. A novel prediction and prefetching technique is combined with a distributed cache architecture to build a high performance memory system. We use a table-based prediction scheme with a prediction cache to prefetch data from the on-chip DRAM array to an on-chip SRAM prefetch buffer. By prefetching data we are able to hide the large latency associated with DRAM access and cycle times. Our experiments show that with a small (32 KB) prediction cache we can get an effective main memory access time that is close to the access time of larger secondary caches.
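The table-based prediction scheme described in this abstract can be illustrated with a toy software model: a correlation table that maps each address to the address that followed it last time, feeding a small prefetch buffer. This is a minimal sketch under assumed parameters (one predicted successor per address, a set-based buffer), not the paper's hardware design:

```python
# Sketch of a table-based next-address predictor with a prefetch buffer.
# Sizes and replacement policy are illustrative assumptions.

class PrefetchPredictor:
    def __init__(self, buffer_size=32):
        self.table = {}          # prediction cache: address -> last observed successor
        self.buffer = set()      # models the SRAM prefetch buffer
        self.buffer_size = buffer_size
        self.prev = None
        self.hits = 0            # accesses served from the prefetch buffer
        self.misses = 0          # accesses paying full DRAM latency

    def access(self, addr):
        # Serve from the prefetch buffer if this address was predicted.
        if addr in self.buffer:
            self.hits += 1
            self.buffer.discard(addr)
        else:
            self.misses += 1
        # Learn the (previous -> current) address correlation.
        if self.prev is not None:
            self.table[self.prev] = addr
        # Prefetch the predicted successor of the current address.
        nxt = self.table.get(addr)
        if nxt is not None and len(self.buffer) < self.buffer_size:
            self.buffer.add(nxt)
        self.prev = addr
```

On a repeating access pattern the first pass trains the table and later passes hit in the prefetch buffer, which is the latency-hiding effect the abstract describes.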
Model Elimination without Contrapositives and its Application to PTTP
 Proceedings of CADE-12, Springer LNAI 814
, 1994
"... We give modifications of model elimination which do not necessitate the use of contrapositives. These restart model elimination calculi are proven sound and complete and their implementation by PTTP is depicted. The corresponding proof procedures are evaluated by a number of runtime experiments and ..."
Abstract

Cited by 22 (8 self)
We give modifications of model elimination which do not necessitate the use of contrapositives. These restart model elimination calculi are proven sound and complete, and their implementation by PTTP is described. The corresponding proof procedures are evaluated in a number of runtime experiments and compared to other well-known provers. Finally, we relate our results to other calculi, namely the connection method, the modified problem reduction format, and Near-Horn Prolog.
Optimizing proof search in model elimination
 13th International Conference on Automated Deduction, volume 1104 of Lecture Notes in Computer Science
, 1996
"... Many implementations of model elimination perform proof search by iteratively increasing a bound on the total size of the proof. We propose an optimized version of this search mode using a simple divideandconquer refinement. Optimized and unoptimized modes are compared, together with depthbounded ..."
Abstract

Cited by 20 (2 self)
Many implementations of model elimination perform proof search by iteratively increasing a bound on the total size of the proof. We propose an optimized version of this search mode using a simple divide-and-conquer refinement. Optimized and unoptimized modes are compared, together with depth-bounded and best-first search, over the entire TPTP problem library. The optimized size-bounded mode seems to be the overall winner, but for each strategy there are problems on which it performs best. Some attempt is made to analyze why. We emphasize that our optimization, and other implementation techniques like caching, are rather general: they are not dependent on the details of model elimination, or even on the fact that the search is concerned with theorem proving. As such, we believe that this study is a useful complement to research on extending the model elimination calculus.
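The baseline search mode this abstract starts from, iterative deepening on total proof size, can be sketched on a toy Horn-clause backward-chainer. The rule set is invented for illustration, and the sketch shows only the unoptimized size-bounded mode, not the paper's divide-and-conquer refinement:

```python
# Toy Horn-clause prover searching by iterated deepening on proof size
# (total number of rule applications). RULES is an illustrative assumption.

RULES = {
    "a": [["b", "c"]],   # a :- b, c.
    "b": [["d"]],        # b :- d.
    "c": [[]],           # c.  (fact)
    "d": [[]],           # d.  (fact)
}

def prove(goals, size):
    """True if all goals are provable within the given size budget."""
    if not goals:
        return True
    if size <= 0:
        return False
    g, rest = goals[0], goals[1:]
    for body in RULES.get(g, []):
        # Applying this rule costs one size unit; its subgoals and the
        # remaining goals share what is left of the budget.
        if prove(body + rest, size - 1):
            return True
    return False

def id_prove(goal, max_size=32):
    # Iteratively raise the size bound until a proof fits.
    for n in range(1, max_size + 1):
        if prove([goal], n):
            return n
    return None
```

The refinement the paper proposes concerns how such a budget is split between the first subgoal and the rest, so that work is not redone across deepening levels.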
Model Elimination without Contrapositives
, 1994
"... We present modifications of model elimination which do not necessitate the use of contrapositives. These restart model elimination calculi are proven sound and complete. The corresponding proof procedures are evaluated by a number of runtime experiments and they are compared to other well known pro ..."
Abstract

Cited by 16 (6 self)
We present modifications of model elimination which do not necessitate the use of contrapositives. These restart model elimination calculi are proven sound and complete. The corresponding proof procedures are evaluated in a number of runtime experiments and compared to other well-known provers. Finally, we relate our results to other calculi, namely the connection method, the modified problem reduction format, and Near-Horn Prolog.
The use of lemmas in the model elimination procedure
 Journal of Automated Reasoning
, 1997
"... When the Model Elimination (ME) procedure was rst proposed, a notion of lemma was put forth as a promising augmentation to the basic complete proof procedure. Here the lemmas that are used are also discovered by the procedure in the same proof run. Several implementations of ME now exist but only a ..."
Abstract

Cited by 14 (0 self)
When the Model Elimination (ME) procedure was first proposed, a notion of lemma was put forth as a promising augmentation to the basic complete proof procedure. Here the lemmas that are used are also discovered by the procedure in the same proof run. Several implementations of ME now exist, but only a 1970's implementation explicitly examined this lemma mechanism, with indifferent results. We report on the successful use of lemmas using the METEOR implementation of ME. Not only does the lemma device permit METEOR to obtain proofs not otherwise obtainable by METEOR, or any other ME prover not using lemmas, but some well-known challenge problems are solved. We discuss several of these more difficult problems, including two challenge problems for uniform general-purpose provers, where METEOR was first in obtaining the proof. The problems are not selected simply to show off the lemma device, but rather to understand it better. Thus, we choose problems with widely different characteristics, including one where very few lemmas are created automatically, the opposite of normal behavior. This selection points out the potential of, and the problems with, lemma use. The biggest problem normally is the selection of appropriate lemmas to retain from the large number generated.
A Caching Mechanism for Semantic Web Service Discovery
 In Proc. of the 6th International Semantic Web Conference (ISWC 2007), Busan, South Korea
, 2007
"... Abstract. The discovery of suitable Web services for a given task is one of the central operations in Serviceoriented Architectures (SOA), and research on Semantic Web services (SWS) aims at automating this step. For the large amount of available Web services that can be expected in realworld sett ..."
Abstract

Cited by 13 (3 self)
The discovery of suitable Web services for a given task is one of the central operations in Service-oriented Architectures (SOA), and research on Semantic Web services (SWS) aims at automating this step. For the large number of available Web services that can be expected in real-world settings, the computational costs of automated discovery based on semantic matchmaking become important. To make a discovery engine a reliable software component, we must thus aim at minimizing both the mean and the variance of the duration of the discovery task. To this end, we present an extension for discovery engines in SWS environments that exploits structural knowledge and previous discovery results to reduce the search space of subsequent discovery operations. Our prototype implementation shows significant improvements when applied to the Stanford SWS Challenge scenario and dataset.
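The core idea of reusing previous discovery results to shrink the search space can be sketched with a simplified model in which capabilities are plain sets. This is an assumption for illustration only; the paper's matchmaking is semantic, over ontologies:

```python
# Sketch of result reuse in service discovery: a stricter request can only
# match a subset of what a looser, already-answered request matched, so
# cached results prune the candidate set. The set-based capability model
# and all names here are illustrative assumptions.

class DiscoveryCache:
    def __init__(self, services):
        self.services = services   # service name -> set of offered capabilities
        self.cache = {}            # frozenset(request) -> sorted matching names

    def discover(self, required):
        req = frozenset(required)
        if req in self.cache:
            return self.cache[req]             # identical request: no matchmaking
        candidates = set(self.services)
        for prev, matches in self.cache.items():
            if prev <= req:                    # new request refines a cached one:
                candidates &= set(matches)     # only its matches can still qualify
        result = sorted(n for n in candidates
                        if req <= self.services[n])
        self.cache[req] = result
        return result
```

Only the reduced candidate set is re-checked for a refined request, which is the search-space reduction the abstract refers to.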
SETHEO V3.2: Recent Developments (System Abstract)
 12th Int. Conf. on Automated Deduction, CADE-12, Springer LNCS 814
, 1994
"... ..."
(Show Context)