Results 1–10 of 22
Ontology Translation on the Semantic Web
Journal of Data Semantics, 2003
Abstract
Cited by 62 (17 self)
Abstract. Ontologies are a crucial tool for formally specifying the vocabulary and relationships of concepts used on the Semantic Web. In order to share information, agents that use different vocabularies must be able to translate data from one ontological framework to another. Ontology translation is required when translating datasets, generating ontology extensions, and querying through different ontologies. OntoMerge, an online system for ontology merging and automated reasoning, can implement ontology translation with inputs and outputs in OWL or other web languages. The merge of two related ontologies is obtained by taking the union of the concepts and the axioms defining them, and then adding bridging axioms that relate their concepts. The resulting merged ontology then serves as an inferential medium within which translation can occur. Our internal representation, Web-PDDL, is a strongly typed first-order logic language for web applications. Using a uniform notation for all problems allows us to factor out syntactic and semantic translation problems, and focus on the latter. Syntactic translation is done by an automatic translator between Web-PDDL and OWL or other web languages. Semantic translation is implemented using an inference engine (OntoEngine), which processes assertions and queries in Web-PDDL syntax, running in either a data-driven (forward chaining) or demand-driven (backward chaining) way.
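The data-driven (forward chaining) mode described above can be illustrated with a minimal sketch: facts in a source vocabulary are closed under simple bridging axioms until a fixed point is reached. The rule and fact representation here is an assumption for illustration, not OntoMerge's actual Web-PDDL format, and the ontology names are hypothetical.

```python
# Sketch of data-driven semantic translation over bridging axioms.
# NOTE: the (predicate, argument) fact format and one-premise rules
# are illustrative assumptions, not the Web-PDDL representation.

def forward_chain(facts, rules):
    """Apply bridging rules until no new facts are produced.

    facts: set of (predicate, argument) pairs in the source ontology.
    rules: list of (source_pred, target_pred) pairs acting as simple
           bridging axioms: source_pred(x) => target_pred(x).
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for src, dst in rules:
            for pred, arg in list(derived):
                if pred == src and (dst, arg) not in derived:
                    derived.add((dst, arg))
                    changed = True
    return derived

# A bridging axiom relating the "same" concept in two vocabularies.
rules = [("onto1:Person", "onto2:Human")]
facts = {("onto1:Person", "alice")}
print(forward_chain(facts, rules))
```

Running the rules to a fixed point mirrors the forward-chaining mode; a demand-driven engine would instead start from a query and chain backward through the same bridging axioms.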
33 Basic Test Problems: A Practical Evaluation of Some Paramodulation Strategies
1996
Abstract
Cited by 24 (5 self)
Introduction. Many researchers who study the theoretical aspects of inference systems believe that if inference rule A is complete and more restrictive than inference rule B, then the use of A will lead more quickly to proofs than will the use of B. The literature contains statements of the sort "our rule is complete and it heavily prunes the search space; therefore it is efficient". These positions are highly questionable and indicate that the authors have little or no experience with the practical use of automated inference systems. Restrictive rules (1) can block short, easy-to-find proofs, (2) can block proofs involving simple clauses, the type of clause on which many practical searches focus, (3) can require weakening of redundancy control such as subsumption and demodulation, and (4) can require the use of complex checks in deciding whether such rules should be applied. The only way to determ...
A Constraint-based Partial Evaluator for Functional Logic Programs and its Application
1998
Abstract
Cited by 12 (0 self)
The aim of this work is the development and application of a partial evaluation procedure for rewriting-based functional logic programs. Functional logic programming languages unite the two main declarative programming paradigms. The rewriting-based computational model extends traditional functional programming languages by incorporating logical features, including logical variables and built-in search, into its framework. This work is the first to address the automatic specialisation of these functional logic programs. In particular, a theoretical framework for the partial evaluation of rewriting-based functional logic programs is defined and its correctness is established. Then, an algorithm is formalised which incorporates the theoretical framework for the procedure in a fully automatic technique. Constraint solving is used to represent additional information about the terms encountered during the transformation in order to improve the efficiency and size of the residual programs. ...
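The core idea of partial evaluation (specialising a program with respect to statically known inputs so the residual program does less work at run time) can be sketched in a few lines. This is a deliberately minimal first-order example, not the rewriting-based framework developed in the thesis.

```python
# Minimal partial-evaluation sketch: specialise pow(x, n) for a known
# exponent n, unrolling the recursion on n away at specialisation time.

def specialize_power(n):
    """Return a residual function equivalent to lambda x: x ** n."""
    if n == 0:
        return lambda x: 1
    rest = specialize_power(n - 1)          # recursion happens here, once
    return lambda x: x * rest(x)            # residual code only multiplies

cube = specialize_power(3)                  # residual program for n = 3
print(cube(2))  # → 8
```

The residual `cube` contains no test on `n` at all; the control decisions were taken during specialisation, which is the efficiency gain the abstract's "residual programs" refers to.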
Aspects of Computational Logic
1998
Abstract
Cited by 9 (0 self)
In mathematics there exist powerful symbolic computation packages which have drawn considerable attention, also because of their easy-to-use interfaces and graphical capabilities. In computer logic, however, there were mainly complex tools for experts or purely didactic proof assistants. The Logics Workbench LWB is an attempt to fill this gap in the area of propositional logics. On the one hand, the LWB is intended to be used as an educational tool, especially for non-classical logics, and on the other hand as a programmable logic platform for more experienced users. The present thesis comprises three major parts: a system design, an empirical study, and a theoretical contribution. (a) Besides a general introduction to the LWB, we will present design aspects of its key components in the first part: the kernel, the parser, the programming language and the user interface. It is shown that the adopted solutions are adequate for doing logic on the computer. The design goal is to provid...
The Hot List Strategy
1997
Abstract
Cited by 9 (4 self)
Experimentation strongly suggests that, for attacking deep questions and hard problems with the assistance of an automated reasoning program, the more effective paradigms rely on the retention of deduced information. A significant obstacle ordinarily presented by such a paradigm is the deduction and retention of one or more needed conclusions whose complexity sharply delays their consideration. To mitigate the severity of the cited obstacle, I formulated and feature in this article the hot list strategy. The hot list strategy asks the researcher to choose, usually from among the input statements characterizing the problem under study, one or more statements that are conjectured to play a key role for assignment completion. The chosen statements, conjectured to merit revisiting again and again, are placed in an input list of statements, called the hot list. When an automated reasoning program has decided to retain a new conclusion C, before any other statement is chosen to initiat...
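The control flow the abstract describes (every newly retained conclusion is immediately combined with the hot-list statements, and with a heat parameter above one, the results are themselves immediately revisited) can be sketched schematically. This is not OTTER's implementation; clauses are opaque values and `infer` stands in for completing an inference-rule application with a hot-list statement.

```python
# Schematic sketch of the hot list strategy's control flow.
# `infer(c, hot)` is a stand-in for an inference rule completed by a
# hot-list statement; it returns a new clause or None.

def retain_with_hot_list(new_clause, hot_list, infer, heat=1, retained=None):
    """Retain a conclusion, then immediately visit the hot list with it.

    With heat > 1, conclusions produced by a hot-list visit are
    themselves immediately revisited, up to `heat` levels deep.
    """
    if retained is None:
        retained = []
    retained.append(new_clause)
    if heat > 0:
        for hot in hot_list:
            child = infer(new_clause, hot)
            if child is not None and child not in retained:
                retain_with_hot_list(child, hot_list, infer,
                                     heat - 1, retained)
    return retained

# Toy inference: "resolving" two strings concatenates them, capped in length.
def toy_infer(clause, hot):
    merged = clause + hot
    return merged if len(merged) <= 3 else None

print(retain_with_hot_list("a", ["b"], toy_infer, heat=2))  # → ['a', 'ab', 'abb']
```

The point of the strategy is ordering, not new deductions: the same conclusions would eventually be drawn anyway, but the hot-list visits pull them forward in the search.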
Conquering the Meredith Single Axiom
J. Automated Reasoning, 2000
Abstract
Cited by 8 (5 self)
For more than three and one-half decades beginning in the early 1960s, a heavy emphasis on proof finding has been a key component of the Argonne paradigm, whose use has directly led to significant advances in automated reasoning and important contributions to mathematics and logic. The theorems that have served well range from the trivial to the deep, even including some that corresponded to open questions. Often the paradigm asks for a theorem whose proof is in hand but that cannot be obtained in a fully automated manner by the program in use. The theorem whose hypothesis consists solely of the Meredith single axiom for two-valued sentential (or propositional) calculus and whose conclusion is the Łukasiewicz three-axiom system for that area of formal logic was just such a theorem. Featured in this article is the methodology that enabled the program OTTER to find the first fully automated proof of the cited theorem, a proof with the intriguing property that none of its steps...
Automating the search for elegant proofs
 J. Automated Reasoning
Abstract
Cited by 8 (5 self)
The research reported in this article was spawned by a colleague’s request to find an elegant proof (of a theorem from Boolean algebra) to replace his proof consisting of 816 deduced steps. The request was met by finding a proof consisting of 100 deduced steps. The methodology used to obtain the far shorter proof is presented in detail through a sequence of experiments. Although clearly not an algorithm, the methodology is sufficiently general to enable its use for seeking elegant proofs regardless of the domain of study. In addition to (usually) being more elegant, shorter proofs often provide the needed path to constructing a more efficient circuit, a more effective algorithm, and the like. The methodology relies heavily on the assistance of McCune’s automated reasoning program OTTER. Of the aspects of proof elegance, the main focus here is on proof length, with brief attention paid to the type of term present, the number of variables required, and the complexity of deduced steps. The methodology is iterative, relying heavily on the use of three strategies: the resonance strategy, the hot list strategy, and McCune’s ratio strategy. These strategies, as well as other features on which the methodology relies, do exhibit tendencies that can be exploited in the search for shorter proofs and for other objectives. To provide some insight regarding the value of the methodology, I discuss its successful application to ...
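The iterative shape of such a proof-shortening methodology (feed the steps of the best known proof back into the next search as guidance, and keep a candidate proof only if it is strictly shorter) can be sketched abstractly. Here `search` is a stand-in for a run of an OTTER-like prover and is purely an assumption of this sketch; the loop structure, not the prover, is the point.

```python
# Schematic of an iterative shorten-a-proof loop: the best known proof's
# steps guide the next search (e.g. as resonators), and only strictly
# shorter proofs are kept. `search` is a hypothetical prover interface.

def shorten(initial_proof, search, rounds=5):
    """Iteratively search for shorter proofs, seeded by the best so far."""
    best = initial_proof
    for _ in range(rounds):
        candidate = search(resonators=best)   # guide search with best proof
        if candidate is not None and len(candidate) < len(best):
            best = candidate                  # keep strictly shorter proofs only
    return best

# Toy "prover": each run trims the last step while the proof stays long.
def toy_search(resonators):
    return resonators[:-1] if len(resonators) > 3 else None

print(shorten(list(range(8)), toy_search))  # → [0, 1, 2]
```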
Computers, Reasoning and Mathematical Practice
Abstract
Cited by 6 (2 self)
Abstraction in itself is not the goal: for Whitehead [117], "it is the large generalisation, limited by a happy particularity, which is the fruitful conception." As an example consider the theorem in ring theory which states that if R is a ring, f(x) is a polynomial over R and f(r) = 0 for every element r of R, then R is commutative. Special cases of this, for example f(x) is x^2 - x or x^3 - x, can be given a first-order proof in a few lines of symbol manipulation. The usual proof of the general result [20] (which takes a semester's postgraduate course to develop from scratch) is a corollary of other results: we prove that rings satisfying the condition are semisimple artinian, apply a theorem which shows that all such rings are matrix rings over division rings, and eventually obtain the result by showing that all finite division rings are fields, and hence commutative. This displays von Neumann's architectural qualities: it is "deep" in a way in which the symbol manipulati...
The power of combining resonance with heat
J. Automated Reasoning, 1996
Abstract
Cited by 6 (5 self)
In this article, I present experimental evidence of the value of combining two strategies each of which has proved powerful in various contexts. The resonance strategy gives preference (for directing a program’s reasoning) to equations or formulas that have the same shape (ignoring variables) as one of the patterns supplied by the researcher to be used as a resonator. The hot list strategy rearranges the order in which conclusions are drawn, the rearranging caused by immediately visiting and, depending on the value of the heat parameter, even immediately revisiting a set of input statements chosen by the researcher; the chosen statements are used to complete applications of inference rules rather than to initiate them. Combining these two strategies often enables an automated reasoning program to attack deep questions and hard problems with far more effectiveness than using either alone. The use of this combination in the context of cursory proof checking produced most unexpected and satisfying results, as I show here. I present the material (including commentary) in the spirit of excerpts from an experimenter’s notebook, thus meeting the frequent request to illustrate how a researcher can make wise choices from among the numerous options offered by McCune’s automated reasoning program OTTER. I include challenges and topics for research and, to aid the researcher, in the Appendix a sample input
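The "same shape (ignoring variables)" test at the heart of the resonance strategy can be sketched directly: two formulas resonate if they become identical once every variable is replaced by a wildcard. The term representation (nested tuples, with variables as names beginning with x, y, or z) is an assumption of this sketch, not OTTER's syntax.

```python
# Sketch of the resonance test: a formula matches a resonator pattern
# if the two have the same shape once variables are made interchangeable.
# ASSUMPTION: terms are nested tuples; a leaf is a variable iff its name
# starts with 'x', 'y', or 'z' (illustrative convention only).

def shape(term):
    """Replace every variable with a wildcard, keeping functors intact."""
    if isinstance(term, tuple):
        return tuple(shape(t) for t in term)
    return "*" if term[0] in "xyz" else term

def resonates(formula, resonators):
    """True if the formula has the same shape as some resonator pattern."""
    return any(shape(formula) == shape(r) for r in resonators)

# The resonator i(x, i(y, x)); resonance gives such formulas preference.
resonator = ("i", "x", ("i", "y", "x"))
print(resonates(("i", "y", ("i", "z", "y")), [resonator]))  # same shape
print(resonates(("i", "y", ("i", "y", "z")), [resonator]))  # also same shape
```

Note that the second formula is not an instance of the resonator, yet it still resonates: the strategy deliberately ignores which variable occurs where, keying only on shape, which is what makes it a preference heuristic rather than a matching rule.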