Recognising textual entailment with logical inference
 In EMNLP-05
, 2005
Abstract

Cited by 57 (0 self)
We use logical inference techniques for recognising textual entailment. As the performance of theorem proving turns out to be highly dependent on background knowledge that is not readily available, we incorporate model building, a technique borrowed from automated reasoning, and show that it is a useful, robust method for approximating entailment. Finally, we use machine learning to combine these deep semantic analysis techniques with simple shallow word overlap; the resulting hybrid model achieves high accuracy on the RTE test set, given the state of the art. Our results also show that the different techniques that we employ perform very differently on some of the subsets of the RTE corpus, and as a result it is useful to use the nature of the dataset as a feature.
Towards wide-coverage semantic interpretation
 In Proceedings of the Sixth International Workshop on Computational Semantics (IWCS-6)
, 2005
Abstract

Cited by 46 (5 self)
Wide-coverage and robust NLP techniques always seemed to go hand in hand with shallow analyses. This was certainly true a couple of years ago, ...
Nitpick: A counterexample generator for higher-order logic based on a relational model finder (Extended Abstract)
 In TAP 2009: Short Papers, ETH
, 2009
Relational analysis of algebraic datatypes
 In Joint 10th European Software Engineering Conference (ESEC) and 13th ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE)
, 2005
Abstract

Cited by 20 (2 self)
We present a technique that enables the use of finite model finding to check the satisfiability of certain formulas whose intended models are infinite. Such formulas arise when using the language of sets and relations to reason about structured values such as algebraic datatypes. The key idea of our technique is to identify a natural syntactic class of formulas in relational logic for which reasoning about infinite structures can be reduced to reasoning about finite structures. As a result, when a formula belongs to this class, we can use existing finite model finding tools to check whether the formula holds in the desired infinite model.
On Decision Procedures for Set-Valued Fields
, 2004
Abstract

Cited by 19 (13 self)
An important feature of object-oriented programming languages is the ability to dynamically instantiate user-defined container data structures such as lists, trees, and hash tables. Programs implement such data structures using references to dynamically allocated objects, which allows data structures to store unbounded numbers of objects, but makes reasoning about programs more difficult. Reasoning about object-oriented programs with complex data structures is simplified if data structure operations are specified in terms of abstract sets of objects associated with each data structure. For example, an insertion into a data structure in this approach becomes simply an insertion into a dynamically changing set-valued field of an object, as opposed to a manipulation of a dynamically linked structure linked to the object. In this paper we explore...
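The abstraction described above can be sketched as follows: a container keeps its concrete linked representation alongside an abstract set-valued field, and each operation is specified as a plain set update. This is a minimal illustration under assumed names (`Node`, `LinkedContainer`, `contents`), not the paper's formalism.

```python
# Sketch: specifying a linked container via an abstract set-valued field.
# Class and field names here are hypothetical illustrations.

class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next_node = next_node


class LinkedContainer:
    def __init__(self):
        self.head = None       # concrete dynamically linked structure
        self.contents = set()  # abstract set-valued field (specification view)

    def insert(self, value):
        # Concretely: manipulate the linked structure.
        if value not in self.contents:
            self.head = Node(value, self.head)
        # Abstractly: the operation is simply a set insertion.
        self.contents = self.contents | {value}
```

Reasoning about `insert` then reduces to reasoning about `contents ∪ {value}` rather than about pointer updates in the linked structure.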
Combining Shallow and Deep NLP Methods for Recognizing Textual Entailment
 In Proc. of the PASCAL RTE Challenge
, 2005
Abstract

Cited by 16 (0 self)
We combine two methods to tackle the textual entailment challenge: a shallow method based on word overlap and a deep method using theorem proving techniques. We use a machine learning technique to combine features derived from both methods. We submitted two runs, one using all features, yielding an accuracy of 0.5625, and one using only the shallow feature, with an accuracy of 0.5550. Our method currently suffers from a lack of background knowledge and future work will be focussed on that area.
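The shallow word-overlap feature mentioned in this abstract can be sketched as below. The whitespace tokenisation and the threshold value are assumptions of this sketch, not the authors' implementation.

```python
# Sketch of a shallow word-overlap feature for textual entailment.

def word_overlap(text: str, hypothesis: str) -> float:
    """Fraction of hypothesis tokens that also occur in the text."""
    text_tokens = set(text.lower().split())
    hyp_tokens = set(hypothesis.lower().split())
    if not hyp_tokens:
        return 0.0
    return len(text_tokens & hyp_tokens) / len(hyp_tokens)


def shallow_entails(text: str, hypothesis: str, threshold: float = 0.75) -> bool:
    # Predict entailment when most hypothesis tokens are covered by the text.
    # The threshold is an illustrative assumption.
    return word_overlap(text, hypothesis) >= threshold
```

In the hybrid setup the abstract describes, this overlap score would be one feature handed to a machine-learned combiner alongside theorem-proving outcomes.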
Reasoning support for expressive ontology languages using a theorem prover
 In FoIKS
, 2006
Abstract

Cited by 11 (0 self)
Abstract. It is claimed in [45] that first-order theorem provers are not efficient for reasoning with ontologies based on description logics compared to specialised description logic reasoners. However, the development of more expressive ontology languages requires the use of theorem provers able to reason with full first-order logic and even its extensions. So far, theorem provers have extensively been used for running experiments over TPTP, containing mainly problems with relatively small axiomatisations. A question arises whether such theorem provers can be used to reason in real time with large axiomatisations used in expressive ontologies such as SUMO. In this paper we answer this question affirmatively by showing that a carefully engineered theorem prover can answer queries to ontologies having over 15,000 first-order axioms with equality. Ontologies used in our experiments are based on the language KIF, whose expressive power goes far beyond the description-logic-based languages currently used in the Semantic Web.
Monotonicity Inference for Higher-Order Formulas
, 2010
Abstract

Cited by 9 (8 self)
Formulas are often monotonic in the sense that if the formula is satisfiable for given domains of discourse, it is also satisfiable for all larger domains. Monotonicity is undecidable in general, but we devised two calculi that infer it in many cases for higher-order logic. The stronger calculus has been implemented in Isabelle’s model finder Nitpick, where it is used to prune the search space, leading to dramatic speed improvements for formulas involving many atomic types.
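The monotonicity property described in this abstract can be illustrated by brute force over explicit finite domains. Modelling formulas as Python predicates on a domain is an assumption of this sketch; the paper's calculi work syntactically on higher-order formulas without enumerating models.

```python
# Brute-force illustration of (non-)monotonicity over finite domains.
from itertools import product


def holds(formula, size):
    """Check whether the formula holds in a domain of the given size."""
    return formula(range(size))


# "There exist two distinct elements": monotone -- once it holds,
# it keeps holding for every larger domain.
exists_distinct = lambda d: any(x != y for x, y in product(d, repeat=2))

# "All elements are equal": not monotone -- holds only at size 1.
all_equal = lambda d: all(x == y for x, y in product(d, repeat=2))
```

A model finder like Nitpick can exploit monotonicity of the first kind of formula to skip re-checking smaller domain sizes once a satisfying domain is found.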
Recognising textual entailment with robust logical inference
 In MLCW 2005, volume LNAI 3944
, 2006
Abstract

Cited by 8 (0 self)
Abstract. We use logical inference techniques for recognising textual entailment, with theorem proving operating on deep semantic interpretations as the backbone of our system. However, the performance of theorem proving on its own turns out to be highly dependent on a wide range of background knowledge, which is not necessarily included in publicly available knowledge sources. Therefore, we achieve robustness via two extensions. Firstly, we incorporate model building, a technique borrowed from automated reasoning, and show that it is a useful, robust method for approximating entailment. Secondly, we use machine learning to combine these deep semantic analysis techniques with simple shallow word overlap. The resulting hybrid model achieves high accuracy on the RTE test set, given the state of the art. Our results also show that the various techniques that we employ perform very differently on some of the subsets of the RTE corpus, and as a result it is useful to use the nature of the dataset as a feature.
Encoding Industrial Hardware Verification Problems into Effectively Propositional Logic
Abstract

Cited by 4 (2 self)
Abstract—Word-level bounded model checking and equivalence checking problems are naturally encoded in the theory of bit-vectors and arrays. The standard practice for deciding formulas of these theories in the hardware industry is either SAT (using bit-blasting) or SMT-based methods. These methods perform reasoning on a low level but perform it very efficiently. To find alternative, potentially promising model checking and equivalence checking methods, a natural idea is to lift reasoning from the bit and bit-vector levels to higher levels. In such an attempt, in [14] we proposed translating memory designs into the Effectively PRopositional (EPR) fragment of first-order logic. The first experiments with such a translation have been encouraging but raised some questions. Since the high-level encoding we used was incomplete (while avoiding bit-blasting), some equivalences could not be proved. Another problem was that there was no natural correspondence between models of EPR formulas and bit-vector-based models that would demonstrate non-equivalence and hence design errors. This paper addresses these problems by providing more refined translations of equivalence checking problems arising from hardware verification into EPR formulas. We provide three such translations and formulate their properties. All three translations are designed in such a way that models of EPR problems can be translated into bit-vector models demonstrating non-equivalence. We also evaluate the best EPR solvers on industrial equivalence checking problems and compare them with SMT solvers designed and tuned specifically for such formulas. We present empirical evidence demonstrating that EPR-based methods and solvers are competitive.