Results 1–10 of 87
Algorithms for the Satisfiability (SAT) Problem: A Survey
 DIMACS Series in Discrete Mathematics and Theoretical Computer Science
, 1996
Abstract

Cited by 145 (3 self)
The satisfiability (SAT) problem is a core problem in mathematical logic and computing theory. In practice, SAT is fundamental in solving many problems in automated reasoning, computer-aided design, computer-aided manufacturing, machine vision, databases, robotics, integrated circuit design, computer architecture design, and computer network design. Traditional methods treat SAT as a discrete, constrained decision problem. In recent years, many optimization methods, parallel algorithms, and practical techniques have been developed for solving SAT. In this survey, we present a general framework (an algorithm space) that integrates existing SAT algorithms into a unified perspective. We describe sequential and parallel SAT algorithms, including variable splitting, resolution, local search, global optimization, mathematical programming, and practical SAT algorithms. We give a performance evaluation of some existing SAT algorithms. Finally, we provide a set of practical applications of the sat...
Acquiring SearchControl Knowledge via Static Analysis
 Artificial Intelligence
, 1993
Abstract

Cited by 102 (2 self)
Explanation-Based Learning (EBL) is a widely used technique for acquiring search-control knowledge. Recently, Prieditis, van Harmelen, and Bundy pointed to the similarity between Partial Evaluation (PE) and EBL. However, EBL utilizes training examples whereas PE does not. It is natural to inquire, therefore, whether PE can be used to acquire search-control knowledge, and if so at what cost. This paper answers these questions by means of a case study comparing prodigy/ebl, a state-of-the-art EBL system, and static, a PE-based analyzer of problem-space definitions. When tested in prodigy/ebl's benchmark problem spaces, static generated search-control knowledge that was up to three times as effective as the knowledge learned by prodigy/ebl, and did so from twenty-six to seventy-seven times faster. The paper describes static's algorithms and compares its performance to prodigy/ebl's, noting when static's superior performance will scale up and when it will not. The paper concludes with several le...
An essential hybrid reasoning system: knowledge and symbol level accounts of KRYPTON
 In Proceedings of the 9th International Joint Conference on Artificial Intelligence
, 1985
Abstract

Cited by 82 (1 self)
Hybrid inference systems are an important way to address the fact that intelligent systems have multifaceted representational and reasoning competence. KRYPTON is an experimental prototype that competently handles both terminological and assertional knowledge; these two kinds of information are tightly linked by having sentences in an assertional component be formed using structured complex predicates defined in a complementary terminological component. KRYPTON is unique in that it combines, in a completely integrated fashion, a frame-based description language and a first-order resolution theorem-prover. We give here both a formal Knowledge Level view of the user interface to KRYPTON and the technical Symbol Level details of the integration of the two disparate components, thus providing an essential picture of the abstract function that KRYPTON computes and the implementation technology needed to make it work. We also illustrate the kind of complex question the system can answer.
Reactive Consistency Control in Deductive Databases
 ACM Transactions on Database Systems
, 1991
Abstract

Cited by 63 (6 self)
The classical treatment of consistency violations is to back out a database operation or transaction. In applications with large numbers of fairly complex consistency constraints this is clearly an unsatisfactory solution. Instead, if a violation is detected, the user should be given a diagnosis of the constraints that failed, a line of reasoning about the cause that could have led to the violation, and suggestions for a repair. The problem is particularly complicated in a deductive database system, where failures may be due to an inferred condition rather than simply a stored fact, but the repair can only be applied to the underlying facts. The paper presents a system which provides automated support in such situations. It concentrates on the concepts and ideas underlying the approach, an appropriate system architecture, and user guidance, and sketches some of the heuristics used to gain performance. 1 Introduction: A database is called consistent if it is a truthful model of a given mini...
A Simplifier for Propositional Formulas with Many Binary Clauses
, 2001
Abstract

Cited by 61 (3 self)
Deciding whether a propositional formula in conjunctive normal form is satisfiable (SAT) is an NP-complete problem. The problem becomes linear when the formula contains binary clauses only. Interestingly, the reduction to SAT of a number of well-known and important problems, such as classical AI planning and automatic test pattern generation for circuits, yields formulas containing many binary clauses. In this paper we introduce and experiment with 2-SIMPLIFY, a formula simplifier targeted at such problems. 2-SIMPLIFY constructs the transitive closure of the implication graph corresponding to the binary clauses in the formula and uses this graph to deduce new unit literals. The deduced literals are used to simplify the formula and update the graph, and so on, until stabilization. Finally, we use the graph to construct an equivalent, simpler set of binary clauses. Experimental evaluation of this simplifier on a number of benchmark formulas produced by encoding AI planning problems proves 2-SIMPLIFY to be a useful tool in many circumstances.
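The mechanism this abstract describes — turn each binary clause into two implications, close the implication graph transitively, and read off forced unit literals — can be sketched in a few lines. This is an illustrative toy under stated assumptions, not the authors' 2-SIMPLIFY implementation: literals are represented as signed integers, and the closure is computed by naive iteration rather than the paper's data structures.

```python
# Sketch of the binary-clause implication-graph idea (illustrative only).
# Literals are signed ints: x and -x. A binary clause (a OR b) yields the
# implications -a -> b and -b -> a.
def implication_closure(binary_clauses, variables):
    """Map each literal to the set of literals it transitively implies."""
    implies = {l: set() for v in variables for l in (v, -v)}
    for a, b in binary_clauses:
        implies[-a].add(b)
        implies[-b].add(a)
    changed = True
    while changed:  # naive transitive closure by iterating to a fixed point
        changed = False
        for l in implies:
            new = set()
            for m in implies[l]:
                new |= implies[m]
            if not new <= implies[l]:
                implies[l] |= new
                changed = True
    return implies

def forced_units(implies):
    # If literal l transitively implies its own negation, l cannot hold,
    # so -l is a deduced unit literal that can simplify the formula.
    return {-l for l, reach in implies.items() if -l in reach}

# (x1 OR x2), (NOT x2 OR x3), (NOT x1 OR x3): x3 is forced.
clauses = [(1, 2), (-2, 3), (-1, 3)]
print(forced_units(implication_closure(clauses, [1, 2, 3])))  # -> {3}
```

In the example, ¬x3 reaches both ¬x1 and ¬x2, which in turn force x3 via the clauses, so ¬x3 implies x3 and x3 is deduced as a unit.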
A Structural Theory of ExplanationBased Learning
 Artificial Intelligence
, 1992
Abstract

Cited by 60 (4 self)
The impact of Explanation-Based Learning (EBL) on problem-solving efficiency varies greatly from one problem space to another. In fact, seemingly minute modifications to a problem space's encoding can drastically alter EBL's impact. For example, while prodigy/ebl (a state-of-the-art EBL system) significantly speeds up the prodigy problem solver in the Blocksworld, prodigy/ebl actually slows prodigy down in a representational variant of the Blocksworld constructed by adding a single, carefully chosen macro-operator to the Blocksworld operator set. Although EBL has been tested experimentally, no theory has been put forth that accounts for such phenomena. This paper presents such a theory. The theory exhibits a correspondence between a graph representation of problem spaces and the proofs used by EBL systems to generate search-control knowledge. The theory relies on this correspondence to account for the variations in EBL's impact. This account is validated by static, a program that extract...
Logic and Databases: a 20 Year Retrospective
, 1996
Abstract

Cited by 58 (1 self)
At a workshop held in Toulouse, France in 1977, Gallaire, Minker and Nicolas stated that logic and databases was a field in its own right (see [131]). This was the first time that this designation was made. The impetus for this started approximately twenty years ago in 1976, when I visited Gallaire and Nicolas in Toulouse, France, which culminated in a workshop held in Toulouse, France in 1977. It is appropriate, then, to provide an assessment of what has been achieved in the twenty years since the field started as a distinct discipline. In this retrospective I shall review developments that have taken place in the field, assess the contributions that have been made, consider the status of implementations of deductive databases, and discuss the future of work in this area. 1 Introduction: As described in [234], the use of logic and deduction in databases started in the late 1960s. Prominent among the developments was the work by Levien and Maron [202, 203, 199, 200, 201] and Kuhns [1...
Computational Logic and Human Thinking: How to be Artificially Intelligent
, 2011
Abstract

Cited by 37 (10 self)
The mere possibility of Artificial Intelligence (AI) – of machines that can think and act as intelligently as humans – can generate strong emotions. While some enthusiasts are excited by the thought that one day machines may become more intelligent than people, many of its critics view such a prospect with horror. Partly because these controversies attract so much attention, one of the most important accomplishments of AI has gone largely unnoticed: the fact that many of its advances can also be used directly by people, to improve their own human intelligence. Chief among these advances is Computational Logic. Computational Logic builds upon traditional logic, which was originally developed to help people think more effectively. It employs the techniques of symbolic logic, which has been used to build the foundations of mathematics and computing. However, compared with traditional logic, Computational Logic is much more powerful; and compared with symbolic logic, it is much simpler and more practical. Although the applications of Computational Logic in AI require the use of mathematical notation, its human applications do not. As a consequence, I have written the main part of this book informally, to reach as wide an audience as possible. Because human thinking is also the subject of study in many other fields, I have drawn upon related studies in Cognitive Psychology, Linguistics, Philosophy, Law, Management Science and English
The Consistent Labeling Problem: Part I
 IEEE Trans. Pattern Anal. Mach. Intell
, 1979
Abstract

Cited by 33 (3 self)
In this first part of a two-part paper we introduce a general consistent labeling problem based on a unit constraint relation T containing N-tuples of units which constrain one another, and a compatibility relation R containing N-tuples of unit-label pairs specifying which N-tuples of units are compatible with which N-tuples of labels. We show that Latin square puzzles, finding N-ary relations, graph or automata homomorphisms, graph colorings, as well as determining satisfiability of propositional logic statements and solving scene and edge labeling problems, are all special cases of the general consistent labeling problem. We then discuss the various approaches that researchers have used to speed up the tree search required to find consistent labelings. Each of these approaches uses a particular lookahead operator to help eliminate backtracking in the tree search. Finally, we define the 4KP two-parameter class of lookahead operators which includes, as special cases, the operators other researchers have used. Index Terms: Backtracking, consistent labeling, graph coloring, homomorphisms, isomorphisms, lookahead operators, matching, N-ary relations, relaxation, scene analysis, subgraph, tree search.
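The problem statement above can be made concrete, for the binary case N = 2, with a small sketch. All names here are illustrative assumptions rather than notation from the paper: T holds the pairs of units that constrain each other, R holds the compatible unit-label pairs, and the search is plain backtracking tree search without the lookahead operators the paper studies.

```python
# Toy consistent-labeling solver for the binary case (N = 2); illustrative only.
# T: set of unit pairs that constrain one another.
# R: set of compatible ((unit, label), (unit, label)) pairs.
def consistent_labelings(units, labels, T, R):
    """Enumerate all complete labelings consistent with T and R."""
    solutions = []

    def extend(assignment):
        if len(assignment) == len(units):
            solutions.append(dict(assignment))
            return
        u = units[len(assignment)]
        for lab in labels:
            ok = all(
                ((v, assignment[v]), (u, lab)) in R
                or ((u, lab), (v, assignment[v])) in R
                for v in assignment
                if (u, v) in T or (v, u) in T
            )
            if ok:
                assignment[u] = lab
                extend(assignment)   # depth-first tree search
                del assignment[u]    # backtrack

    extend({})
    return solutions

# Graph 2-coloring of a single edge, one of the special cases listed above:
units, labels = ["a", "b"], [0, 1]
T = {("a", "b")}
R = {((u, x), (v, y)) for u in units for v in units
     for x in labels for y in labels if u != v and x != y}
print(consistent_labelings(units, labels, T, R))  # -> [{'a': 0, 'b': 1}, {'a': 1, 'b': 0}]
```

A lookahead operator, in this framing, would prune labels from yet-unassigned units before the recursive call instead of discovering the conflict only deeper in the tree.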
Afterthoughts on analogical representation
 In Proceedings of Theoretical Issues in Natural Language Processing
, 1975
Abstract

Cited by 33 (4 self)
relate some old philosophical issues about representation and reasoning to problems in Artificial Intelligence. A major theme of the paper was the importance of distinguishing "analogical" from "Fregean" representations. I still think the distinction is important, though perhaps not as important for current problems in A.I. as I used to think. In this paper I'll try to explain why. Throughout I'll use the term "representation" to refer to a more or less complex structure which has addressable and significant parts, and which as a whole is used to denote or refer to something else.