Results 1 - 10 of 48
The DLV System for Knowledge Representation and Reasoning
 ACM Transactions on Computational Logic, 2002
Abstract

Cited by 320 (78 self)
Disjunctive Logic Programming (DLP) is an advanced formalism for knowledge representation and reasoning, which is very expressive in a precise mathematical sense: it allows one to express every property of finite structures that is decidable in the complexity class Σ^P_2 (NP^NP). Thus, under widely believed assumptions, DLP is strictly more expressive than normal (disjunction-free) logic programming, whose expressiveness is limited to properties decidable in NP. Importantly, apart from enlarging the class of applications which can be encoded in the language, disjunction often allows for representing problems of lower complexity in a simpler and more natural fashion. This paper presents the DLV system, which is widely considered the state-of-the-art implementation of disjunctive logic programming, and addresses several aspects. As for problem solving, we provide a formal definition of its kernel language, function-free disjunctive logic programs (also known as disjunctive datalog), extended by weak constraints, which are a powerful tool to express optimization problems. We then illustrate the usage of DLV as a tool for knowledge representation and reasoning, describing a new declarative programming methodology which allows one to encode complex problems (up to ∆^P_3-complete problems) in a declarative fashion. On the foundational side, we provide a detailed analysis of the computational complexity of the language of ...
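The guess-and-check style of disjunctive datalog that this abstract describes can be illustrated with the classic graph 3-colouring encoding. Below is a minimal Python sketch of how such a program is read declaratively; the two DLV-style rules are quoted in the comments, while the Python just enumerates candidate colourings and applies the integrity constraint. The graph and function names are ours, not from the paper.

```python
# Brute-force reading of the standard disjunctive datalog 3-colouring
# encoding, which in DLV-style syntax would be:
#   col(X,r) v col(X,g) v col(X,b) :- node(X).   (guess a colour per node)
#   :- edge(X,Y), col(X,C), col(Y,C).            (check: no edge is monochromatic)
from itertools import product

def three_colourings(nodes, edges):
    """Yield every assignment node -> colour that satisfies the constraint."""
    for assignment in product("rgb", repeat=len(nodes)):
        col = dict(zip(nodes, assignment))
        # integrity constraint: no edge may connect two same-coloured nodes
        if all(col[x] != col[y] for x, y in edges):
            yield col

triangle = (["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")])
print(sum(1 for _ in three_colourings(*triangle)))  # 3! = 6 proper colourings
```

A real DLV run delegates the exponential guess to the solver; this sketch only makes the declarative semantics concrete.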
Testing Heuristics: We Have It All Wrong
 Journal of Heuristics, 1995
Abstract

Cited by 118 (2 self)
The competitive nature of most algorithmic experimentation is a source of problems that are all too familiar to the research community. It is hard to make fair comparisons between algorithms and to assemble realistic test problems. Competitive testing tells us which algorithm is faster but not why. Because it requires polished code, it consumes time and energy that could be spent doing more experiments. This paper argues that a more scientific approach of controlled experimentation, similar to that used in other empirical sciences, avoids or alleviates these problems. We have confused research and development; competitive testing is suited only for the latter. Most experimental studies of heuristic algorithms resemble track meets more than scientific endeavors. Typically an investigator has a bright idea for a new algorithm and wants to show that it works better, in some sense, than known algorithms. This requires computational tests, perhaps on a standard set of benchmark p...
The Constrainedness of Search
 In Proceedings of AAAI-96, 1999
Abstract

Cited by 116 (26 self)
We propose a definition of `constrainedness' that unifies two of the most common but informal uses of the term. These are that branching heuristics in search algorithms often try to make the most "constrained" choice, and that hard search problems tend to be "critically constrained". Our definition of constrainedness generalizes a number of parameters used to study phase transition behaviour in a wide variety of problem domains. As well as predicting the location of phase transitions in solubility, constrainedness provides insight into why problems at phase transitions tend to be hard to solve. Such problems are on a constrainedness "knife-edge", and we must search deep into the problem before they look more or less soluble. Heuristics that try to get off this knife-edge as quickly as possible by, for example, minimizing the constrainedness are often very effective. We show that heuristics from a wide variety of problem domains can be seen as minimizing the constrainedness (or proxies ...
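The definition the abstract sketches is usually written κ = 1 − log₂⟨Sol⟩ / N, where ⟨Sol⟩ is the expected number of solutions over the problem ensemble and N = log₂|state space|; κ ≈ 1 marks the critical region. The sketch below instantiates this for the standard ⟨n, m, p1, p2⟩ random binary CSP model; treat the model and its expected-solutions formula as our reading of the literature, not a quote from the paper.

```python
from math import log2

def kappa(state_space_bits, expected_solutions_bits):
    """kappa = 1 - log2<Sol> / N, with N = log2 |state space|.
    kappa ~ 0: underconstrained; kappa ~ 1: critically constrained."""
    return 1.0 - expected_solutions_bits / state_space_bits

def kappa_random_csp(n, m, p1, p2):
    # <n, m, p1, p2> model (an assumption here): n variables with m values,
    # each of the n*(n-1)/2 pairs constrained with probability p1, and each
    # value pair of a constrained pair forbidden with probability p2, so
    # |space| = m^n and <Sol> = m^n * (1 - p2)^(p1 * n*(n-1)/2).
    n_bits = n * log2(m)
    constraints = p1 * n * (n - 1) / 2
    sol_bits = n_bits + constraints * log2(1 - p2)
    return kappa(n_bits, sol_bits)

print(kappa_random_csp(2, 2, 1.0, 0.5))  # 0.5: one constraint halves <Sol>
```

Sweeping p2 while holding n, m, p1 fixed traces κ through the phase transition.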
Problem Structure in the Presence of Perturbations
 In Proceedings of the 14th National Conference on AI, 1997
Abstract

Cited by 70 (17 self)
Recent progress on search and reasoning procedures has been driven by experimentation on computationally hard problem instances. Hard random problem distributions are an important source of such instances. Challenge problems from the area of finite algebra have also stimulated research on search and reasoning procedures. Nevertheless, the relation of such problems to practical applications is somewhat unclear. Realistic problem instances clearly have more structure than the random problem instances, but, on the other hand, they are not as regular as the structured mathematical problems. We propose a new benchmark domain that bridges the gap between the purely random instances and the highly structured problems, by introducing perturbations into a structured domain. We will show how to obtain interesting search problems in this manner, and how such problems can be used to study the robustness of search control mechanisms. Our experiments demonstrate that the performan...
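The recipe the abstract describes — start from a fully structured instance and inject controlled perturbations — can be sketched with a Latin-square completion problem: build a perfectly structured square, then blank random cells so the solver must recover them. This is an illustrative instance of the general idea; the paper's actual generator and domain are not assumed here.

```python
import random

def perturbed_latin_square(n, holes, seed=0):
    """Start from the fully structured cyclic Latin square L[i][j] = (i+j) mod n
    and perturb it by blanking `holes` random cells (set to None); completing
    the result back to a Latin square is the induced search problem.
    A sketch of the structure-plus-perturbation recipe, not the paper's generator."""
    rng = random.Random(seed)
    square = [[(i + j) % n for j in range(n)] for i in range(n)]
    cells = [(i, j) for i in range(n) for j in range(n)]
    for i, j in rng.sample(cells, holes):
        square[i][j] = None  # a "hole" for the solver to fill
    return square
```

Varying `holes` interpolates between the trivially structured instance (no holes) and an essentially unconstrained one, which is exactly the bridge between structured and random instances that the abstract argues for.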
Implementing and Testing Expressive Description Logics: a Preliminary Report
1995
Abstract

Cited by 52 (5 self)
The aim of the crack project is the research and development of a knowledge representation architecture based on description logics. The crack system differs from other knowledge representation systems in the high expressivity of its language and the possibility of having sound and complete reasoning procedures. With respect to other systems available in the research community, crack is more expressive, it is expandable to new constructs, it treats the conceptual and individual levels in a homogeneous way, it is modular, and it is comparatively fast. However, crack's algorithms are not worst-case optimal, e.g. in some (arguably rare in practice) worst cases they may require exponential memory. The performance of the system has been tested against several different classes of random knowledge bases, characterized by an order parameter generating phase transitions in the satisfiability probability space.
Phase Transitions and Annealed Theories: Number Partitioning as a Case Study
 In Proceedings of ECAI-96, 1996
Abstract

Cited by 30 (9 self)
We outline a technique for studying phase transition behaviour in computational problems using number partitioning as a case study. We first build an "annealed" theory that assumes independence between parts of the number partition problem. Using this theory, we identify a parameter which represents the "constrainedness" of a problem. We determine experimentally the critical value of this parameter at which a rapid transition between soluble and insoluble problems occurs. Finite-size scaling methods developed in statistical mechanics describe the behaviour around the critical value. We identify phase transition behaviour in both the decision and optimization versions of number partitioning, in the size of the optimal partition, and in the quality of heuristic solutions. This case study demonstrates how annealed theories and finite-size scaling allow us to compare algorithms and heuristics in a precise and quantitative manner.
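The decision version of number partitioning that the abstract refers to — can n numbers be split into two halves whose sums differ by at most 1? — is easy to make concrete. A common parameterization in the literature draws the n numbers uniformly from {1, …, 2^m}, with the ratio m/n acting as the control parameter and the soluble/insoluble transition reported near m/n ≈ 1; treat that setup as an assumption here, not the paper's exact one.

```python
import random

def has_perfect_partition(nums):
    """Decision version: can nums be split into two parts whose sums differ
    by at most 1?  Subset-sum dynamic programming over reachable sums."""
    total = sum(nums)
    reachable = {0}
    for x in nums:
        reachable |= {s + x for s in reachable}
    # A subset summing to floor(total/2) leaves a difference of 0 or 1.
    return total // 2 in reachable

def solubility(n, m, trials=200, seed=0):
    """Estimated fraction of random instances (n numbers uniform in
    [1, 2**m]) with a perfect partition; sweeping m/n traces the transition."""
    rng = random.Random(seed)
    hits = sum(has_perfect_partition([rng.randint(1, 2 ** m) for _ in range(n)])
               for _ in range(trials))
    return hits / trials
```

Plotting `solubility` against m/n for a few sizes n, and rescaling the horizontal axis, is the finite-size-scaling exercise the abstract summarises.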
Scaling Effects in the CSP Phase Transition
1995
Abstract

Cited by 27 (16 self)
Phase transitions in constraint satisfaction problems (CSP's) are the subject of intense study. We identify an order parameter for random binary CSP's. There is a rapid transition in the probability of a CSP having a solution at a critical value of this parameter. The order parameter allows different phase transition behaviour to be compared in a uniform manner, for example CSP's generated under different regimes. We then show that within classes, the scaling of behaviour can be modelled by a technique called "finite-size scaling". This applies not only to the probability of solubility, as has been observed before in other NP problems, but also to search cost, the first time this has been observed. Furthermore, the technique applies with equal validity to several different methods of varying problem size. As well as contributing to the understanding of phase transitions, we contribute by allowing much finer-grained comparison of algorithms, and for accurate empirical extrapolations of beha...
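The experiments the abstract summarises rest on generating random binary CSPs and testing their solubility. Below is a minimal sketch under the standard ⟨n, m, p1, p2⟩ model (our reading of the usual setup, not necessarily the paper's exact generation regime), with a brute-force solubility test adequate for the small sizes used in scaling studies.

```python
import random
from itertools import combinations, product

def random_binary_csp(n, m, p1, p2, seed=0):
    """<n, m, p1, p2> model (an assumption here): each of the n*(n-1)/2
    variable pairs is constrained with probability p1, and each of the m*m
    value pairs of a constrained pair is forbidden with probability p2.
    Returns {(i, j): set of allowed value pairs}."""
    rng = random.Random(seed)
    constraints = {}
    for i, j in combinations(range(n), 2):
        if rng.random() < p1:
            constraints[(i, j)] = {vals for vals in product(range(m), repeat=2)
                                   if rng.random() >= p2}
    return constraints

def soluble(n, m, constraints):
    """Brute-force solubility test: does any full assignment satisfy
    every constraint?  Fine for the tiny n used in scaling plots."""
    return any(all((a[i], a[j]) in allowed
                   for (i, j), allowed in constraints.items())
               for a in product(range(m), repeat=n))
```

Estimating the fraction of soluble instances while sweeping p2 reproduces the rapid transition in solubility probability that the abstract reports.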
Relational Learning as Search in a Critical Region
 Journal of Machine Learning Research, 2003
Abstract

Cited by 27 (2 self)
Machine learning strongly relies on the covering test to assess whether a candidate hypothesis covers training examples. The present paper investigates learning relational concepts from examples, termed relational learning or inductive logic programming. In particular, it investigates the chances of success and the computational cost of relational learning, which appears to be severely affected by the presence of a phase transition in the covering test. To this end, three up-to-date relational learners have been applied to a wide range of artificial, fully relational learning problems. A first experimental observation is that the phase transition behaves as an attractor for relational learning; no matter which region the learning problem belongs to, all three learners produce hypotheses lying within or close to the phase transition region. Second, a failure region appears. All three learners fail to learn any accurate hypothesis in this region. Quite surprisingly, the probability of failure does not systematically increase with the size of the underlying target concept: under some circumstances, longer concepts may be easier to accurately approximate than shorter ones. Some interpretations for these findings are proposed and discussed.
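The covering test at the centre of this abstract asks whether some substitution of the hypothesis's variables maps every literal of the hypothesis into the example's ground facts — a matching problem that is itself a CSP, which is where the phase transition enters. A minimal backtracking sketch, with our own data conventions (literals as tuples, Prolog-style uppercase-initial variables), not the learners' internal representation:

```python
def covers(hypothesis, example):
    """Covering test: does a single substitution map every hypothesis literal
    into the example's ground facts?  Literals are (pred, arg, ...) tuples;
    strings starting with an uppercase letter are variables (an assumed,
    Prolog-style convention).  Backtracking search over candidate matches."""
    def match(lits, subst):
        if not lits:
            return True  # every literal matched under one substitution
        pred, *args = lits[0]
        for fact in example:
            if fact[0] != pred or len(fact) != len(lits[0]):
                continue
            new, ok = dict(subst), True
            for a, v in zip(args, fact[1:]):
                if a[0].isupper():                # variable: bind or check
                    if new.setdefault(a, v) != v:
                        ok = False
                        break
                elif a != v:                      # constant mismatch
                    ok = False
                    break
            if ok and match(lits[1:], new):
                return True
        return False  # no fact matches this literal: backtrack
    return match(list(hypothesis), {})
```

The number of literals and the number of facts play the roles of the control parameters whose critical values make this test exponentially hard in practice.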
The Phase Transition Behaviour of Maintaining Arc Consistency
 In Proceedings of ECAI-96, 1995
Abstract

Cited by 24 (6 self)
In this paper, we study two recently presented algorithms employing a "full lookahead" strategy: MAC (Maintaining Arc Consistency); and the hybrid MAC-CBJ, which combines conflict-directed backjumping capability with MAC. We observe their behaviour with respect to the phase transition properties of randomly-generated binary constraint satisfaction problems, and investigate the benefits of maintaining a higher level of consistency during search by comparing MAC and MAC-CBJ with the FC and FC-CBJ algorithms, which maintain only node consistency. The phase transition behaviour that has been observed for many classes of problem as a control parameter is varied has prompted a flurry of research activity in recent years. Studies of these transitions, from regions where most problems are easy and soluble to regions where most are easy but insoluble, have raised a number of important issues such as the phenomenon of exceptionally hard problems ("ehps") in the easy-soluble region, and the grow...
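The arc-consistency propagation that MAC maintains after every assignment can be sketched with the classic AC-3 scheme: repeatedly delete values that have no support in a neighbouring domain until a fixed point (or a domain wipe-out). The data conventions below are ours; the algorithmic idea is the standard AC-3.

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3 style arc consistency.  `domains` maps variable -> set of values;
    `constraints` maps an ordered pair (x, y) -> set of allowed (vx, vy)
    value pairs, with both directions of each constraint present.
    Prunes `domains` in place; returns False on a domain wipe-out, else True.
    MAC would rerun this after every assignment made during search."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        allowed = constraints[(x, y)]
        # keep only values of x that have at least one support in y's domain
        supported = {vx for vx in domains[x]
                     if any((vx, vy) in allowed for vy in domains[y])}
        if supported != domains[x]:
            domains[x] = supported
            if not supported:
                return False  # wipe-out: the current assignments are inconsistent
            queue.extend(arc for arc in constraints if arc[1] == x)
    return True
```

Forward checking, by contrast, only revises the domains of uninstantiated variables directly constrained by the current assignment, which is the weaker level of consistency the abstract compares against.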