Results 1–10 of 45
Relational Learning as Search in a Critical Region
Journal of Machine Learning Research, 2003
Abstract

Cited by 28 (2 self)
Machine learning strongly relies on the covering test to assess whether a candidate hypothesis covers training examples. The present paper investigates learning relational concepts from examples, termed relational learning or inductive logic programming. In particular, it investigates the chances of success and the computational cost of relational learning, which appear to be severely affected by the presence of a phase transition in the covering test. To this end, three up-to-date relational learners were applied to a wide range of artificial, fully relational learning problems. A first experimental observation is that the phase transition behaves as an attractor for relational learning: no matter which region the learning problem belongs to, all three learners produce hypotheses lying within or close to the phase transition region. Second, a failure region appears: all three learners fail to learn any accurate hypothesis in this region. Quite surprisingly, the probability of failure does not systematically increase with the size of the underlying target concept; under some circumstances, longer concepts may be easier to approximate accurately than shorter ones. Some interpretations of these findings are proposed and discussed.
Statistical regimes across constrainedness regions
Constraints, 2004
Abstract

Cited by 27 (3 self)
Abstract. Much progress has been made in boosting the effectiveness of backtrack-style search methods. In addition, during the last decade, a much better understanding of problem hardness, typical-case complexity, and backtrack search behavior has been obtained. One example of a recent insight into backtrack search concerns so-called heavy-tailed behavior in randomized versions of backtrack search. Such heavy tails explain the large variations in runtime often observed in practice. However, heavy-tailed behavior certainly does not occur on all instances. This has led to a need for a more precise characterization of when heavy-tailedness does and does not occur in backtrack search. In this paper, we provide such a characterization. We identify different statistical regimes of the tail of the runtime distributions of randomized backtrack search methods and show how they are correlated with the “sophistication” of the search procedure combined with the inherent hardness of the instances. We also show that the runtime distribution regime is highly correlated with the distribution of the depth of inconsistent subtrees discovered during the search. In particular, we show that an exponential distribution of inconsistent-subtree depth, combined with a search space that grows exponentially with that depth, implies heavy-tailed behavior.
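The closing claim of this abstract can be reproduced in a few lines: if inconsistent-subtree depth D is exponentially distributed and refuting a subtree costs on the order of b^D nodes, the resulting runtime is Pareto-distributed, i.e. heavy-tailed. A minimal Monte Carlo sketch (the rate and branching factor are illustrative assumptions, not values from the paper):

```python
import random

def sample_backtrack_cost(rate=1.0, branching=2.0):
    """Cost of refuting one inconsistent subtree: depth D is
    exponentially distributed, and the subtree holds ~ branching**D
    nodes.  Both parameters are illustrative, not from the paper."""
    depth = random.expovariate(rate)
    return branching ** depth

random.seed(0)
costs = sorted(sample_backtrack_cost() for _ in range(100_000))

# With D ~ Exp(1) and cost = 2**D, P(cost > x) = x**(-1/ln 2): a
# Pareto tail.  Heavy-tailedness shows up as extreme high quantiles
# dwarfing the median.
median = costs[len(costs) // 2]
q999 = costs[int(0.999 * len(costs))]
print(f"median={median:.2f}  99.9th percentile={q999:.1f}")
```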
Quantum Computing and Phase Transitions in Combinatorial Search
J. of Artificial Intelligence Research, 1996
Abstract

Cited by 21 (7 self)
We introduce an algorithm for combinatorial search on quantum computers that is capable of significantly concentrating amplitude into solutions for some NP search problems, on average. This is done by exploiting the same aspects of problem structure as used by classical backtrack methods to avoid unproductive search choices. This quantum algorithm is much more likely to find solutions than the simple direct use of quantum parallelism. Furthermore, empirical evaluation on small problems shows that this quantum algorithm displays the same phase transition behavior, and at the same location, as seen in many previously studied classical search methods. Specifically, difficult problem instances are concentrated near the abrupt change from underconstrained to overconstrained problems.
Problem Difficulty for Tabu Search in Job-Shop Scheduling
Artificial Intelligence, 2002
Abstract

Cited by 21 (8 self)
Tabu search algorithms are among the most effective approaches for solving the job-shop scheduling problem (JSP). Yet we have little understanding of why these algorithms work so well, and under what conditions. We develop a model of problem difficulty for tabu search in the JSP, borrowing from similar models developed for SAT and other NP-complete problems. We show that the mean distance between random local optima and the nearest optimal solution is highly correlated with the cost of locating optimal solutions to typical, random JSPs. Additionally, this model accounts for the cost of locating suboptimal solutions, and provides an explanation for differences in the relative difficulty of square versus rectangular JSPs. We also identify two important limitations of our model. First, model accuracy is inversely correlated with problem difficulty, and is exceptionally poor for rare, very high-cost problem instances. Second, the model is significantly less accurate for structured, non-random JSPs. Our results are also likely to be useful in future research on difficulty models of local search in SAT, as local search cost in both SAT and the JSP is largely dictated by the same search-space features. Similarly, our research represents the first attempt to quantitatively model the cost of tabu search for any NP-complete problem, and may be leveraged in an effort to understand tabu search in problems other than job-shop scheduling.
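For readers unfamiliar with the method this abstract models, here is a minimal tabu search sketch: each step takes the best single-bit flip whose bit is not tabu (with an aspiration override), then makes that bit tabu for a fixed tenure. The bit-vector problem, Hamming-distance cost, tenure, and iteration budget are illustrative assumptions; the paper's JSP neighborhood is far richer.

```python
def tabu_search(cost, n_bits, iters=50, tenure=3):
    """Minimal tabu search over bit vectors: each step takes the best
    single-bit flip that is either non-tabu or beats the best cost
    seen so far (the aspiration criterion), then marks the flipped
    bit tabu for `tenure` iterations."""
    current = [0] * n_bits
    best, best_cost = current[:], cost(current)
    tabu = {}  # bit index -> iteration until which the bit stays tabu
    for it in range(iters):
        move, move_cost = None, None
        for i in range(n_bits):
            cand = current[:]
            cand[i] ^= 1
            c = cost(cand)
            if tabu.get(i, -1) < it or c < best_cost:  # aspiration
                if move_cost is None or c < move_cost:
                    move, move_cost = i, c
        current[move] ^= 1
        tabu[move] = it + tenure
        if move_cost < best_cost:
            best, best_cost = current[:], move_cost
    return best, best_cost

# Toy objective: Hamming distance to a hidden target pattern.
target = [1, 0, 1, 1, 0, 1]
hamming = lambda bits: sum(b != t for b, t in zip(bits, target))
print(tabu_search(hamming, 6))  # ([1, 0, 1, 1, 0, 1], 0)
```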
Phase Transitions and Backbones of 3-SAT and Maximum 3-SAT
In Proc. of 7th Int. Conf. on Principles and Practice of Constraint Programming (CP-2001), 2001
Abstract

Cited by 17 (4 self)
Many real-world problems involve constraints that cannot all be satisfied. Solving an overconstrained problem then means finding solutions that minimize the number of constraints violated, which is an optimization problem. In this research, we study the behavior of the phase transitions and backbones of constraint optimization problems. We first investigate the relationship between the phase transitions of Boolean satisfiability, or more precisely 3-SAT (a well-studied NP-complete decision problem), and the phase transitions of MAX 3-SAT (an NP-hard optimization problem). To bridge the gap between the easy-hard-easy phase transitions of 3-SAT and the easy-hard transitions of MAX 3-SAT, we analyze bounded 3-SAT, in which solutions of bounded quality, e.g., solutions with at most a constant number of constraints violated, are sufficient.
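The "bounded" decision problem this abstract describes is easy to state in code. A brute-force sketch, using DIMACS-style signed-integer literals (the tiny 2-CNF demo formula is an illustrative stand-in for real 3-SAT instances):

```python
import itertools

def violated(clauses, assignment):
    """Count clauses falsified by a truth assignment.  A clause is a
    tuple of non-zero ints; the sign gives the literal's polarity."""
    return sum(
        all(assignment[abs(lit)] != (lit > 0) for lit in clause)
        for clause in clauses
    )

def bounded_sat(clauses, n_vars, bound):
    """Decision version of bounded satisfiability: is there an
    assignment violating at most `bound` clauses?  bound=0 is plain
    SAT; the smallest feasible bound is the MAX-SAT optimum."""
    for bits in itertools.product([False, True], repeat=n_vars):
        if violated(clauses, dict(enumerate(bits, start=1))) <= bound:
            return True
    return False

# All four 2-clauses over x1, x2: unsatisfiable, but every assignment
# falsifies exactly one clause, so a bound of 1 suffices.
clauses = [(1, 2), (1, -2), (-1, 2), (-1, -2)]
print(bounded_sat(clauses, 2, 0), bounded_sat(clauses, 2, 1))  # False True
```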
Market Protocols for Decentralized Supply Chain Formation
2001
Abstract

Cited by 15 (3 self)
In order to respond effectively to changing market conditions, business partners must be able to rapidly form supply chains. This thesis approaches the problem of automating supply chain formation—the process of determining the participants in a supply chain, who will exchange what with whom, and the terms of the exchanges—within an economic framework. In this thesis, supply chain formation is formalized as task dependency networks. This model captures subtask decomposition in the presence of resource contention—two important and challenging aspects of supply chain formation. In order to form supply chains in a decentralized fashion, price systems provide an economic framework for guiding the decisions of self-interested agents. In competitive price equilibrium, agents choose optimal allocations with respect to prices, and outcomes are optimal overall. Approximate competitive equilibria yield approximately optimal allocations. Different market protocols are proposed for agents to negotiate the allocation of resources to form supply chains. In the presence of resource contention, these protocols produce better solutions than the greedy protocols common in the artificial intelligence ...
Distributed stochastic search for constraint satisfaction and optimization: Parallelism, phase transitions and performance
In PAS, 2002
Abstract

Cited by 15 (0 self)
Many distributed problems can be captured as distributed constraint satisfaction problems (CSPs) and constraint optimization problems (COPs). In this research, we study an existing distributed search method, called the distributed stochastic algorithm (DSA), and its variations for solving distributed CSPs and COPs. We analyze the relationship between the degree of parallel execution of distributed processes and DSAs' performance, including solution quality and communication cost. Our experimental results show that DSAs' performance exhibits phase-transition patterns: when the degree of parallel execution increases beyond some critical level, DSAs' performance degrades abruptly and dramatically, changing from near-optimal solutions to solutions even worse than random ones. Our experimental results also show that DSAs are generally more effective and efficient than the distributed breakout algorithm on many network structures, particularly on overconstrained structures, finding better solutions with lower communication cost.
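The degradation at full parallelism reported in this abstract can be glimpsed even on a toy instance. A minimal sketch of one DSA round for distributed graph coloring (the strict-improvement move rule and the triangle demo are illustrative assumptions, not the paper's exact protocol): at p = 1 every agent moves at once and the system thrashes in lockstep.

```python
import random

def dsa_step(graph, colors, assignment, p):
    """One synchronous round: each agent activates independently with
    probability p (the degree of parallelism) and switches to a color
    that strictly reduces its conflicts, judged against last round's
    assignment."""
    view = dict(assignment)  # agents only see the previous round

    def conflicts(node, color):
        return sum(view[nb] == color for nb in graph[node])

    for node in graph:
        if random.random() < p:
            best = min(colors, key=lambda c: conflicts(node, c))
            if conflicts(node, best) < conflicts(node, assignment[node]):
                assignment[node] = best
    return assignment

# Triangle graph, everyone starting with color 0.  With p = 1 all three
# agents jump together to the same fresh color every round, so the three
# conflicts never go away; a smaller p would let one agent break ranks.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
state = dsa_step(graph, [0, 1, 2], {n: 0 for n in graph}, p=1.0)
print(state)  # {0: 1, 1: 1, 2: 1}
```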
Planning with Conflicting Advice
2000
Abstract

Cited by 12 (1 self)
The paradigm of advisable planning, in which a user provides guidance to influence the content of solutions produced by an underlying planning system, holds much promise for improved usability of planning technology. The success of this approach, however, requires that a planner respond appropriately when presented with conflicting advice. This paper introduces two contrasting methods for planning with conflicting advice, suited to different user requirements. Soft enforcement embodies a heuristic approach that prefers planning choices consistent with specified advice but will disregard advice that introduces conflicts; it enables rapid generation of solutions, but with suboptimal results. Local maxima search navigates the space of advice subsets, using strict enforcement techniques to identify satisfiable subsets of advice; as more time is allocated, the search yields increasingly better results. The paper presents specific algorithms f...
Optimisation Techniques for Expressive Description Logics
1997
Abstract

Cited by 11 (1 self)
This report describes and evaluates optimisation techniques for a tableau-based satisfiability testing algorithm used to compute subsumption in Grail, an expressive description logic. Five techniques are studied in detail: normalisation and encoding, indexing, semantic branching, dependency-directed backtracking, and caching. The effectiveness of these techniques is evaluated by empirical testing using a large knowledge base from the Galen project. The performance of the optimised classifier and subsumption test is also compared with that of the Kris classifier and the KSAT satisfiability testing procedure, using both the Galen knowledge base and randomly generated test data.