Results 1–10 of 31
Backbones in Optimization and Approximation
 In IJCAI-01
, 2001
"... We study the impact of backbones in optimization and approximation problems. We show that some optimization problems like graph coloring resemble decision problems, with problem hardness positively correlated with backbone size. For other optimization problems like blocks world planning and tr ..."
Abstract

Cited by 28 (3 self)
We study the impact of backbones in optimization and approximation problems. We show that some optimization problems like graph coloring resemble decision problems, with problem hardness positively correlated with backbone size. For other optimization problems like blocks world planning and traveling salesperson problems, problem hardness is weakly and negatively correlated with backbone size, while the cost of finding optimal and approximate solutions is positively correlated with backbone size. A third class of optimization problems like number partitioning have regions of both types of behavior. We find that to observe the impact of backbone size on problem hardness, it is necessary to eliminate some symmetries, perform trivial reductions and factor out the effective problem size.
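The backbone notion in this abstract can be made concrete with a small brute-force sketch (an illustration, not the authors' code; the signed-integer clause encoding and function name are assumptions):

```python
from itertools import product

def backbone(n_vars, clauses):
    """Brute-force the backbone of a small CNF formula.

    clauses: list of clauses, each a tuple of non-zero ints, where
    literal v means "variable v is True" and -v means "variable v is
    False".  Returns the set of literals fixed in every minimum-violation
    assignment (for satisfiable formulas, every satisfying assignment).
    """
    best_cost, best = None, []
    for bits in product([False, True], repeat=n_vars):
        # Count clauses with no satisfied literal under this assignment.
        cost = sum(
            not any(bits[abs(l) - 1] == (l > 0) for l in cl)
            for cl in clauses
        )
        if best_cost is None or cost < best_cost:
            best_cost, best = cost, [bits]
        elif cost == best_cost:
            best.append(bits)
    # A variable is in the backbone if it takes one value in all optima.
    fixed = set()
    for v in range(n_vars):
        vals = {a[v] for a in best}
        if len(vals) == 1:
            fixed.add(v + 1 if vals.pop() else -(v + 1))
    return fixed
```

The same routine covers both the decision and optimization views: for satisfiable formulas the backbone ranges over satisfying assignments, otherwise over minimum-cost ones.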
Condensing Uncertainty via Incremental Treatment Learning
 Annals of Software Engineering, Special Issue on Computational Intelligence. To appear.
, 2002
"... Models constrain the range of possible behaviors de£ned for a domain. When parts of a model are uncertain, the possible behaviors may be a data cloud: i.e. an overwhelming range of possibilities that bewilder an analyst. Faced with large data clouds, it is hard to demonstrate that any particular de ..."
Abstract

Cited by 22 (18 self)
Models constrain the range of possible behaviors defined for a domain. When parts of a model are uncertain, the possible behaviors may be a data cloud: i.e. an overwhelming range of possibilities that bewilder an analyst. Faced with large data clouds, it is hard to demonstrate that any particular decision leads to a particular outcome. Even if we can’t make definite decisions from such models, it is possible to find decisions that reduce the variance of values within a data cloud. Also, it is possible to change the range of these future behaviors such that the cloud condenses to some improved mode. Our approach uses two tools. Firstly, a model simulator is constructed that knows the range of possible values for uncertain parameters. Secondly, the TAR2 treatment learner uses the output from the simulator to incrementally learn better constraints. In our incremental treatment learning cycle, users review newly discovered treatments before they are added to a growing pool of constraints used by the model simulator.
Problem Difficulty for Tabu Search in Job-Shop Scheduling
 Artificial Intelligence
, 2002
"... Tabu search algorithms are among the most effective approaches for solving the jobshop scheduling problem (JSP). Yet, we have little understanding of why these algorithms work so well, and under what conditions. We develop a model of problem difficulty for tabu search in the JSP, borrowing from sim ..."
Abstract

Cited by 20 (7 self)
Tabu search algorithms are among the most effective approaches for solving the job-shop scheduling problem (JSP). Yet, we have little understanding of why these algorithms work so well, and under what conditions. We develop a model of problem difficulty for tabu search in the JSP, borrowing from similar models developed for SAT and other NP-complete problems. We show that the mean distance between random local optima and the nearest optimal solution is highly correlated with the cost of locating optimal solutions to typical, random JSPs. Additionally, this model accounts for the cost of locating suboptimal solutions, and provides an explanation for differences in the relative difficulty of square versus rectangular JSPs. We also identify two important limitations of our model. First, model accuracy is inversely correlated with problem difficulty, and is exceptionally poor for rare, very high-cost problem instances. Second, the model is significantly less accurate for structured, non-random JSPs. Our results are also likely to be useful in future research on difficulty models of local search in SAT, as local search cost in both SAT and the JSP is largely dictated by the same search space features. Similarly, our research represents the first attempt to quantitatively model the cost of tabu search for any NP-complete problem, and may possibly be leveraged in an effort to understand tabu search in problems other than job-shop scheduling.
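The cost predictor this abstract describes — mean distance from random local optima to the nearest optimal solution — can be sketched generically over bit-string solution encodings (an illustration only; the paper itself works over JSP schedules, and these names are ours):

```python
def mean_distance_to_nearest_optimum(local_optima, optima):
    """Mean Hamming distance from each sampled local optimum to its
    nearest globally optimal solution, over bit-string encodings."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return sum(min(hamming(s, o) for o in optima)
               for s in local_optima) / len(local_optima)
```

Under the model in the abstract, ensembles with a larger value of this statistic should show a higher search cost for tabu-style local search.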
Phase Transitions and Backbones of 3-SAT and Maximum 3-SAT
 In Proc. of 7th Int. Conf. on Principles and Practice of Constraint Programming (CP-2001)
, 2001
"... Many realworld problems involve constraints that cannot be all satisfied. Solving an overconstrained problem then means to find solutions minimizing the number of constraints violated, which is an optimization problem. In this research, we study the behavior of the phase transitions and backbones o ..."
Abstract

Cited by 17 (4 self)
Many real-world problems involve constraints that cannot all be satisfied. Solving an over-constrained problem then means finding solutions that minimize the number of constraints violated, which is an optimization problem. In this research, we study the behavior of the phase transitions and backbones of constraint optimization problems. We first investigate the relationship between the phase transitions of Boolean satisfiability, or precisely 3-SAT (a well-studied NP-complete decision problem), and the phase transitions of MAX 3-SAT (an NP-hard optimization problem). To bridge the gap between the easy-hard-easy phase transitions of 3-SAT and the easy-hard transitions of MAX 3-SAT, we analyze bounded 3-SAT, in which solutions of bounded quality, e.g., solutions with at most a constant number of constraints violated, are sufficient.
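Bounded 3-SAT as defined in this abstract has a simple brute-force decision sketch (illustrative, not the paper's procedure; the signed-integer clause encoding is an assumption). Bound 0 recovers the 3-SAT decision problem; minimizing the bound is MAX 3-SAT:

```python
from itertools import product

def bounded_sat(n_vars, clauses, bound):
    """Is there an assignment violating at most `bound` clauses?"""
    for bits in product([False, True], repeat=n_vars):
        violated = sum(
            not any(bits[abs(l) - 1] == (l > 0) for l in cl)
            for cl in clauses
        )
        if violated <= bound:
            return True
    return False
```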
Lean clause-sets: Generalizations of minimally unsatisfiable clause-sets
 Discrete Applied Mathematics
, 2000
"... We study the problem of (efficiently) deleting such clauses from conjunctive normal forms (clausesets) which can not contribute to any proof of unsatisfiability. For that purpose we introduce the notion of an autarky system, associated with a canonical normal form for every clauseset by deleti ..."
Abstract

Cited by 15 (8 self)
We study the problem of (efficiently) deleting from conjunctive normal forms (clause-sets) those clauses which cannot contribute to any proof of unsatisfiability. For that purpose we introduce the notion of an autarky system, associated with a canonical normal form for every clause-set obtained by deleting superfluous clauses. Clause-sets where no clauses can be deleted are called lean, a natural generalization of minimally unsatisfiable clause-sets, opening the possibility for combinatorial approaches (and including also satisfiable instances). Three special examples of autarky systems are considered: general autarkies, linear autarkies (based on linear programming) and matching autarkies (based on matching theory). We give new characterizations of lean and linearly lean clause-sets by "universal linear programming problems," while matching lean clause-sets are characterized in terms of "deficiency," the difference between the number of clauses and the number of variables, and ...
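The deficiency measure and the matching view from this abstract can be sketched directly (an illustration under an assumed signed-integer clause encoding, not the paper's algorithms). Deficiency is the number of clauses minus the number of distinct variables; by Hall's theorem, a matching in the clause-variable incidence graph that touches every clause exists iff no subset of clauses has positive deficiency:

```python
def deficiency(clauses):
    """Deficiency = number of clauses minus number of distinct variables."""
    variables = {abs(l) for cl in clauses for l in cl}
    return len(clauses) - len(variables)

def has_clause_saturating_matching(clauses):
    """Maximum bipartite matching (clauses vs. the variables they
    contain) via augmenting paths; True iff every clause is matched."""
    match = {}  # variable -> index of the clause currently matched to it

    def augment(i, seen):
        for v in {abs(l) for l in clauses[i]}:
            if v in seen:
                continue
            seen.add(v)
            # Take a free variable, or recursively re-route its owner.
            if v not in match or augment(match[v], seen):
                match[v] = i
                return True
        return False

    return all(augment(i, set()) for i in range(len(clauses)))
```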
Reusing models for requirements engineering
 In First International Workshop on Model-Based Requirements Engineering
, 2001
"... A problem with modelbased requirements engineering is that new projects may lack the data required to customize old models. Such data droughts are a common problem in software engineering and are particularly acute in early life cycle activities such as requirements engineering. When specific data ..."
Abstract

Cited by 15 (13 self)
A problem with model-based requirements engineering is that new projects may lack the data required to customize old models. Such data droughts are a common problem in software engineering and are particularly acute in early life cycle activities such as requirements engineering. When specific data relevant to a new project is missing, one technique is to simulate a model across the range of possibilities that might be relevant to the project. This generates voluminous output, which can be summarized via a new machine learning technique called treatment learning.
The Backdoor Key: A Path to Understanding Problem Hardness
 In Proc. of the 19th Nat. Conf. on AI
, 2004
"... We introduce our work on the backdoor key, a concept that shows promise for characterizing problem hardness in backtracking search algorithms. The general notion of backdoors was recently introduced to explain the source of heavytailed behaviors in backtracking algorithms (Williams, Gomes, & S ..."
Abstract

Cited by 15 (0 self)
We introduce our work on the backdoor key, a concept that shows promise for characterizing problem hardness in backtracking search algorithms. The general notion of backdoors was recently introduced to explain the source of heavy-tailed behaviors in backtracking algorithms (Williams, Gomes, & Selman 2003a; 2003b). We describe empirical studies showing that the key fraction, i.e., the ratio of the key size to the corresponding backdoor size, is a good predictor of problem hardness, both for ensembles and for individual instances within an ensemble, in structured domains with a large key fraction.
Local Search on Random 2+p-SAT
 In Proc. of the 14th ECAI
, 2000
"... . Random 2+pSAT interpolates between the polynomialtime problem Random 2SAT when p = 0 and the NPcomplete problem Random 3SAT when p = 1. At some value p = p0 0:41, a dramatic change in the structural nature of instances is predicted by statistical mechanics methods. This is reflected by a chan ..."
Abstract

Cited by 9 (2 self)
Random 2+p-SAT interpolates between the polynomial-time problem Random 2-SAT when p = 0 and the NP-complete problem Random 3-SAT when p = 1. At some value p = p0 ≈ 0.41, a dramatic change in the structural nature of instances is predicted by statistical mechanics methods. This is reflected by a change in the typical cost scaling for a complete search method, TABLEAU, seen experimentally. We show empirically the same change of behaviour in the local search algorithm NOVELTY+, a recent variant of WSAT. Between p = 0.3 and p = 0.5 we see the typical cost scaling of NOVELTY+ at the 50% satisfiability point apparently change from slow polynomial growth to superpolynomial. That this behaviour is seen in two such different algorithms lends credibility to the hypothesis that there is a change of typical-case complexity around p0. Previous work linked the emergence of a backbone of fully constrained variables to the cost peak seen in Random k-SAT. Initial experiments suggest that for those...
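A Random 2+p-SAT instance generator is simple to sketch (illustrative only; the function and parameter names are ours, not from the paper): each clause is a 3-clause with probability p and a 2-clause otherwise, over distinct variables with random signs:

```python
import random

def random_2p_sat(n_vars, n_clauses, p, rng=random):
    """Generate a Random 2+p-SAT formula as a list of literal tuples."""
    formula = []
    for _ in range(n_clauses):
        k = 3 if rng.random() < p else 2  # mix of 2- and 3-clauses
        chosen = rng.sample(range(1, n_vars + 1), k)
        formula.append(tuple(v if rng.random() < 0.5 else -v
                             for v in chosen))
    return formula
```

Setting p = 0 or p = 1 recovers pure Random 2-SAT and Random 3-SAT respectively, so sweeping p through the region around 0.41 reproduces the interpolation studied in the paper.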
Just enough learning (of association rules)
 In WVU CSEE tech report, 2002. Available from http://tim.menzies.com/pdf/02tar2.pdf
"... An overzealous machine learner can automatically generate large, intricate, theories which can be hard to understand. However, such intricate learning is not necessary in domains that lack complex relationships. A much simpler learner can suffice in domains with narrow funnels; i.e. where most doma ..."
Abstract

Cited by 7 (5 self)
An overzealous machine learner can automatically generate large, intricate theories which can be hard to understand. However, such intricate learning is not necessary in domains that lack complex relationships. A much simpler learner can suffice in domains with narrow funnels; i.e. where most domain variables are controlled by a very small subset. Such a learner is TAR2: a weighted-class minimal contrast-set association rule learner that utilizes confidence-based pruning, but not support-based pruning. TAR2 learns treatments; i.e. constraints that can change an agent’s environment. Treatments take two forms. Controller treatments hold the smallest number of conjunctions that most improve the current state of the system. Monitor treatments hold the smallest number of conjunctions that best detect future faulty system behavior. Such treatments tell an agent what to do (apply the controller) and what to watch for (the monitor conditions) within the current environment. Because TAR2 generates very small theories, our experience has been that users prefer its tiny treatments. The success of such a simple learner suggests that many domains lack complex relationships.
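The treatment idea can be illustrated with a toy single-conjunct scorer (this is not TAR2 itself, just a sketch under assumed names: rows are dicts, and the "treatment" is the one attribute=value constraint whose selected rows have the best class distribution):

```python
def best_treatment(rows, label="class", good="good"):
    """Toy controller-treatment search: pick the attribute=value
    constraint whose selected rows have the highest fraction of the
    preferred class, if it beats the baseline."""
    baseline = sum(r[label] == good for r in rows) / len(rows)
    best, best_score = None, baseline
    candidates = {(a, r[a]) for r in rows for a in r if a != label}
    for attr, val in candidates:
        selected = [r for r in rows if r[attr] == val]
        score = sum(r[label] == good for r in selected) / len(selected)
        if score > best_score:
            best, best_score = (attr, val), score
    return best, best_score
```

A real treatment learner would additionally weight classes, search conjunctions of constraints, and prune by confidence, but even this sketch shows why narrow funnels help: one small constraint can dominate the outcome.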
Understanding Algorithm Performance on an Oversubscribed Scheduling Application
 Journal of Artificial Intelligence Research
, 2006
"... The best performing algorithms for a particular oversubscribed scheduling application, Air Force Satellite Control Network (AFSCN) scheduling, appear to have little in common. Yet, through careful experimentation and modeling of performance in real problem instances, we can relate characteristics of ..."
Abstract

Cited by 6 (3 self)
The best performing algorithms for a particular oversubscribed scheduling application, Air Force Satellite Control Network (AFSCN) scheduling, appear to have little in common. Yet, through careful experimentation and modeling of performance in real problem instances, we can relate characteristics of the best algorithms to characteristics of the application. In particular, we find that plateaus dominate the search spaces (thus favoring algorithms that make larger changes to solutions) and that some randomization in exploration is critical to good performance (due to the lack of gradient information on the plateaus). Based on our explanations of algorithm performance, we develop a new algorithm that combines characteristics of the best performers; the new algorithm’s performance is better than the previous best. We show how hypothesis-driven experimentation and search modeling can both explain algorithm performance and motivate the design of a new algorithm.