Results 1 - 10 of 45
A New Method for Solving Hard Satisfiability Problems
 AAAI
, 1992
Abstract

Cited by 707 (22 self)
We introduce a greedy local search procedure called GSAT for solving propositional satisfiability problems. Our experiments show that this procedure can be used to solve hard, randomly generated problems that are an order of magnitude larger than those that can be handled by more traditional approaches such as the Davis-Putnam procedure or resolution. We also show that GSAT can solve structured satisfiability problems quickly. In particular, we solve encodings of graph coloring problems, N-queens, and Boolean induction. General application strategies and limitations of the approach are also discussed. GSAT is best viewed as a model-finding procedure. Its good performance suggests that it may be advantageous to reformulate reasoning tasks that have traditionally been viewed as theorem-proving problems as model-finding tasks.
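The greedy flip loop the abstract describes can be sketched in a few lines of Python. This is an illustrative reconstruction of GSAT's outer structure only (random restarts around a greedy variable-flip loop), not the authors' implementation; the clause encoding, function name, and parameter defaults are assumptions.

```python
import random

def gsat(clauses, n_vars, max_tries=10, max_flips=100, seed=0):
    """Greedy local search for SAT in the style of GSAT (sketch).

    clauses: list of clauses, each a list of non-zero ints;
             literal v means variable v is true, -v means false.
    Returns a satisfying assignment {var: bool} or None.
    """
    rng = random.Random(seed)

    def n_satisfied(assign):
        return sum(any((lit > 0) == assign[abs(lit)] for lit in c)
                   for c in clauses)

    for _ in range(max_tries):
        # Start each try from a fresh random truth assignment.
        assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if n_satisfied(assign) == len(clauses):
                return assign
            # Greedy step: flip the variable whose flip satisfies
            # the largest number of clauses.
            best_v, best_score = None, -1
            for v in range(1, n_vars + 1):
                assign[v] = not assign[v]
                score = n_satisfied(assign)
                assign[v] = not assign[v]
                if score > best_score:
                    best_v, best_score = v, score
            assign[best_v] = not assign[best_v]
    return None
```

For example, `gsat([[1, 2], [-1, 2], [1, -2]], 2)` searches for a model of (x1 OR x2) AND (NOT x1 OR x2) AND (x1 OR NOT x2), whose only model sets both variables true.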
Pushing the Envelope: Planning, Propositional Logic, and Stochastic Search
, 1996
Abstract

Cited by 548 (31 self)
Planning is a notoriously hard combinatorial search problem. In many interesting domains, current planning algorithms fail to scale up gracefully. By combining a general, stochastic search algorithm and appropriate problem encodings based on propositional logic, we are able to solve hard planning problems many times faster than the best current planning systems. Although stochastic methods have been shown to be very effective on a wide range of scheduling problems, this is the first demonstration of their power on truly challenging classical planning instances. This work also provides a new perspective on representational issues in planning.
Minimizing Conflicts: A Heuristic Repair Method for Constraint-Satisfaction and Scheduling Problems
 J. ARTIFICIAL INTELLIGENCE RESEARCH
, 1993
Abstract

Cited by 430 (6 self)
This paper describes a simple heuristic approach to solving large-scale constraint satisfaction and scheduling problems. In this approach one starts with an inconsistent assignment for a set of variables and searches through the space of possible repairs. The search can be guided by a value-ordering heuristic, the min-conflicts heuristic, that attempts to minimize the number of constraint violations after each step. The heuristic can be used with a variety of different search strategies. We demonstrate empirically that on the n-queens problem, a technique based on this approach performs orders of magnitude better than traditional backtracking techniques. We also describe a scheduling application where the approach has been used successfully. A theoretical analysis is presented both to explain why this method works well on certain types of problems and to predict when it is likely to be most effective.
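The repair loop the abstract describes can be illustrated on n-queens: start from a possibly inconsistent assignment and repeatedly move a conflicted queen to the row that minimizes violations. This is a hedged sketch of the min-conflicts idea, not the paper's code; the function name and step limit are assumptions.

```python
import random

def min_conflicts_queens(n, max_steps=10000, seed=0):
    """Min-conflicts repair for n-queens (illustrative sketch).

    queens[c] = row of the queen in column c, so column conflicts
    are impossible by construction.
    Returns a conflict-free row list, or None if max_steps runs out.
    """
    rng = random.Random(seed)

    def conflicts(queens, col, row):
        # Queens in other columns attacking square (col, row).
        return sum(1 for c in range(n)
                   if c != col and (queens[c] == row or
                                    abs(queens[c] - row) == abs(c - col)))

    queens = [rng.randrange(n) for _ in range(n)]   # inconsistent start
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(queens, c, queens[c]) > 0]
        if not conflicted:
            return queens                            # all constraints satisfied
        col = rng.choice(conflicted)
        # Value-ordering heuristic: pick the row with fewest violations.
        queens[col] = min(range(n), key=lambda r: conflicts(queens, col, r))
    return None
```

A returned solution is guaranteed conflict-free by the termination test; on 8-queens the loop typically converges in a few dozen repairs.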
Genet: A connectionist architecture for solving constraint satisfaction problems by iterative improvement
 In Proceedings of AAAI'94
, 1994
Abstract

Cited by 97 (23 self)
New approaches to solving constraint satisfaction problems using iterative improvement techniques have been found to be successful on certain, very large problems such as the million queens. However, on highly constrained problems it is possible for these methods to get caught in local minima. In this paper we present GENET, a connectionist architecture for solving binary and general constraint satisfaction problems by iterative improvement. GENET incorporates a learning strategy to escape from local minima. Although GENET has been designed to be implemented on VLSI hardware, we present empirical evidence to show that even when simulated on a single processor GENET can outperform existing iterative improvement techniques on hard instances of certain constraint satisfaction problems.
A General Stochastic Approach to Solving Problems with Hard and Soft Constraints
 The Satisfiability Problem: Theory and Applications
, 1996
Abstract

Cited by 51 (1 self)
Many AI problems can be conveniently encoded as discrete constraint satisfaction problems. It is often the case that not all solutions to a CSP are equally desirable; in general, one is interested in a set of "preferred" solutions (for example, solutions that minimize some cost function). Preferences can be encoded by incorporating "soft" constraints in the problem instance. We show how both hard and soft constraints can be handled by encoding problems as instances of weighted MAX-SAT (finding a model that maximizes the sum of the weights of the satisfied clauses that make up a problem instance). We generalize a local-search algorithm for satisfiability to handle weighted MAX-SAT. To demonstrate the effectiveness of our approach, we present experimental results on encodings of a set of well-studied network Steiner-tree problems. This approach turns out to be competitive with some of the best current specialized algorithms developed in operations research.
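The weighted MAX-SAT encoding described above can be sketched with a small greedy local search: hard constraints get a weight larger than the total soft weight, so any maximum-weight model satisfies all of them. The plain greedy flip strategy and the names here are assumptions for illustration, not the authors' algorithm.

```python
import random

def weighted_maxsat(clauses, n_vars, max_flips=200, seed=0):
    """Greedy local search for weighted MAX-SAT (sketch).

    clauses: list of (weight, literals) pairs; a hard constraint is
    given a weight exceeding the sum of all soft weights.
    Returns (best_assignment, best_total_weight) seen during search.
    """
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}

    def total(a):
        # Sum of weights of currently satisfied clauses.
        return sum(w for w, lits in clauses
                   if any((l > 0) == a[abs(l)] for l in lits))

    best, best_w = dict(assign), total(assign)
    for _ in range(max_flips):
        # Flip the variable with the best weight after flipping;
        # sideways and downhill moves are allowed.
        gains = []
        for v in range(1, n_vars + 1):
            assign[v] = not assign[v]
            gains.append((total(assign), v))
            assign[v] = not assign[v]
        w, v = max(gains)
        assign[v] = not assign[v]
        if w > best_w:
            best, best_w = dict(assign), w
    return best, best_w
```

For instance, with a hard clause (x1) of weight 100 and a soft clause (NOT x1) of weight 1, the maximum achievable weight is 100, obtained by setting x1 true.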
Spike: Intelligent scheduling of Hubble Space Telescope observations
 Intelligent Scheduling
, 1994
Solving Problems with Hard and Soft Constraints Using a Stochastic Algorithm for MAX-SAT
, 1995
Abstract

Cited by 42 (3 self)
Stochastic local search is an effective technique for solving certain classes of large, hard propositional satisfiability problems, including propositional encodings of problems such as circuit synthesis and graph coloring (Selman, Levesque, and Mitchell 1992; Selman, Kautz, and Cohen 1994). Many problems of interest to AI and operations research cannot be conveniently encoded as simple satisfiability, because they involve both hard and soft constraints; that is, any solution may have to violate some of the less important constraints. We show how both kinds of constraints can be handled by encoding problems as instances of weighted MAX-SAT (finding a model that maximizes the sum of the weights of the satisfied clauses that make up a problem instance). We generalize our local-search algorithm for satisfiability (GSAT) to handle weighted MAX-SAT, and present experimental results on encodings of the Steiner tree problem, which is a well-studied hard combinatorial search problem. On many...
Solving Constraint Satisfaction Problems Using Neural Networks
, 1991
Abstract

Cited by 39 (13 self)
In this paper, we describe GENET, a generic neural network simulator that can solve general CSPs with finite domains. GENET generates a sparsely connected network for a given CSP with constraints C specified as binary matrices, and simulates the network convergence procedure. In case the network falls into local minima, a heuristic learning rule will be applied to escape from them. The network model lends itself to massively parallel processing. The experimental results of applying GENET to randomly generated CSPs, including very tightly constrained ones, and to the real-life problem of car sequencing will be reported, and an analysis of the effectiveness of GENET will be given.

NETWORK MODEL

The network model is based on the Interactive Activation (IA) model, with modifications to suit the nature of the CSPs as defined at the beginning of this paper. The IA model in its original form can be characterized as weak constraint satisfaction, in which the connections represent the coherence, or compatibility, between the connected nodes. This model was developed for associative information retrieval or pattern matching [11, 12]. However, it is not adequate for solving CSPs in general, for which all the constraints are absolute and none of them should be violated at all. For this purpose, the following modifications have been developed.

1. The nodes in the network are grouped into clusters, with each cluster representing a variable in Z, and the nodes in each cluster represent the values that can be assigned to the variable.
2. Only inhibitory connections are allowed. The inhibitory connections represent the constraints that do not allow the connected nodes to be active (i.e. turned on) simultaneously.
3. The nodes in the same cluster compete with each other in convergence cycles. The node...
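The cluster/inhibitory-connection model just described can be sketched as a single-processor simulation: one value is active per variable cluster, incompatible value pairs carry negative connection weights, and at a local minimum the weights on violated connections are decremented so the network can escape. This is a loose illustration in the spirit of the description, not the GENET system itself; all names are assumptions.

```python
import random

def genet_solve(domains, bad_pairs, max_cycles=10000, seed=0):
    """GENET-style iterative improvement with learning (sketch).

    domains: {var: list of values}
    bad_pairs: set of ((var1, val1), (var2, val2)) incompatible
    combinations, stored in both orderings. Each pair starts with
    connection weight -1; learning decrements violated weights.
    Returns a consistent assignment dict, or None.
    """
    rng = random.Random(seed)
    weight = {p: -1 for p in bad_pairs}
    assign = {v: rng.choice(vals) for v, vals in domains.items()}

    def node_input(var, val):
        # Sum of weights on violated inhibitory connections into (var, val).
        return sum(weight[((var, val), (u, assign[u]))]
                   for u in domains
                   if u != var and ((var, val), (u, assign[u])) in weight)

    for _ in range(max_cycles):
        if all(node_input(v, assign[v]) == 0 for v in domains):
            return assign                      # no constraint violated
        moved = False
        for var in domains:                    # one convergence cycle
            best = max(domains[var], key=lambda val: node_input(var, val))
            if node_input(var, best) > node_input(var, assign[var]):
                assign[var] = best
                moved = True
        if not moved:
            # Local minimum: penalize the currently violated connections.
            for v in domains:
                for u in domains:
                    key = ((v, assign[v]), (u, assign[u]))
                    if v < u and key in weight:
                        weight[key] -= 1
                        weight[key[::-1]] -= 1
    return None
```

As a usage example, a 2-coloring of the path a-b-c is encoded by forbidding equal colors on each edge; the solver should return an assignment with a != b and b != c.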
Minimizing conflicts: a heuristic repair method for constraint satisfaction and scheduling problems
 Artif. Intell
, 1992
Abstract

Cited by 36 (1 self)
Abbreviated Title: "Minimizing Conflicts: A Heuristic Repair Method". This paper describes a simple heuristic approach to solving large-scale constraint satisfaction and scheduling problems. In this approach one starts with an inconsistent assignment for a set of variables and searches through the space of possible repairs. The search can be guided by a value-ordering heuristic, the min-conflicts heuristic, that attempts to minimize the number of constraint violations after each step. The heuristic can be used with a variety of different search strategies. We demonstrate empirically that on the n-queens problem, a technique based on this approach performs orders of magnitude better than traditional backtracking techniques. We also describe a scheduling application where the approach has been used successfully. A theoretical analysis is presented both to explain why this method works well on certain types of problems and to predict when it is likely to be most effective.
Neural Networks for Combinatorial Optimization: A Review of More Than a Decade of Research
, 1999
Abstract

Cited by 34 (0 self)
This article briefly summarizes the work that has been done and presents the current standing of neural networks for combinatorial optimization by considering each of the major classes of combinatorial optimization problems. Areas which have not yet been studied are identified for future research.