## GSAT versus Simulated Annealing (1994)

### Download Links

- [aida.intellektik.informatik.tu-darmstadt.de]
- [kirmes.inferenzsysteme.informatik.tu-darmstadt.de]
- DBLP

### Other Repositories/Bibliography

Venue: Proceedings of the European Conference on Artificial Intelligence

Citations: 14 (5 self)

### BibTeX

```bibtex
@INPROCEEDINGS{Beringer94gsatversus,
  author    = {A. Beringer and G. Aschemann and H. Hoos and A. Weiß},
  title     = {GSAT versus Simulated Annealing},
  booktitle = {Proceedings of the European Conference on Artificial Intelligence},
  year      = {1994},
  pages     = {130--134},
  publisher = {John Wiley \& Sons}
}
```

### Abstract

The question of satisfiability for a given propositional formula arises in many areas of AI. Finding a model for a satisfiable formula is especially important, though known to be NP-complete. Complete algorithms for satisfiability testing exist, such as the Davis-Putnam algorithm, but they often do not construct a satisfying assignment for the formula, are not practically applicable to problems with more than 400 or 500 variables, or in practice take too much time to find a solution. Recently, a very fast (in practice), though incomplete, model-generating procedure, GSAT, has been introduced, and several refined variants have been created. Another method is Simulated Annealing (SA). Both approaches have already been compared, with differing results. We clarify these differences and perform a more elaborate comparison, showing that the performance of an already optimized variant of GSAT and an ordinary SA algorithm is more or less the same, and that attempts to further improve G...
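For orientation, the basic GSAT procedure the abstract refers to can be sketched as follows (a minimal illustration, not the authors' implementation; the clause encoding and parameter names are assumptions):

```python
import random

def gsat(clauses, n_vars, max_flips=100, max_tries=10):
    """Basic GSAT: greedy hill-climbing on the number of satisfied clauses.

    clauses: list of clauses, each a list of signed 1-based literals,
             e.g. [1, -2, 3] means (x1 OR NOT x2 OR x3).
    Returns a satisfying assignment (dict var -> bool) or None.
    """
    def unsat_count(assign):
        return sum(not any((lit > 0) == assign[abs(lit)] for lit in c)
                   for c in clauses)

    for _ in range(max_tries):
        assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if unsat_count(assign) == 0:
                return assign
            # Flip the variable whose flip leaves the fewest unsatisfied clauses.
            best_var, best_score = None, None
            for v in assign:
                assign[v] = not assign[v]
                score = unsat_count(assign)
                assign[v] = not assign[v]
                if best_score is None or score < best_score:
                    best_var, best_score = v, score
            assign[best_var] = not assign[best_var]
        if unsat_count(assign) == 0:
            return assign
    return None
```

The restart loop (`max_tries`) is what lets GSAT escape local minima, as the Gent and Walsh citation context below explains.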

### Citations

3557 | Optimization by simulated annealing
- Kirkpatrick, Gelatt, et al.
- 1983
Citation Context: ...s attracted many researchers because of its simplicity. They investigated and tried to further improve GSAT during the last two years. Another, nearly 30 years old, method is Simulated Annealing (SA) [9], which is often used for optimization and constraint satisfaction problems and can be easily adapted to neural networks. Consequently, it is easy to parallelize. Both approaches have already been com...

2256 | Equations of state calculations by fast computing machines
- Metropolis, Rosenbluth, et al.
- 1953
Citation Context: ...st local minimum was reached (which is equivalent to setting the temperature to 0). The distribution for the choice of the flipping variable was a combination of the so-called Metropolis distribution [10] and gradient descent, if mintemp or maxcycles was reached. We also tested the performance of SA with the Glauber distribution [6], where even the probability for improving flips depends on the score ...
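The Metropolis acceptance rule mentioned in this context can be sketched as follows (a generic textbook formulation, not the paper's exact annealing schedule; the function name is an assumption):

```python
import math
import random

def metropolis_accept(delta_e, temperature, rng=random):
    """Metropolis rule: always accept a flip that does not increase the
    energy (delta_e <= 0); accept a worsening flip with probability
    exp(-delta_e / T)."""
    if delta_e <= 0:
        return True
    if temperature <= 0:
        return False  # T = 0 degenerates to pure gradient descent
    return rng.random() < math.exp(-delta_e / temperature)
```

At high temperature almost every flip is accepted; as T is lowered toward mintemp the rule approaches the gradient descent the context describes.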

1783 | Introduction to the Theory of Neural Computation
- Hertz, Krogh, et al.
- 1991
Citation Context: ...ed to find a global minimum in the limit if the annealing is done in infinitesimally small steps until the temperature really reaches 0 (the distribution then causes gradient descent to be performed) [7]. But this result is not suitable in practice, because the convergence time tends to infinity as the difference between two values of the temperature tends to zero, and we consequently must approximat...

1582 | Neural networks and physical systems with emergent collective computational abilities
- Hopfield
- 1982
Citation Context: ...cribes a neural network, whose binary threshold units correspond to the variables of the energy function, such that the network's stable states correspond exactly to the local minima of that function [8, 12]. We therefore can use a neural network for SA and the search of models of satisfiable propositional formulas [11]. If the network is in a stable state and its energy has a global minimum, then the ac...
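The correspondence between formulas and energy functions described in this context can be illustrated with a minimal sketch (the paper's higher-order Boltzmann machine encoding is more involved; this only shows why models coincide with global minima of the energy):

```python
def clause_energy(clause, assign):
    """Energy contribution of one clause: 0 if satisfied, 1 otherwise.
    clause is a list of signed 1-based literals; assign maps var -> bool."""
    return 0 if any((lit > 0) == assign[abs(lit)] for lit in clause) else 1

def formula_energy(clauses, assign):
    """Total energy = number of unsatisfied clauses, so the global minima
    (energy 0) are exactly the models of the formula."""
    return sum(clause_energy(c, assign) for c in clauses)
```

Any local search that drives this energy to 0 has, by construction, found a satisfying assignment.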

1071 | A computing procedure for quantification theory
- Davis, Putnam
- 1960

679 | A new method for solving hard satisfiability problems
- Selman, Levesque, et al.
- 1992
Citation Context: ...use GSAT with random walk for the comparison, which scales up significantly better than basic GSAT, whereas Spears [17] uses only basic GSAT for the comparison. In addition, [17] uses the results of [16], which report a significantly higher amount of flips per assignment than new experiments with basic GSAT reported in [15], and he compares basic GSAT with a variant of SA performing also random walk, ...

215 | Domain-Independent Extensions to GSAT: Solving Large Structured Satisfiability Problems
- Selman, Kautz
- 1993
Citation Context: ...is to evaluate several tries simultaneously using maxtries processors. But this possibility excludes many variants of GSAT using a kind of memory, e.g. averaging-in and clause weights as described in [14], because those use information about previous unsuccessful tries to choose a new initial assignment or to change the energy surface, respectively. It is also possible to parallelize scoring using N processors a...

199 | Experimental results on the cross-over point in satisfiability problems
- Crawford, Auton
- 1993

136 | Towards an understanding of hill-climbing procedures for SAT
- Gent, Walsh
- 1993
Citation Context: ...up to maxtries times with a new initial random assignment. This gives a possibility to escape from local minima. A unifying algorithm GenSAT for both methods within the GenSAT framework introduced by [5] is given in Figure 1. E denotes the energy function corresponding to the input formula (containing N variables); maxcycles and maxtries are natural-number parameters. hillClimb computes new assignmen...

93 | Time-dependent statistics of the Ising model
- Glauber
- 1963
Citation Context: ... variable was a combination of the so-called Metropolis distribution [10] and gradient descent, if mintemp or maxcycles was reached. We also tested the performance of SA with the Glauber distribution [6], where even the probability for improving flips depends on the score (i.e. a more greedy choice of the variables), but we found the performance of Metropolis to be superior. prob(ΔE, T) = ...
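The difference between the Glauber and Metropolis distributions discussed in this context can be sketched as follows (generic textbook forms; the paper's exact parameterization of `prob(ΔE, T)` is truncated above, so these definitions are an assumption):

```python
import math

def metropolis_prob(delta_e, temperature):
    """Metropolis acceptance probability: min(1, exp(-delta_e / T)).
    Improving flips (delta_e < 0) are always accepted."""
    return min(1.0, math.exp(-delta_e / temperature))

def glauber_prob(delta_e, temperature):
    """Glauber acceptance probability: 1 / (1 + exp(delta_e / T)).
    Unlike Metropolis, even improving flips are accepted with
    probability < 1, depending on the size of the score change."""
    return 1.0 / (1.0 + math.exp(delta_e / temperature))
```

This is the sense in which Glauber makes "even the probability for improving flips depend on the score": a large improvement is accepted almost surely, a marginal one only slightly more often than not.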

43 | Simulated annealing for hard satisfiability problems
- Spears
- 1996
Citation Context: ...mization and constraint satisfaction problems and can be easily adapted to neural networks. Consequently, it is easy to parallelize. Both approaches have already been compared, with different results [15, 17]. At first sight, both comparisons seemed unfair in some points, so we tried to clarify these differences and do a more elaborate comparison of the amount of work done by the two algorithms. We run GS...

16 | A parallel simulated annealing algorithm
- Boissin, Lutton
- 1993
Citation Context: ... change resulting from a certain flip in the current assignment. prob and temperature are described in detail in subsection 3.1. The usual notation for GSAT and SA algorithms can be found in [16] and [17, 2], respectively. procedure GenSAT(E): for i := 1 to maxtries do A := random assignment for variables in E; if E(A) = 0 then return A else for j := 1 to maxcycles do A := hillClimb(E, A, j); if E(A) = 0 ...
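The GenSAT procedure quoted in this context can be rendered as a runnable sketch (the assignment representation and the shape of `hill_climb` are assumptions; the paper instantiates hillClimb differently for GSAT and SA):

```python
import random

def gensat(energy, n_vars, hill_climb, max_tries, max_cycles):
    """GenSAT framework: repeated tries from random initial assignments,
    each try running up to max_cycles hill-climbing steps.

    energy: maps an assignment (dict var -> bool) to the number of
            unsatisfied clauses; 0 means a model was found.
    hill_climb: function (energy, assignment, cycle) -> new assignment.
    """
    for _ in range(max_tries):
        a = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        if energy(a) == 0:
            return a
        for j in range(1, max_cycles + 1):
            a = hill_climb(energy, a, j)
            if energy(a) == 0:
                return a
    return None  # no model found within the given budget
```

Plugging in a greedy best-flip `hill_climb` recovers GSAT; plugging in a temperature-dependent probabilistic flip recovers SA, which is the unification the paper builds its comparison on.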

16 | Symmetric neural networks and logic satisfiability
- Pinkas
- 1991
Citation Context: ...he network's stable states correspond exactly to the local minima of that function [8, 12]. We therefore can use a neural network for SA and the search of models of satisfiable propositional formulas [11]. If the network is in a stable state and its energy has a global minimum, then the activation of its units gives a model for the underlying propositional formula. This is an interesting feature as it...

6 | ICSIM: Initial design of an object-oriented net simulator
- Schmidt
- 1990
Citation Context: ...esults for GSAT are much better than those reported in [15]. For the SA algorithm we used the neural network simulator ICSIM, written in the object-oriented language SATHER, which was developed at ICSI [13]. We simulated a Boltzmann machine with higher-order connections [12]. The units in such a network correspond directly to the variables in the propositional formula, which also determines the type of ...

5 | Logical Inference in Symmetric Connectionist Networks
- Pinkas
- 1992
Citation Context: ...cribes a neural network, whose binary threshold units correspond to the variables of the energy function, such that the network's stable states correspond exactly to the local minima of that function [8, 12]. We therefore can use a neural network for SA and the search of models of satisfiable propositional formulas [11]. If the network is in a stable state and its energy has a global minimum, then the ac...

5 | Local search strategies for satisfiability testing
- Selman, Kautz, et al.
- 1993
Citation Context: ...mization and constraint satisfaction problems and can be easily adapted to neural networks. Consequently, it is easy to parallelize. Both approaches have already been compared, with different results [15, 17]. At first sight, both comparisons seemed unfair in some points, so we tried to clarify these differences and do a more elaborate comparison of the amount of work done by the two algorithms. We run GS...

1 | GSAT and Simulated Annealing -- a comparison
- Beringer, Aschemann, et al.
- 1994
Citation Context: ...ables and set the parameters as follows: ΔT = 0.01, maxtemp = 0.3, mintemp = 0.01, maxtries = 10. We let GSAT and SA do all 10 tries, counting how many assignments they found on average. (A detailed comparison of several schedules can be found in [1].) This way of comparison avoids too much influence of a certain initial assignment. By averaging the amount of work to find an assignment over 10 t...