Results 1-10 of 72
Algorithms for the Satisfiability (SAT) Problem: A Survey
 DIMACS Series in Discrete Mathematics and Theoretical Computer Science
, 1996
Abstract

Cited by 131 (3 self)
The satisfiability (SAT) problem is a core problem in mathematical logic and computing theory. In practice, SAT is fundamental in solving many problems in automated reasoning, computer-aided design, computer-aided manufacturing, machine vision, databases, robotics, integrated circuit design, computer architecture design, and computer network design. Traditional methods treat SAT as a discrete, constrained decision problem. In recent years, many optimization methods, parallel algorithms, and practical techniques have been developed for solving SAT. In this survey, we present a general framework (an algorithm space) that integrates existing SAT algorithms into a unified perspective. We describe sequential and parallel SAT algorithms, including variable splitting, resolution, local search, global optimization, mathematical programming, and practical SAT algorithms. We give a performance evaluation of some existing SAT algorithms. Finally, we provide a set of practical applications of the SAT problem.
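Among the algorithm families the survey lists, local search is the easiest to illustrate concretely. Below is a minimal WalkSAT-style sketch in Python; the clause encoding, function name, and parameters are our own illustrative choices, not the survey's:

```python
import random

def walksat(clauses, n_vars, max_flips=10000, p=0.5, seed=0):
    """Minimal WalkSAT-style local search for CNF SAT.
    clauses: list of clauses; each clause is a list of nonzero ints,
    where literal v means variable |v| is true and -v means it is false."""
    rng = random.Random(seed)
    # Random initial assignment: assign[i] is the truth value of variable i+1.
    assign = [rng.choice([True, False]) for _ in range(n_vars)]

    def satisfied(clause):
        return any((lit > 0) == assign[abs(lit) - 1] for lit in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign  # all clauses satisfied
        clause = rng.choice(unsat)
        if rng.random() < p:
            # Random-walk move: flip a random variable of the clause.
            var = abs(rng.choice(clause))
        else:
            # Greedy move: flip the variable leaving the fewest clauses unsatisfied.
            def broken_after_flip(v):
                assign[v - 1] = not assign[v - 1]
                count = sum(1 for c in clauses if not satisfied(c))
                assign[v - 1] = not assign[v - 1]
                return count
            var = min((abs(lit) for lit in clause), key=broken_after_flip)
        assign[var - 1] = not assign[var - 1]
    return None  # no satisfying assignment found within the flip budget

# (x1 or x2) and (not x1 or x2) and (x1 or not x2) -- only model is x1=x2=True
model = walksat([[1, 2], [-1, 2], [1, -2]], n_vars=2)
```

Because the search is incomplete, returning None does not prove unsatisfiability; complete methods such as variable splitting (DPLL) are needed for that.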
Finding Hard Instances of the Satisfiability Problem: A Survey
, 1997
Abstract

Cited by 119 (1 self)
Finding sets of hard instances of propositional satisfiability is of interest for understanding the complexity of SAT and for experimentally evaluating SAT algorithms. In discussing this, we consider the performance of the most popular SAT algorithms on random problems, the theory of average-case complexity, the threshold phenomenon, known lower bounds for certain classes of algorithms, and the problem of generating hard instances with solutions.
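The threshold phenomenon mentioned above underlies a standard recipe for hard benchmarks: random 3-SAT with a clause-to-variable ratio near the satisfiability threshold, empirically around 4.27 for 3-SAT. A minimal generator sketch under that assumption (our own code, not the paper's):

```python
import random

def random_3sat(n_vars, ratio=4.27, seed=0):
    """Generate a random 3-SAT instance near the satisfiability threshold,
    where instances are empirically hardest for many solvers.
    Each clause draws 3 distinct variables and negates each with probability 1/2."""
    rng = random.Random(seed)
    m = int(ratio * n_vars)  # number of clauses
    clauses = []
    for _ in range(m):
        vs = rng.sample(range(1, n_vars + 1), 3)
        clauses.append([v if rng.random() < 0.5 else -v for v in vs])
    return clauses

inst = random_3sat(50)
```

Ratios well below the threshold tend to produce easily satisfiable instances, and ratios well above it easily refutable ones; the hard region is the narrow band around the threshold.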
On the learnability of discrete distributions
 In The 25th Annual ACM Symposium on Theory of Computing
, 1994
Abstract

Cited by 95 (11 self)
We introduce and investigate a new model of learning probability distributions from independent draws. Our model is inspired by the popular Probably Approximately Correct (PAC) model for learning Boolean functions from labeled examples.
On the sphere-decoding algorithm I. Expected complexity
 IEEE Trans. Sig. Proc
, 2005
Abstract

Cited by 91 (7 self)
Abstract—The problem of finding the least-squares solution to a system of linear equations where the unknown vector is comprised of integers, but the coefficient matrix and given vector are comprised of real numbers, arises in many applications: communications, cryptography, GPS, to name a few. The problem is equivalent to finding the closest lattice point to a given point and is known to be NP-hard. In communications applications, however, the given vector is not arbitrary but rather is an unknown lattice point that has been perturbed by an additive noise vector whose statistical properties are known. Therefore, in this paper, rather than dwell on the worst-case complexity of the integer least-squares problem, we study its expected complexity, averaged over the noise and over the lattice. For the "sphere decoding" algorithm of Fincke and Pohst, we find a closed-form expression for the expected complexity, for both infinite and finite lattices.
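The Fincke-Pohst enumeration analyzed in the paper can be sketched as a depth-first search over integer components with radius pruning, working in the rotated coordinates given by a QR factorization. The following is our own simplified illustration, not the authors' implementation:

```python
import numpy as np

def sphere_decode(A, x, radius):
    """Minimal Fincke-Pohst-style sphere decoder (illustrative sketch):
    among integer vectors z with ||x - A z|| <= radius, return the one
    minimizing ||x - A z||, found by depth-first enumeration with pruning."""
    Q, R = np.linalg.qr(A)          # A = Q R, R upper triangular
    y = Q.T @ x                     # rotated coordinates: ||x - A z|| = ||y - R z||
    n = R.shape[1]
    best = {"z": None, "d2": radius ** 2}

    def search(k, z, d2):
        # d2 is the squared distance accumulated from levels k+1 .. n-1
        if d2 > best["d2"]:
            return                  # prune: partial distance already too large
        if k < 0:
            best["z"], best["d2"] = z.copy(), d2
            return
        # Residual target at level k, given the already-fixed components.
        r = y[k] - sum(R[k, j] * z[j] for j in range(k + 1, n))
        center = r / R[k, k]
        # Integer candidates within the remaining budget around the center.
        span = int(np.ceil(np.sqrt(max(best["d2"] - d2, 0.0)) / abs(R[k, k]))) + 1
        for zk in range(int(np.floor(center)) - span, int(np.ceil(center)) + span + 1):
            z[k] = zk
            search(k - 1, z, d2 + (r - R[k, k] * zk) ** 2)
        z[k] = 0

    search(n - 1, [0] * n, 0.0)
    return None if best["z"] is None else np.array(best["z"])

# Toy 2-D example: recover an integer vector from a noisy observation.
A = np.array([[1.0, 0.3], [0.2, 1.0]])
x = A @ np.array([2.0, -1.0]) + np.array([0.05, -0.03])
z_hat = sphere_decode(A, x, radius=1.0)
```

The paper's point is that although this search is exponential in the worst case, its expected cost over the noise and the lattice can be far smaller.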
Generic-case complexity and decision problems in group theory, preprint
, 2003
Abstract

Cited by 49 (22 self)
We give a precise definition of "generic-case complexity" and show that for a very large class of finitely generated groups, the classical decision problems of group theory (the word, conjugacy, and membership problems) all have linear-time generic-case complexity. We prove such theorems by using the theory of random walks on regular graphs.
Threshold values of random K-SAT from the cavity method, Random Structures &amp; Algorithms 28
, 2006
Average-case computational complexity theory
 Complexity Theory Retrospective II
, 1997
Abstract

Cited by 31 (2 self)
Being NP-complete has been widely interpreted as being computationally intractable. But NP-completeness is a worst-case concept. Some NP-complete problems are "easy on average", but some may not be. How is one to know whether an NP-complete problem is "difficult on average"? The theory of average-case computational complexity, initiated by Levin about ten years ago, is devoted to studying this problem. This paper is an attempt to provide an overview of the main ideas and results in this important new subarea of complexity theory.
Improved fast syndrome based cryptographic hash functions
 in Proceedings of ECRYPT Hash Workshop 2007 (2007). URL: http://wwwroc.inria.fr/secret/Matthieu.Finiasz
Abstract

Cited by 28 (6 self)
Recently, collisions have been exposed for a variety of cryptographic hash functions [19], including some of the most widely used today. Many other hash functions using similar constructions can, however, still be considered secure. Nevertheless, this has drawn attention to the need for new hash function designs. This article presents a family of secure hash functions whose security is directly related to the syndrome decoding problem from the theory of error-correcting codes. Taking into account the analysis by Coron and Joux [4] based on Wagner's generalized birthday algorithm [18], we study the asymptotic security of our functions. We demonstrate that this attack is always exponential in the length of the hash value. We also study the work factor of this attack, along with other attacks from coding theory, in the non-asymptotic range, i.e. for practical values. Accordingly, we propose a few sets of parameters giving good security and either faster hashing or a shorter description of the function. Key Words: cryptographic hash functions, provable security, syndrome decoding, NP-completeness, Wagner's generalized birthday problem.
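The syndrome-based idea can be illustrated with a toy compression function in this style: each message byte selects one column in its segment of a fixed binary matrix, and the digest is the XOR (syndrome) of the selected columns, so a collision is exactly a pair of distinct low-weight column selections with equal syndrome. All parameters below are our own, deliberately tiny, and insecure:

```python
import random

# Toy parameters (illustrative only; far too small to be secure).
R = 64      # bits of output / rows of the binary matrix H
W = 8       # number of columns XORed per compression (the "weight")
SEG = 256   # columns per segment, indexed by one message byte

# Fixed random binary matrix H, one R-bit integer per column.
_rng = random.Random(42)
H = [[_rng.getrandbits(R) for _ in range(SEG)] for _ in range(W)]

def compress(block):
    """Toy syndrome-based compression: byte i of the block selects one
    column in segment i of H; the output is the XOR of the W selected
    columns. Finding a collision amounts to syndrome decoding: two
    distinct selections of W columns with the same XOR."""
    assert len(block) == W
    s = 0
    for i, byte in enumerate(block):
        s ^= H[i][byte]
    return s

digest = compress(bytes(range(W)))
```

Real parameter sets make R and the segment structure large enough that the best known syndrome decoding and generalized birthday attacks remain exponential.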
Smoothed Analysis of Three Combinatorial Problems
 Proc. of the 28th Int. Symp. on Mathematical Foundations of Computer Science (MFCS), volume 2747 of Lecture Notes in Computer Science
, 2003
Abstract

Cited by 23 (1 self)
Smoothed analysis combines elements of worst-case and average-case analysis. For an instance x, the smoothed complexity is the average complexity of an instance obtained from x by a perturbation. The smoothed complexity of a problem is the worst smoothed complexity of any instance. Spielman and Teng introduced this notion for continuous problems. We apply the concept to combinatorial problems and study the smoothed complexity of three classical discrete problems: quicksort, left-to-right maxima counting, and shortest paths. This opens a vast field of nice analyses (using, for example, generating functions in the discrete case), which should lead to a better understanding of the complexity landscapes of algorithms.
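A perturbation model for discrete instances can be sketched for left-to-right maxima counting: mark each position with probability p and randomly permute the marked elements among themselves (a partial-permutation perturbation). The concrete parameters below are our own illustrative choices:

```python
import random

def ltr_maxima(seq):
    """Count left-to-right maxima: positions whose value exceeds all earlier ones."""
    count, best = 0, float("-inf")
    for v in seq:
        if v > best:
            count, best = count + 1, v
    return count

def smoothed_estimate(seq, p=0.25, trials=200, seed=0):
    """Estimate the smoothed cost of `seq`: each position is marked with
    probability p, the marked elements are randomly permuted among
    themselves, and the cost is averaged over many such perturbations."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        s = list(seq)
        marked = [i for i in range(len(s)) if rng.random() < p]
        vals = [s[i] for i in marked]
        rng.shuffle(vals)
        for i, v in zip(marked, vals):
            s[i] = v
        total += ltr_maxima(s)
    return total / trials

n = 100
worst = list(range(n))           # sorted input: all n positions are maxima
avg = smoothed_estimate(worst)   # perturbation destroys most of them
```

The worst-case input has cost n, while the perturbed average is typically far smaller; the smoothed complexity of the problem is the maximum of this average over all starting instances.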