Results 1–10 of 110
Algorithms for the Satisfiability (SAT) Problem: A Survey
DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 1996
Abstract

Cited by 131 (3 self)
The satisfiability (SAT) problem is a core problem in mathematical logic and computing theory. In practice, SAT is fundamental in solving many problems in automated reasoning, computer-aided design, computer-aided manufacturing, machine vision, databases, robotics, integrated circuit design, computer architecture design, and computer network design. Traditional methods treat SAT as a discrete, constrained decision problem. In recent years, many optimization methods, parallel algorithms, and practical techniques have been developed for solving SAT. In this survey, we present a general framework (an algorithm space) that integrates existing SAT algorithms into a unified perspective. We describe sequential and parallel SAT algorithms, including variable splitting, resolution, local search, global optimization, mathematical programming, and practical SAT algorithms. We give a performance evaluation of some existing SAT algorithms. Finally, we provide a set of practical applications of the sat...
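To make the variable-splitting family of algorithms the survey mentions concrete, here is a minimal DPLL-style decision procedure. This is a toy sketch of our own, not code from the survey: clauses are lists of signed integers, unit propagation simplifies the formula, and the algorithm branches on a remaining literal.

```python
def dpll(clauses):
    """Decide satisfiability of a CNF formula given as a list of clauses,
    each clause a list of nonzero ints (v = variable v true, -v = false).
    A toy variable-splitting solver: unit propagation, then branching."""
    clauses = [list(c) for c in clauses]
    # Unit propagation: repeatedly commit to literals forced by unit clauses.
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        simplified = []
        for c in clauses:
            if lit in c:
                continue                      # clause already satisfied
            if -lit in c:
                c = [x for x in c if x != -lit]
                if not c:
                    return False              # empty clause: contradiction
            simplified.append(c)
        clauses = simplified
    if not clauses:
        return True                           # every clause satisfied
    # Split: try both truth values of the first remaining variable.
    v = clauses[0][0]
    return dpll(clauses + [[v]]) or dpll(clauses + [[-v]])
```

On (x1 ∨ x2)(¬x1 ∨ x2)(¬x2) this returns False; practical solvers add clause learning, branching heuristics, and watched literals on top of this skeleton.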
Finding Hard Instances of the Satisfiability Problem: A Survey
1997
Abstract

Cited by 119 (1 self)
. Finding sets of hard instances of propositional satisfiability is of interest for understanding the complexity of SAT, and for experimentally evaluating SAT algorithms. In discussing this we consider the performance of the most popular SAT algorithms on random problems, the theory of average case complexity, the threshold phenomenon, known lower bounds for certain classes of algorithms, and the problem of generating hard instances with solutions.
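The threshold phenomenon discussed above suggests the standard recipe for generating empirically hard instances: sample random k-SAT at a clause-to-variable ratio near the satisfiability threshold (around 4.27 for 3-SAT). A minimal sketch of our own (function name and defaults are ours, not the survey's):

```python
import random

def random_ksat(n_vars, ratio=4.27, k=3, seed=None):
    """Sample a random k-SAT instance with n_vars variables and
    round(ratio * n_vars) clauses.  A ratio near 4.27 sits close to the
    conjectured 3-SAT satisfiability threshold, where random instances
    are empirically hardest for complete solvers."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(round(ratio * n_vars)):
        chosen = rng.sample(range(1, n_vars + 1), k)   # k distinct variables
        clauses.append([v if rng.random() < 0.5 else -v for v in chosen])
    return clauses
```

Ratios well below the threshold give almost surely satisfiable (easy) formulas, ratios well above give almost surely unsatisfiable (easy) ones; the hard region concentrates near the threshold.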
On the learnability of discrete distributions
In The 25th Annual ACM Symposium on Theory of Computing, 1994
Abstract

Cited by 95 (11 self)
We introduce and investigate a new model of learning probability distributions from independent draws. Our model is inspired by the popular Probably Approximately Correct (PAC) model for learning Boolean functions from labeled ...
Learning Simple Concepts Under Simple Distributions
SIAM Journal on Computing, 1991
Abstract

Cited by 56 (3 self)
We aim at developing a learning theory where "simple" concepts are easily learnable. In Valiant's learning model, many concepts turn out to be too hard (like NP-hard) to learn. Relatively few concept classes have been shown to be learnable in polynomial time. In daily life, it seems that the things we care to learn are usually learnable. To model the intuitive notion of learning more closely, we do not require that the learning algorithm learn (polynomially) under all distributions, but only under all simple distributions. A distribution is simple if it is dominated by an enumerable distrib...
Security Preserving Amplification of Hardness
FOCS, 1990
Abstract

Cited by 54 (10 self)
We consider the task of transforming a weak one-way function (which may be easily inverted on all but a polynomial fraction of the range) into a strong one-way function (which can be easily inverted only on a negligible fraction of the range). The previously known transformation [Yao 82] does not preserve the security (i.e., the running time of the inverting algorithm) within any polynomial. Its resulting function F(x) applies the weak one-way function to many small pieces of the input (of length |x|^ε, ε < 1). Consequently, the function can be inverted for reasonable input lengths by exhaustive search. Using random walks on constructive expanders, we transform any regular (e.g., one-to-one) weak one-way function into a strong one, while preserving security. The resulting function F(x) applies the weak one-way function f to strings of length Θ(|x|). Our security-preserving constructions yield efficient pseudorandom generators and signatures based on any regular one-way function.
List decoding using the XOR lemma
Electronic Colloquium on Computational Complexity, 2003
Abstract

Cited by 34 (4 self)
We show that Yao's XOR Lemma, and its essentially equivalent rephrasing as a Direct Product Lemma, can be reinterpreted as a way of obtaining error-correcting codes with good list-decoding algorithms from error-correcting codes having weak unique-decoding algorithms. To get codes with good rate and efficient list-decoding algorithms, one needs a proof of the Direct Product Lemma that, respectively, is strongly derandomized and uses very small advice. We show how to reduce advice in Impagliazzo's proof of the Direct Product Lemma for pairwise-independent inputs, which leads to error-correcting codes with O(n^2) encoding length, Õ(n^2) encoding time, and probabilistic Õ(n) list-decoding time. (Note that the decoding time is sublinear in the length of the encoding.) Back in complexity theory, our advice-efficient proof of Impagliazzo's "hard-core set" results yields a (weak) uniform version of O'Donnell's results on amplification of hardness in NP. We show that if there is a problem in NP that cannot be solved by BPP algorithms on more than a 1 − 1/(log n)^c fraction of inputs, then there is a problem in NP that cannot be solved by BPP algorithms on more than a 3/4 + 1/(log n)^c fraction of inputs, where c > 0 is an absolute constant.
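The code-theoretic reading can be illustrated in miniature: treat the truth table of a Boolean function as the message, and let each codeword position hold the XOR of the function's values on a t-tuple of inputs. The sketch below is our own toy code (not from the paper); with t = 2 it exhibits the quadratic encoding length mentioned above.

```python
from itertools import product

def xor_encode(table, t):
    """XOR-code from Yao's XOR Lemma, in miniature: the message is the
    truth table of f (a list of bits); codeword position (x1, ..., xt)
    holds f(x1) XOR ... XOR f(xt).  The length grows as len(table)**t,
    i.e. quadratically for t = 2."""
    codeword = []
    for idxs in product(range(len(table)), repeat=t):
        bit = 0
        for i in idxs:
            bit ^= table[i]                   # accumulate the XOR
        codeword.append(bit)
    return codeword
```

For instance, the one-bit identity function with table [0, 1] and t = 2 encodes to [0, 1, 1, 0].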
The complexity of decision versus search
SIAM Journal on Computing, 1994
Abstract

Cited by 33 (1 self)
A basic question about NP is whether or not search reduces in polynomial time to decision. We indicate that the answer is negative: under a complexity assumption (that deterministic and nondeterministic double-exponential time are unequal) we construct a language in NP for which search does not reduce to decision. These ideas extend in a natural way to interactive proofs and program checking. Under similar assumptions we present languages in NP for which it is harder to prove membership interactively than it is to decide this membership, and languages in NP which are not checkable. Keywords: NP-completeness, self-reducibility, interactive proofs, program checking, sparse sets, ...
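For contrast, the standard self-reducibility argument that does reduce search to decision for NP-complete problems such as SAT can be sketched as follows. This is our own toy code; `sat_oracle` stands for a hypothetical yes/no decision procedure supplied by the caller.

```python
def find_assignment(formula, n_vars, sat_oracle):
    """Search-to-decision self-reduction for SAT: recover a satisfying
    assignment using only a yes/no oracle.  formula is a list of clauses
    (lists of signed ints); returns a list of chosen literals, or None."""
    if not sat_oracle(formula):
        return None
    chosen = []
    for v in range(1, n_vars + 1):
        pinned = formula + [[v]]              # try fixing variable v to True
        if sat_oracle(pinned):
            formula = pinned
            chosen.append(v)
        else:
            formula = formula + [[-v]]        # otherwise v must be False
            chosen.append(-v)
    return chosen
```

With n variables this makes at most n + 1 oracle calls; the paper's point is that for some languages in NP, under the stated assumption, no polynomial-time analogue of this trick exists.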
Threshold values of random K-SAT from the cavity method
Random Structures & Algorithms 28, 2006
Learning with Restricted Focus of Attention
1997
Abstract

Cited by 33 (2 self)
We consider learning tasks in which the learner faces restrictions on the amount of information he can extract from each example he encounters. We introduce a formal framework for the analysis of such scenarios. We call it RFA (Restricted Focus of Attention) learning. While being a natural refinement of the PAC learning model, some of the fundamental PAC-learning results and techniques fail in the RFA paradigm: learnability in the RFA model is no longer characterized by the VC dimension, and many PAC learning algorithms are not applicable in the RFA setting. Hence, the RFA formulation reflects the need for new techniques and tools to cope with some fundamental constraints of realistic learning problems. In this work we also present some paradigms and algorithms that may serve as a first step towards answering this need. Two main types of restrictions are considered here. In the stronger one, called k-RFA, only k of the n attributes of each example are revealed to the learner, while in the weaker one, called k-wRFA, the restriction is made on the size of each observation (k bits), and no restriction is made on how the observations are extracted from the examples. For the stronger k-RFA restriction we develop a general technique for composing efficient k-RFA algorithms, and apply it to deduce, for instance, the efficient k-RFA learnability of k-DNF formulas and the efficient 1-RFA learnability of axis-aligned rectangles in the Euclidean space R^n. We also prove the k-RFA learnability of richer classes of Boolean functions (such as k-decision lists) with respect to a given distribution, and the efficient (n − 1)-RFA learnability (for fixed n), under product distributions, of classes of subsets of R^n which are defined by mild surfaces. ...
Average-case computational complexity theory
Complexity Theory Retrospective II, 1997
Abstract

Cited by 31 (2 self)
Being NP-complete has been widely interpreted as being computationally intractable. But NP-completeness is a worst-case concept. Some NP-complete problems are "easy on average", but some may not be. How is one to know whether an NP-complete problem is "difficult on average"? The theory of average-case computational complexity, initiated by Levin about ten years ago, is devoted to studying this problem. This paper is an attempt to provide an overview of the main ideas and results in this important new subarea of complexity theory.