Results 1–10 of 29
Easiness Assumptions and Hardness Tests: Trading Time for Zero Error
 Journal of Computer and System Sciences
, 2000
Abstract

Cited by 44 (2 self)
We propose a new approach towards derandomization in the uniform setting, where it is computationally hard to find possible mistakes in the simulation of a given probabilistic algorithm. The approach consists in combining both easiness and hardness complexity assumptions: if a derandomization method based on an easiness assumption fails, then we obtain a certain hardness test that can be used to remove error in BPP algorithms. As an application, we prove that every RP algorithm can be simulated by a zero-error probabilistic algorithm, running in expected subexponential time, that appears correct infinitely often (i.o.) to every efficient adversary. A similar result by Impagliazzo and Wigderson (FOCS'98) states that BPP allows deterministic subexponential-time simulations that appear correct i.o. with respect to any efficiently samplable distribution, under the assumption that EXP ≠ BPP; in contrast, our result does not rely on any unproven assumptions. As another application of our...
Power from Random Strings
 IN PROCEEDINGS OF THE 43RD IEEE SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE
, 2002
Abstract

Cited by 37 (15 self)
We show that sets consisting of strings of high Kolmogorov complexity provide examples of sets that are complete for several complexity classes under probabilistic and nonuniform reductions. These sets are provably not complete under the usual many-one reductions. Let
NP-complete problems and physical reality
 ACM SIGACT News Complexity Theory Column, March. ECCC
, 2005
Abstract

Cited by 33 (5 self)
Can NP-complete problems be solved efficiently in the physical universe? I survey proposals including soap bubbles, protein folding, quantum computing, quantum advice, quantum adiabatic algorithms, quantum-mechanical nonlinearities, hidden variables, relativistic time dilation, analog computing, Malament-Hogarth spacetimes, quantum gravity, closed timelike curves, and “anthropic computing.” The section on soap bubbles even includes some “experimental” results. While I do not believe that any of the proposals will let us solve NP-complete problems efficiently, I argue that by studying them, we can learn something not only about computation but also about physics.
When Worlds Collide: Derandomization, Lower Bounds, and Kolmogorov Complexity
 In Proc. 29th ACM Symposium on Theory of Computing
, 1997
Abstract

Cited by 20 (5 self)
This paper has the following goals: to survey some of the recent developments in the field of derandomization; to introduce a new notion of time-bounded Kolmogorov complexity (KT), and show that it provides a useful tool for understanding advances in derandomization and for putting various results in context; to illustrate the usefulness of KT by answering a question that has been posed in the literature; and to pose some promising directions for future research.
Improving Exhaustive Search Implies Superpolynomial Lower Bounds
, 2009
Abstract

Cited by 18 (4 self)
The P vs NP problem arose from the question of whether exhaustive search is necessary for problems with short verifiable solutions. We do not know if even a slight algorithmic improvement over exhaustive search is universally possible for all NP problems, and to date no major consequences have been derived from the assumption that an improvement exists. We show that there are natural NP and BPP problems for which minor algorithmic improvements over the trivial deterministic simulation already entail lower bounds such as NEXP ⊈ P/poly and LOGSPACE ≠ NP. These results are especially interesting given that similar improvements have been found for many other hard problems. Optimistically, one might hope our results suggest a new path to lower bounds; pessimistically, they show that carrying out the seemingly modest program of finding slightly better algorithms for all search problems may be extremely difficult (if not impossible). We also prove unconditional superpolynomial time-space lower bounds for improving on exhaustive search: there is a problem verifiable with k(n)-length witnesses in O(n^a) time (for some a and some function k(n) ≤ n) that cannot be solved in k(n)^c · n^{a+o(1)} time and k(n)^c · n^{o(1)} space, for every c ≥ 1. While such problems can always be solved by exhaustive search in O(2^{k(n)} · n^a) time and O(k(n) + n^a) space, we can prove a superpolynomial lower bound in the parameter k(n) when space usage is restricted.
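The trivial exhaustive-search baseline the abstract refers to can be sketched directly: given any verifier that runs in O(n^a) time on k(n)-bit witnesses, the deterministic simulation simply enumerates all 2^{k(n)} candidate witnesses. A minimal Python illustration (the subset-sum verifier and all names here are hypothetical stand-ins, not from the paper):

```python
from itertools import product

def verify(instance, witness):
    """Toy polynomial-time verifier: does the 0/1 witness select a
    subset of the numbers summing to the target?"""
    nums, target = instance
    return sum(x for x, bit in zip(nums, witness) if bit) == target

def exhaustive_search(instance, k):
    """The trivial O(2^k * poly(n))-time simulation: try every k-bit witness."""
    for witness in product((0, 1), repeat=k):
        if verify(instance, witness):
            return witness
    return None  # no witness exists

# First accepting witness (in lexicographic order) for 5 + 11 = 16.
print(exhaustive_search(([3, 5, 8, 11], 16), 4))  # → (0, 1, 0, 1)
```

The paper's point is precisely that beating this enumeration, even slightly in time or space, would already imply superpolynomial lower bounds.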
Derandomization and Distinguishing Complexity
, 2003
Abstract

Cited by 11 (6 self)
We continue an investigation of resource-bounded Kolmogorov complexity and derandomization techniques begun in [2, 3].
Minimizing DNF Formulas and AC^0 Circuits Given a Truth Table
 IN PROCEEDINGS OF THE 21ST ANNUAL IEEE CONFERENCE ON COMPUTATIONAL COMPLEXITY
, 2006
Abstract

Cited by 10 (0 self)
For circuit classes R, the fundamental computational problem MinR asks for the minimum R-size of a Boolean function presented as a truth table. Prominent examples of this problem include MinDNF, which asks whether a given Boolean function presented as a truth table has a k-term DNF, and MinCircuit (also called MCSP), which asks whether a Boolean function presented as a truth table has a size-k Boolean circuit. We present a new reduction proving that MinDNF is NP-complete. It is significantly simpler than the known reduction of Masek [30], which is from Circuit-SAT. We then give a more complex reduction, yielding the result that MinDNF cannot be approximated to within a factor smaller than (log N)^γ, for some constant γ > 0, assuming that NP is not contained in quasipolynomial time. The standard greedy algorithm for Set Cover is often used in practice to approximate MinDNF. The question of whether MinDNF can be approximated to within a factor of o(log N) remains open, but we construct an instance of MinDNF on which the solution produced by the greedy algorithm is Ω(log N) larger than optimal. Finally, we turn to the question of approximating circuit size for slightly more general classes of circuits. DNF formulas are depth-two circuits of AND and OR gates. Depth-d circuits are denoted by AC^0_d. We show that it is hard to approximate the size of AC^0_d circuits (for large enough d) under cryptographic assumptions.
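The greedy Set Cover heuristic mentioned in the abstract can be sketched for MinDNF as follows: the true points of the function are the elements to cover, the implicants (terms covering no false point) are the available sets, and each round picks the term covering the most uncovered true points. A small illustrative Python sketch (the function names and the brute-force enumeration of all 3^n terms are my own, feasible only for tiny n):

```python
from itertools import product

def greedy_dnf(n, truth):
    """Greedy Set-Cover heuristic for approximating MinDNF (sketch).
    `truth` maps each n-bit input tuple to 0/1; a term assigns each
    variable 0, 1, or '*' (don't care)."""
    ones = {x for x, v in truth.items() if v}
    zeros = set(truth) - ones

    def covers(term, x):
        return all(t == '*' or t == b for t, b in zip(term, x))

    # Implicants: terms that cover no false point of the function.
    implicants = [t for t in product((0, 1, '*'), repeat=n)
                  if not any(covers(t, z) for z in zeros)]
    cover, uncovered = [], set(ones)
    while uncovered:
        # Greedy step: pick the implicant covering the most uncovered ones.
        best = max(implicants, key=lambda t: sum(covers(t, x) for x in uncovered))
        cover.append(best)
        uncovered -= {x for x in uncovered if covers(best, x)}
    return cover

# XOR on 2 variables needs two terms: x1'x2 + x1x2'.
tt = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(greedy_dnf(2, tt))  # → [(0, 1), (1, 0)]
```

The paper's Ω(log N) gap instance shows that this heuristic, though standard in practice, can be logarithmically worse than the optimal DNF.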
Complexity of twolevel logic minimization
 IEEE Transactions on ComputerAided Design of Integrated Circuits and Systems
Abstract

Cited by 9 (0 self)
Abstract—The complexity of two-level logic minimization is a topic of interest to both computer-aided design (CAD) specialists and computer science theoreticians. In the logic synthesis community, two-level logic minimization forms the foundation for more complex optimization procedures that have significant real-world impact. At the same time, the computational complexity of two-level logic minimization has posed challenges since the beginning of the field in the 1960s; indeed, some central questions have been resolved only within the last few years, and others remain open. This recent activity has classified some logic optimization problems of high practical relevance, such as finding the minimal sum-of-products (SOP) form and maximal term expansion and reduction. This paper surveys progress in the field with self-contained expositions of fundamental early results, an account of the recent advances, and some new classifications. It includes an introduction to the relevant concepts and terminology from computational complexity, as well as a discussion of the major remaining open problems in the complexity of logic minimization. Index Terms—Computational complexity, logic design, logic minimization, two-level logic.
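The maximal term expansion mentioned in the abstract can be made concrete: a product term is expanded by dropping literals one at a time, as long as the enlarged term remains an implicant of the function. A toy Python sketch (the representation and names are hypothetical; real two-level minimizers such as ESPRESSO work quite differently and at scale):

```python
from itertools import product

def is_implicant(term, f, n):
    """A term (tuple over {0,1,'*'}) is an implicant of f if every
    assignment it covers is a true point of f."""
    return all(f(x) for x in product((0, 1), repeat=n)
               if all(t == '*' or t == b for t, b in zip(term, x)))

def expand(term, f, n):
    """Maximal term expansion (sketch): greedily drop literals
    (set positions to '*') while the term remains an implicant."""
    term = list(term)
    for i in range(n):
        if term[i] != '*':
            trial = term[:i] + ['*'] + term[i + 1:]
            if is_implicant(tuple(trial), f, n):
                term = trial
    return tuple(term)

# f = x1 OR x2: the minterm x1·x2 = (1, 1) expands to the single-literal
# term x2, i.e. ('*', 1), since dropping x1 is tried first.
f = lambda x: x[0] or x[1]
print(expand((1, 1), f, 2))  # → ('*', 1)
```

Expansion enlarges terms toward prime implicants; the survey's point is that deciding how far such local moves can be pushed is itself a source of hard complexity questions.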
Making Hard Problems Harder
, 2005
Abstract

Cited by 8 (0 self)
We present a general approach to the hoary problem of (im)proving circuit lower bounds. We define notions of hardness condensing and hardness extraction, in analogy to the corresponding notions from the computational theory of randomness. A hardness condenser is a procedure that takes in a Boolean function as input, as well as an advice string, and outputs a Boolean function on a smaller number of bits which has greater hardness when measured in terms of input length. A hardness extractor takes in a Boolean function as input, as well as an advice string, and outputs a Boolean function defined on a smaller number of bits which has close to maximum hardness. We prove several positive and negative results about these objects. First, we observe that hardness-based pseudorandom generators can be used to extract deterministic hardness from nondeterministic hardness. We derive several consequences of this observation. Among other results, we show that if E has exponential nondeterministic hardness, then E with linear advice has close to maximum deterministic hardness. We demonstrate a rare downward closure result: there is δ > 0 such that E with subexponential advice is contained in nonuniform space 2^{δn} if and only if there is k > 0 such that P with quadratic advice can be approximated in nonuniform space n^k. Next, we consider limitations on natural models of hardness condensing and extraction. We show lower bounds on the length of the advice required for hardness condensing in a very general model of “relativizing” condensers. We show that nontrivial black-box extraction of deterministic hardness from deterministic hardness is essentially impossible. Finally, we prove positive results on hardness condensing in certain special cases. We show how to condense hardness from a biased function without any advice, using a hashing technique. We also give a hardness condenser without advice from average-case hardness to worst-case hardness. Our technique involves a connection between hardness condensing and certain kinds of explicit constructions of covering codes.