Results 1 – 10 of 32
NP-complete problems and physical reality
 ACM SIGACT News Complexity Theory Column. ECCC
, March 2005
"... Can NPcomplete problems be solved efficiently in the physical universe? I survey proposals including soap bubbles, protein folding, quantum computing, quantum advice, quantum adiabatic algorithms, quantummechanical nonlinearities, hidden variables, relativistic time dilation, analog computing, Mal ..."
Abstract

Cited by 53 (5 self)
Can NP-complete problems be solved efficiently in the physical universe? I survey proposals including soap bubbles, protein folding, quantum computing, quantum advice, quantum adiabatic algorithms, quantum-mechanical nonlinearities, hidden variables, relativistic time dilation, analog computing, Malament-Hogarth spacetimes, quantum gravity, closed timelike curves, and “anthropic computing.” The section on soap bubbles even includes some “experimental” results. While I do not believe that any of the proposals will let us solve NP-complete problems efficiently, I argue that by studying them, we can learn something not only about computation but also about physics.
Power from Random Strings
 In Proceedings of the 43rd IEEE Symposium on Foundations of Computer Science
, 2002
"... We show that sets consisting of strings of high Kolmogorov complexity provide examples of sets that are complete for several complexity classes under probabilistic and nonuniform reductions. These sets are provably not complete under the usual manyone reductions. Let ..."
Abstract

Cited by 43 (17 self)
We show that sets consisting of strings of high Kolmogorov complexity provide examples of sets that are complete for several complexity classes under probabilistic and nonuniform reductions. These sets are provably not complete under the usual many-one reductions. Let ...
Easiness Assumptions and Hardness Tests: Trading Time for Zero Error
 Journal of Computer and System Sciences
, 2000
"... We propose a new approach towards derandomization in the uniform setting, where it is computationally hard to nd possible mistakes in the simulation of a given probabilistic algorithm. The approach consists in combining both easiness and hardness complexity assumptions: if a derandomization metho ..."
Abstract

Cited by 40 (1 self)
We propose a new approach towards derandomization in the uniform setting, where it is computationally hard to find possible mistakes in the simulation of a given probabilistic algorithm. The approach consists in combining both easiness and hardness complexity assumptions: if a derandomization method based on an easiness assumption fails, then we obtain a certain hardness test that can be used to remove error in BPP algorithms. As an application, we prove that every RP algorithm can be simulated by a zero-error probabilistic algorithm, running in expected subexponential time, that appears correct infinitely often (i.o.) to every efficient adversary. A similar result by Impagliazzo and Wigderson (FOCS'98) states that BPP allows deterministic subexponential-time simulations that appear correct with respect to any efficiently samplable distribution i.o., under the assumption that EXP ≠ BPP; in contrast, our result does not rely on any unproven assumptions. As another application of our...
Improving Exhaustive Search Implies Superpolynomial Lower Bounds
, 2009
"... The P vs NP problem arose from the question of whether exhaustive search is necessary for problems with short verifiable solutions. We do not know if even a slight algorithmic improvement over exhaustive search is universally possible for all NP problems, and to date no major consequences have been ..."
Abstract

Cited by 34 (6 self)
The P vs NP problem arose from the question of whether exhaustive search is necessary for problems with short verifiable solutions. We do not know if even a slight algorithmic improvement over exhaustive search is universally possible for all NP problems, and to date no major consequences have been derived from the assumption that an improvement exists. We show that there are natural NP and BPP problems for which minor algorithmic improvements over the trivial deterministic simulation already entail lower bounds such as NEXP ⊄ P/poly and LOGSPACE ≠ NP. These results are especially interesting given that similar improvements have been found for many other hard problems. Optimistically, one might hope our results suggest a new path to lower bounds; pessimistically, they show that carrying out the seemingly modest program of finding slightly better algorithms for all search problems may be extremely difficult (if not impossible). We also prove unconditional superpolynomial time-space lower bounds for improving on exhaustive search: there is a problem verifiable with k(n)-length witnesses in O(n^a) time (for some a and some function k(n) ≤ n) that cannot be solved in k(n)^c · n^(a+o(1)) time and k(n)^c · n^(o(1)) space, for every c ≥ 1. While such problems can always be solved by exhaustive search in O(2^(k(n)) · n^a) time and O(k(n) + n^a) space, we can prove a superpolynomial lower bound in the parameter k(n) when space usage is restricted.
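To make the baseline concrete, here is a minimal sketch of the trivial exhaustive search this abstract takes as its starting point: deciding satisfiability of a CNF formula by trying all 2^n assignments. The DIMACS-style clause encoding and the function name are illustrative choices, not taken from the paper.

```python
from itertools import product

def brute_force_sat(clauses, n):
    """Exhaustive search over all 2^n assignments for a CNF formula.

    `clauses` is a list of clauses; each clause is a list of nonzero ints,
    where literal i means "variable i is true" and -i means "variable i is
    false" (DIMACS-style). Returns a satisfying assignment as a dict, or
    None. This is the trivial 2^n * poly baseline the abstract refers to.
    """
    for bits in product([False, True], repeat=n):
        assign = {i + 1: bits[i] for i in range(n)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return assign
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
print(brute_force_sat(clauses, 3))  # → {1: False, 2: True, 3: False}
```

Even a modest improvement over this loop, e.g. time 2^(0.99 n), is exactly the kind of algorithmic progress the paper shows would already imply superpolynomial lower bounds.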
When Worlds Collide: Derandomization, Lower Bounds, and Kolmogorov Complexity
 In Proc. 29th ACM Symposium on Theory of Computing
, 1997
"... This paper has the following goals:  To survey some of the recent developments in the field of derandomization.  To introduce a new notion of timebounded Kolmogorov complexity (KT), and show that it provides a useful tool for understanding advances in derandomization, and for putting vario ..."
Abstract

Cited by 17 (5 self)
This paper has the following goals:
- To survey some of the recent developments in the field of derandomization.
- To introduce a new notion of time-bounded Kolmogorov complexity (KT), and show that it provides a useful tool for understanding advances in derandomization, and for putting various results in context.
- To illustrate the usefulness of KT, by answering a question that has been posed in the literature, and
- To pose some promising directions for future research.
Complexity of two-level logic minimization
 IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
"... Abstract—The complexity of twolevel logic minimization is a topic of interest to both computeraided design (CAD) specialists and computer science theoreticians. In the logic synthesis community, twolevel logic minimization forms the foundation for more complex optimization procedures that have si ..."
Abstract

Cited by 15 (1 self)
Abstract—The complexity of two-level logic minimization is a topic of interest to both computer-aided design (CAD) specialists and computer science theoreticians. In the logic synthesis community, two-level logic minimization forms the foundation for more complex optimization procedures that have significant real-world impact. At the same time, the computational complexity of two-level logic minimization has posed challenges since the beginning of the field in the 1960s; indeed, some central questions have been resolved only within the last few years, and others remain open. This recent activity has classified some logic optimization problems of high practical relevance, such as finding the minimal sum-of-products (SOP) form and maximal term expansion and reduction. This paper surveys progress in the field with self-contained expositions of fundamental early results, an account of the recent advances, and some new classifications. It includes an introduction to the relevant concepts and terminology from computational complexity, as well as a discussion of the major remaining open problems in the complexity of logic minimization. Index Terms—Computational complexity, logic design, logic minimization, two-level logic.
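As a concrete illustration of the two-level (SOP) minimization task this survey studies, the sketch below implements a tiny Quine-McCluskey-style procedure: merge minterms into prime implicants, then pick a cover greedily. The function name and (value, mask) encoding are illustrative, and the greedy cover step only approximates the exact minimum cover.

```python
from itertools import combinations

def minimize_sop(minterms, n):
    """Tiny Quine-McCluskey-style sketch of two-level (SOP) minimization.

    `minterms` lists the input indices where the function is 1; `n` is the
    number of variables. An implicant is a (value, mask) pair: bit positions
    set in `mask` are don't-cares, and don't-care bits of `value` are zeroed.
    Returns prime implicants chosen greedily to cover all minterms.
    """
    # Start with one implicant per minterm; merge pairs that share a mask
    # and differ in exactly one bit, until no merges remain.
    implicants = {(m, 0) for m in minterms}
    primes = set()
    while implicants:
        merged, used = set(), set()
        for a, b in combinations(sorted(implicants), 2):
            if a[1] == b[1] and bin(a[0] ^ b[0]).count("1") == 1:
                diff = a[0] ^ b[0]
                merged.add((a[0] & ~diff, a[1] | diff))
                used.update({a, b})
        primes |= implicants - used   # implicants that never merged are prime
        implicants = merged
    # Greedy cover: repeatedly take the prime covering most uncovered minterms.
    def covers(imp, m):
        return (m & ~imp[1]) == (imp[0] & ~imp[1])
    uncovered, chosen = set(minterms), []
    while uncovered:
        best = max(primes, key=lambda p: sum(covers(p, m) for m in uncovered))
        chosen.append(best)
        uncovered -= {m for m in uncovered if covers(best, m)}
    return chosen

# f(a, b) = a over 2 variables (minterms 2 and 3) collapses to one term:
print(minimize_sop([2, 3], 2))  # → [(2, 1)], i.e. "a" with b a don't-care
```

Exact minimization replaces the greedy step with an exact (set-cover) search; that gap between greedy and exact cover is precisely where the complexity questions the survey discusses arise.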
The complexity of Boolean formula minimization
 Journal of Computer and System Sciences
"... The Minimum Equivalent Expression problem is a natural optimization problem in the second level of the PolynomialTime Hierarchy. It has long been conjectured to be Σ P 2complete and indeed appears as an open problem in Garey and Johnson [GJ79]. The depth2 variant was only shown to be Σ P 2comple ..."
Abstract

Cited by 12 (3 self)
The Minimum Equivalent Expression problem is a natural optimization problem in the second level of the Polynomial-Time Hierarchy. It has long been conjectured to be Σ_2^p-complete and indeed appears as an open problem in Garey and Johnson [GJ79]. The depth-2 variant was only shown to be Σ_2^p-complete in 1998 [Uma98, Uma01], and even resolving the complexity of the depth-3 version has been mentioned as a challenging open problem. We prove that the depth-k version is Σ_2^p-complete under Turing reductions for all k ≥ 3. We also settle the complexity of the original, unbounded-depth Minimum Equivalent Expression problem, by showing that it too is Σ_2^p-complete under Turing reductions. Supported by NSF CCF-0830787, and BSF 2004329.
Limits on the Computational Power of Random Strings
"... Let C(x) andK(x) denote plain and prefix Kolmogorov complexity, respectively, and let RC and RK denote the sets of strings that are “random ” according to these measures; both RK and RC are undecidable. Earlier work has shown that every set in NEXP is in NP relative to both RK and RC, and that every ..."
Abstract

Cited by 11 (5 self)
Let C(x) and K(x) denote plain and prefix Kolmogorov complexity, respectively, and let R_C and R_K denote the sets of strings that are “random” according to these measures; both R_K and R_C are undecidable. Earlier work has shown that every set in NEXP is in NP relative to both R_K and R_C, and that every set in BPP is polynomial-time truth-table reducible to both R_K and R_C [ABK06a, BFKL10]. (All of these inclusions hold, no matter which “universal” Turing machine one uses in the definitions of C(x) and K(x).) Since each machine U gives rise to a slightly different measure C_U or K_U, these inclusions can be stated as: • BPP ⊆ DEC ∩ ⋂_U ...
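Since C(x) and K(x) are uncomputable, any hands-on experiment has to substitute a computable proxy. A common, admittedly very crude, stand-in is the length of a compressed encoding, sketched below; zlib is an arbitrary choice here and only yields an upper-bound-flavored estimate, not C(x) itself.

```python
import os
import zlib

def compressed_length(x: bytes) -> int:
    """Crude, computable stand-in for plain Kolmogorov complexity C(x):
    the length of a zlib-compressed encoding of x. C(x) itself is
    uncomputable, so this is only an upper-bound-flavored proxy."""
    return len(zlib.compress(x, 9))

# A highly patterned string compresses far below its length, while a
# random-looking string of the same length barely compresses at all --
# the same patterned-vs-random split that the sets R_C and R_K formalize.
patterned = b"ab" * 500
randomish = os.urandom(1000)
print(compressed_length(patterned), compressed_length(randomish))
```

Here the patterned string compresses to a few dozen bytes while the random one stays near 1000, illustrating (informally) why random strings look computationally "hard to describe."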
Minimizing DNF Formulas and AC^0 Circuits Given a Truth Table
 In Proceedings of the 21st Annual IEEE Conference on Computational Complexity
, 2006
"... For circuit classes R, the fundamental computational problem MinR asks for the minimum Rsize of a Boolean function presented as a truth table. Prominent examples of this problem include MinDNF, which asks whether a given Boolean function presented as a truth table has a kterm DNF, and MinCircu ..."
Abstract

Cited by 10 (0 self)
For circuit classes R, the fundamental computational problem Min-R asks for the minimum R-size of a Boolean function presented as a truth table. Prominent examples of this problem include Min-DNF, which asks whether a given Boolean function presented as a truth table has a k-term DNF, and Min-Circuit (also called MCSP), which asks whether a Boolean function presented as a truth table has a size-k Boolean circuit. We present a new reduction proving that Min-DNF is NP-complete. It is significantly simpler than the known reduction of Masek [30], which is from Circuit-SAT. We then give a more complex reduction, yielding the result that Min-DNF cannot be approximated to within a factor smaller than (log N)^γ, for some constant γ > 0, assuming that NP is not contained in quasi-polynomial time. The standard greedy algorithm for Set Cover is often used in practice to approximate Min-DNF. The question of whether Min-DNF can be approximated to within a factor of o(log N) remains open, but we construct an instance of Min-DNF on which the solution produced by the greedy algorithm is Ω(log N) larger than optimal. Finally, we turn to the question of approximating circuit size for slightly more general classes of circuits. DNF formulas are depth-two circuits of AND and OR gates. Depth-d circuits are denoted by AC^0_d. We show that it is hard to approximate the size of AC^0_d circuits (for large enough d) under cryptographic assumptions.
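The greedy Set Cover heuristic mentioned in the abstract can be sketched as follows. In the Min-DNF setting, the universe is the set of true points of the truth table and each candidate set is the set of points covered by one term (implicant); the function name and interface here are illustrative.

```python
def greedy_set_cover(universe, sets):
    """Standard greedy heuristic for Set Cover: repeatedly take the set
    that covers the most still-uncovered elements. It achieves roughly an
    H(N) ≈ ln N approximation factor -- the (log N)-type gap discussed in
    the abstract. Returns the indices of the chosen sets."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("the given sets do not cover the universe")
        cover.append(best)
        uncovered -= sets[best]
    return cover

# Toy instance: greedy picks {1,2,3} first, then {4,5}.
print(greedy_set_cover(range(1, 6), [{1, 2, 3}, {3, 4}, {4, 5}, {5}]))  # → [0, 2]
```

The hard instance the paper constructs is one where these greedy choices are Ω(log N) worse than the optimal cover, showing the ln N analysis is tight for Min-DNF in practice-relevant form.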