Results 1-10 of 10
In Search of an Easy Witness: Exponential Time vs. Probabilistic Polynomial Time
Abstract

Cited by 55 (5 self)
Restricting the search space {0,1}^n to the set of truth tables of "easy" Boolean functions on log n variables, as well as using some known hardness-randomness tradeoffs, we establish a number of results relating the complexity of exponential-time and probabilistic polynomial-time complexity classes. In particular, we show that NEXP ⊆ P/poly ⇔ NEXP = MA; this can be interpreted as saying that no derandomization of MA (and, hence, of promise-BPP) is possible unless NEXP contains a hard Boolean function. We also prove several downward closure results for ZPP, RP, BPP, and MA; e.g., we show EXP = BPP ⇔ EE = BPE, where EE is the double-exponential time class and BPE is the exponential-time analogue of BPP.
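The identification used in the abstract above can be made concrete: a Boolean function on k = log n variables is identified with its length-n truth table, so a search over n-bit strings can be restricted to the (much smaller) set of tables of "easy" functions. A minimal sketch, with the example function and variable count chosen purely for illustration:

```python
from itertools import product

def truth_table(f, k):
    """Return the 2^k-bit truth table of a Boolean function f on k variables."""
    return ''.join(str(int(f(bits))) for bits in product((0, 1), repeat=k))

# Example: parity on k = 3 variables yields an n = 8 bit string,
# i.e. a single point of the search space {0,1}^n.
parity = lambda bits: sum(bits) % 2
table = truth_table(parity, 3)  # '01101001'
```

There are 2^n strings of length n but far fewer short descriptions of k-variable functions, which is exactly what makes restricting the search to "easy" truth tables useful.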
When Worlds Collide: Derandomization, Lower Bounds, and Kolmogorov Complexity
 of Reductions, in Proc. 29th ACM Symposium on Theory of Computing
, 1997
Abstract

Cited by 18 (5 self)
This paper has the following goals:
• To survey some of the recent developments in the field of derandomization.
• To introduce a new notion of time-bounded Kolmogorov complexity (KT), and show that it provides a useful tool for understanding advances in derandomization, and for putting various results in context.
• To illustrate the usefulness of KT, by answering a question that has been posed in the literature, and
• To pose some promising directions for future research.
Hardness as randomness: A survey of universal derandomization
 in Proceedings of the International Congress of Mathematicians
, 2002
Abstract

Cited by 11 (5 self)
We survey recent developments in the study of probabilistic complexity classes. While the evidence seems to support the conjecture that probabilism can be deterministically simulated with relatively low overhead, i.e., that P = BPP, it also indicates that this may be a difficult question to resolve. In fact, proving that probabilistic algorithms have nontrivial deterministic simulations is basically equivalent to proving circuit lower bounds, either in the algebraic or Boolean models.
The Pervasive Reach of Resource-Bounded Kolmogorov Complexity in Computational Complexity Theory
Abstract

Cited by 6 (1 self)
We continue an investigation into resource-bounded Kolmogorov complexity [ABK+06], which highlights the close connections between circuit complexity and Levin's time-bounded Kolmogorov complexity measure Kt (and other measures with a similar flavor), and also exploits derandomization techniques to provide new insights regarding Kolmogorov complexity. The Kolmogorov measures that have been introduced have many advantages over other approaches to defining resource-bounded Kolmogorov complexity (such as much greater independence from the underlying choice of universal machine that is used to define the measure) [ABK+06]. Here, we study the properties of other measures that arise naturally in this framework. The motivation for introducing yet more notions of resource-bounded Kolmogorov complexity is twofold:
• to demonstrate that other complexity measures, such as branching-program size and formula size, can also be discussed in terms of Kolmogorov complexity, and
• to demonstrate that notions such as nondeterministic Kolmogorov complexity and distinguishing complexity [BFL02] also fit well into this framework.
The main theorems that we provide using this new approach to resource-bounded Kolmogorov complexity are:
• A complete set (RKNt) for NEXP/poly defined in terms of strings of high Kolmogorov complexity.
Relative to P, promise-BPP equals APP
Abstract

Cited by 1 (0 self)
We show that for deterministic polynomial-time computation, oracle access to APP, the class of real functions approximable by probabilistic Turing machines, is the same as oracle access to promise-BPP. First we construct a mapping that maps every function in APP to a promise problem in prBPP, and that maps complete functions to complete promise problems. Then we show an analogous result in the opposite direction, by constructing a mapping from prBPP into APP that maps every promise problem to a function in APP and maps complete promise problems to complete functions. Next we prove that P^APP = P^prBPP. Finally, we use our results to simplify proofs of important results on APP, such as the APP-completeness of the function fCAPP that approximates the acceptance probability of a Boolean circuit, the possibility (similarly to the case of BPP) of reducing the error probability for APP functions, and the conditional derandomization result APP = AP iff prBPP is easy.
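The error-reduction result mentioned for APP parallels the standard amplification argument for BPP: repeat a two-sided-error procedure independently and take a majority vote, which drives the error down exponentially by a Chernoff bound. A minimal sketch of the BPP-style step; the noisy decider, its error rate, and the trial count are illustrative assumptions, not part of the paper:

```python
import random

def amplify(decider, x, trials=101):
    """Majority vote over independent runs of a probabilistic decider."""
    votes = sum(decider(x) for _ in range(trials))
    return votes * 2 > trials

# Illustrative noisy decider: answers correctly with probability 0.75
# for the toy language of even numbers.
def noisy_decider(x, rng=random.Random(0)):
    correct = (x % 2 == 0)
    return correct if rng.random() < 0.75 else not correct

result = amplify(noisy_decider, 4)
```

With 101 trials at error 1/4 per run, the majority is wrong with probability well below 10^-6, so the amplified answer is almost surely the true one.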
The Computational Complexity Column
Abstract
We cannot in this article mention all of the amazing research in computational complexity theory. We survey various areas in complexity, choosing papers more for their historical value than necessarily the importance of the results. We hope that this gives an insight into the richness and depth of this still quite young field.
Comparing Notions of Full Derandomization
 In Proceedings of the Sixteenth Annual IEEE Conference on Computational Complexity
, 2001
Abstract
Most of the hypotheses of full derandomization fall into two sets of equivalent statements: those equivalent to the existence of efficient pseudorandom generators, and those equivalent to approximating the accepting probability of a circuit. We give the first relativized world where these sets of equivalent statements are not equivalent to each other.
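Approximating the accepting probability of a circuit (the problem often called CAPP) has a trivial randomized solution: sample uniform inputs and return the empirical acceptance rate; the derandomization question is whether this estimate can be computed deterministically. A minimal sketch, with the "circuit" represented as an ordinary Python predicate and the sample count chosen for illustration:

```python
import random

def capp_estimate(circuit, n_inputs, samples=10000, rng=random.Random(1)):
    """Estimate Pr_x[circuit(x) = 1] over uniform n_inputs-bit inputs."""
    hits = 0
    for _ in range(samples):
        x = [rng.randint(0, 1) for _ in range(n_inputs)]
        hits += int(circuit(x))
    return hits / samples

# Toy circuit: AND of two input bits; true acceptance probability is 1/4.
estimate = capp_estimate(lambda x: x[0] and x[1], 2)
```

By a standard concentration bound, 10000 samples put the estimate within a small additive error of 1/4 with overwhelming probability.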
Approximate counting in bounded arithmetic
, 2007
Abstract
We develop approximate counting of sets definable by Boolean circuits in bounded arithmetic using the dual weak pigeonhole principle dWPHP(PV), as a generalization of results from [15]. We discuss applications to the formalization of randomized complexity classes (such as BPP, APP, MA, AM) in PV1 + dWPHP(PV).
Improving Exhaustive Search Implies Superpolynomial Lower Bounds
, 2010
Abstract
The P vs NP problem arose from the question of whether exhaustive search is necessary for problems with short verifiable solutions. We do not know if even a slight algorithmic improvement over exhaustive search is universally possible for all NP problems, and to date no major consequences have been derived from the assumption that an improvement exists. We show that there are natural NP and BPP problems for which minor algorithmic improvements over the trivial deterministic simulation already entail lower bounds such as NEXP ⊈ P/poly and LOGSPACE ≠ NP. These results are especially interesting given that similar improvements have been found for many other hard problems. Optimistically, one might hope our results suggest a new path to lower bounds; pessimistically, they show that carrying out the seemingly modest program of finding slightly better algorithms for all search problems may be extremely difficult (if not impossible). We also prove unconditional superpolynomial time-space lower bounds for improving on exhaustive search: there is a problem verifiable with k(n)-length witnesses in O(n^a) time (for some a and some function k(n) ≤ n) that cannot be solved in k(n)^c · n^{a+o(1)} time and k(n)^c · n^{o(1)} space, for every c ≥ 1. While such problems can always be solved by exhaustive search in O(2^{k(n)} · n^a) time and O(k(n) + n^a) space, we can prove a superpolynomial lower bound in the parameter k(n) when space usage is restricted.
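The exhaustive-search baseline in the abstract above, enumerating all 2^{k(n)} candidate witnesses and running the verifier on each, can be sketched directly; the subset-sum-style verifier and the witness length here are illustrative choices, not taken from the paper:

```python
def exhaustive_search(verifier, x, k):
    """Try every k-bit witness; return one accepted by verifier(x, w), else None."""
    for w in range(2 ** k):
        witness = [(w >> i) & 1 for i in range(k)]  # bits of w, low-order first
        if verifier(x, witness):
            return witness
    return None

# Toy verifier: does the witness select a subset of the values summing to the target?
def subset_sum_verifier(instance, witness):
    values, target = instance
    return sum(v for v, b in zip(values, witness) if b) == target

found = exhaustive_search(subset_sum_verifier, ([3, 5, 2, 4], 7), 4)
```

The loop makes 2^k verifier calls and reuses O(k) space per candidate, matching the O(2^{k(n)} · n^a) time and O(k(n) + n^a) space baseline that the paper's lower bounds are measured against.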
The Computational Complexity of Randomness
, 2013
Abstract
This dissertation explores the multifaceted interplay between efficient computation and probability distributions. We organize the aspects of this interplay according to whether the randomness occurs primarily at the level of the problem or the level of the algorithm, and orthogonally according to whether the output is random or the input is random.

Part I concerns settings where the problem's output is random. A sampling problem associates to each input x a probability distribution D(x), and the goal is to output a sample from D(x) (or at least get statistically close) when given x. Although sampling algorithms are fundamental tools in statistical physics, combinatorial optimization, and cryptography, and algorithms for a wide variety of sampling problems have been discovered, there has been comparatively little research viewing sampling through the lens of computational complexity. We contribute to the understanding of the power and limitations of efficient sampling by proving a time hierarchy theorem which shows, roughly, that "a little more time gives a lot more power to sampling algorithms."

Part II concerns settings where the algorithm's output is random. Even when the specification of a computational problem involves no randomness, one can still consider randomized