Results 1-10 of 16
The quantitative structure of exponential time
 Complexity theory retrospective II
, 1997
"... ABSTRACT Recent results on the internal, measuretheoretic structure of the exponential time complexity classes E and EXP are surveyed. The measure structure of these classes is seen to interact in informative ways with biimmunity, complexity cores, polynomialtime reductions, completeness, circuit ..."
Abstract

Cited by 90 (13 self)
Recent results on the internal, measure-theoretic structure of the exponential time complexity classes E and EXP are surveyed. The measure structure of these classes is seen to interact in informative ways with bi-immunity, complexity cores, polynomial-time reductions, completeness, circuit-size complexity, Kolmogorov complexity, natural proofs, pseudorandom generators, the density of hard languages, randomized complexity, and lowness. Possible implications for the structure of NP are also discussed.
Pseudorandom Generators, Measure Theory, and Natural Proofs
, 1995
"... We prove that if strong pseudorandom number generators exist, then the class of languages that have polynomialsized circuits (P/poly) is not measurable within exponential time, in terms of the resourcebounded measure theory of Lutz. We prove our result by showing that if P/poly has measure zero in ..."
Abstract

Cited by 29 (4 self)
We prove that if strong pseudorandom number generators exist, then the class of languages that have polynomial-sized circuits (P/poly) is not measurable within exponential time, in terms of the resource-bounded measure theory of Lutz. We prove our result by showing that if P/poly has measure zero in exponential time, then there is a natural proof against P/poly, in the terminology of Razborov and Rudich [25]. We also provide a partial converse of this result.
An Excursion to the Kolmogorov Random Strings
 In Proceedings of the 10th IEEE Structure in Complexity Theory Conference
, 1995
"... We study the set of resource bounded Kolmogorov random strings: R t = fx j K t (x) jxjg for t a time constructible function such that t(n) 2 n 2 and t(n) 2 2 n O(1) . We show that the class of sets that Turing reduce to R t has measure 0 in EXP with respect to the resourcebounded measure ..."
Abstract

Cited by 17 (8 self)
We study the set of resource-bounded Kolmogorov random strings: R^t = {x : K^t(x) ≥ |x|}, for t a time-constructible function such that t(n) ≥ 2n² and t(n) ∈ 2^{n^{O(1)}}. We show that the class of sets that Turing reduce to R^t has measure 0 in EXP with respect to the resource-bounded measure introduced by [17]. From this we conclude that R^t is not Turing-complete for EXP. This contrasts with the resource-unbounded setting, where R is Turing-complete for co-RE. We show that the class of sets to which R^t bounded-truth-table reduces has p_2-measure 0 (therefore, measure 0 in EXP). This answers an open question of Lutz, giving a natural example of a language that is not weakly complete for EXP and that reduces to a measure 0 class in EXP. It follows that the sets that are ≤^p_btt-hard for EXP have p_2-measure 0. 1 Introduction. One of the main questions in complexity theory is the relation between complexity classes such as, for example, P, NP, and EXP. It is well known that ...
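The set R^t above is defined from time-bounded Kolmogorov complexity, which is not computable for a universal machine; its flavor can still be illustrated with a toy, fixed description language. The interpreter below is an invented example for illustration, not the paper's construction:

```python
# Toy illustration of (time-bounded) Kolmogorov complexity.
# We fix a tiny, non-universal "interpreter" U for binary programs:
#   '1' + w  -> outputs w literally
#   '0' + b  -> outputs '0' repeated int(b, 2) times
# A string is "random" in this toy sense when no program shorter
# than the string itself produces it, mirroring K^t(x) >= |x|.

from itertools import product

def U(p):
    """Run a toy program p and return its output (or None if invalid)."""
    if not p:
        return None
    if p[0] == '1':
        return p[1:]                      # literal encoding
    if p[0] == '0' and len(p) > 1:
        return '0' * int(p[1:], 2)        # run of zeros, length in binary
    return None

def K(x):
    """Length of the shortest toy program producing x (brute force)."""
    for n in range(1, len(x) + 2):        # literal gives K(x) <= len(x) + 1
        for bits in product('01', repeat=n):
            if U(''.join(bits)) == x:
                return n
    raise AssertionError("literal encoding should always succeed")

def is_random(x):
    """Toy analogue of membership in R^t: K(x) >= |x|."""
    return K(x) >= len(x)

print(K('0' * 16))         # shortest program is '0' + '10000', length 6
print(is_random('0' * 16)) # highly compressible, hence not random
print(is_random('10110'))  # only the literal program works, hence random
```

Under a real universal machine the literal-encoding upper bound K(x) ≤ |x| + O(1) survives, but the brute-force search only remains feasible because the time bound t caps each simulation, which is why R^t sits inside exponential time.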
Weakly Complete Problems are Not Rare
 COMPUTATIONAL COMPLEXITY
, 1995
"... Certain natural decision problems are known to be intractable because they are complete for E, the class of all problems decidable in exponential time. Lutz recently conjectured that many other seemingly intractable problems are not complete for E, but are intractable nonetheless because they are we ..."
Abstract

Cited by 7 (2 self)
Certain natural decision problems are known to be intractable because they are complete for E, the class of all problems decidable in exponential time. Lutz recently conjectured that many other seemingly intractable problems are not complete for E, but are intractable nonetheless because they are weakly complete for E. The main result of this paper shows that Lutz's intuition is at least partially correct: many more problems are weakly complete for E than are complete for E. Specifically, weakly complete problems are not rare, in the sense that they form a non-measure-0 subset of E. This extends a recent result of Lutz that establishes the existence of problems that are weakly complete, but not complete, for E. The proof of Lutz's original result employs a sophisticated martingale diagonalization argument. Here we simplify and extend Lutz's argument to prove the main result. This simplified martingale diagonalization argument may be applicable to other quest...
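The martingales behind resource-bounded measure are betting strategies on the characteristic sequence of a language: a strategy d succeeds on a language if its capital is unbounded along that sequence, and the only structural requirement is the fairness condition d(w) = (d(w0) + d(w1)) / 2. A minimal sketch (the biased strategy is an illustrative example, not the paper's diagonalization):

```python
# A martingale bets capital on successive bits of a language's
# characteristic sequence.  Fairness: d(w) = (d(w0) + d(w1)) / 2,
# i.e. the two conditional stakes average back to the current capital.

def martingale(w, bias=0.5):
    """Capital after betting fraction `bias` of a fair double on '1'
    at every step, starting with capital 1 on the empty string."""
    capital = 1.0
    for bit in w:
        capital *= 2 * (bias if bit == '1' else 1 - bias)
    return capital

d = lambda w: martingale(w, bias=0.75)

# Fairness check on a sample prefix.
w = '1101'
assert abs(d(w) - (d(w + '0') + d(w + '1')) / 2) < 1e-12

# The strategy succeeds (capital grows without bound) on sequences that
# are mostly 1s, e.g. the all-ones sequence:
print(d('1' * 10))   # (2 * 0.75)**10 = 1.5**10 ≈ 57.665
```

A class has measure 0 when a single resource-bounded martingale succeeds on every language in it; Lutz-style diagonalization constructs a language on which every candidate martingale fails, which is the shape of the argument the abstract refers to.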
NP-hard sets are superterse unless NP is small
 Information Processing Letters
, 1997
"... Introduction One of the important questions in computational complexity theory is whether every NP problem is solvable by polynomial time circuits, i.e., NP `?P=poly. Furthermore, it has been asked what the deterministic time complexity of NP is if NP ` P=poly. That is, if NP is easy in the nonunif ..."
Abstract

Cited by 5 (0 self)
Introduction. One of the important questions in computational complexity theory is whether every NP problem is solvable by polynomial-time circuits, i.e., NP ⊆? P/poly. Furthermore, it has been asked what the deterministic time complexity of NP is if NP ⊆ P/poly. That is, if NP is easy in the nonuniform complexity measure, how easy is NP in the uniform complexity measure? Let P_T(SPARSE) be the class of languages that are polynomial-time Turing reducible to some sparse set. Then it is well known that P_T(SPARSE) = P/poly. Hence the above question is equivalent to the following question: NP ⊆? P_T(SPARSE). It has been shown by Wilson [18] that ...
How to Privatize Random Bits
 In Proceedings 10th International Conference on Parallel and Distributed Computing Systems
, 1996
"... The paper investigates the extent to which a public source of random bits can be used to obtain private random bits that can be safely used in cryptographic protocols. We consider two cases: (a) the case in which the part privatizing random bits is computationally more powerful than the adversary, a ..."
Abstract

Cited by 4 (3 self)
The paper investigates the extent to which a public source of random bits can be used to obtain private random bits that can be safely used in cryptographic protocols. We consider two cases: (a) the case in which the party privatizing random bits is computationally more powerful than the adversary, and (b) the case in which the party privatizing random bits has a small number of private random bits. The first case corresponds to randomized hard functions and the second corresponds to randomized pseudorandom generators. We show the existence of strong randomized hard functions and pseudorandom generators. As a side effect, it is shown that relative to a random oracle P/poly is not measurable in EXP in the resource-bounded sense, and a very strong separation between sublinear time and AC^0 is obtained. Keywords: one-way function, pseudorandom generator, hard function. Supported in part by grants NSF-CCR-8957604, NSF-INT-9116781/JSPS-ENG-207, and NSF-CCR-9322513.
The size of SPP
 Theoretical Computer Science
"... Derandomization techniques are used to show that at least one of the following holds regarding the size of the counting complexity class SPP. 1. µp(SPP) = 0. 2. PH ⊆ SPP. In other words, SPP is small by being a negligible subset of exponential time or large by containing the entire polynomialtime ..."
Abstract

Cited by 4 (1 self)
Derandomization techniques are used to show that at least one of the following holds regarding the size of the counting complexity class SPP: 1. µp(SPP) = 0, or 2. PH ⊆ SPP. In other words, SPP is small by being a negligible subset of exponential time, or large by containing the entire polynomial-time hierarchy. This addresses an open problem about the complexity of the graph isomorphism problem: it is not weakly complete for exponential time unless PH is contained in SPP. It is also shown that the polynomial-time hierarchy is contained in SPP^NP if NP does not have p-measure 0.
One-Way Functions and Balanced NP
 Theoretical Computer Science
"... The existence of cryptographically secure oneway functions is related to the measure of a subclass of NP. This subclass, called fiNP ("balanced NP"), contains 3SAT and other standard NP problems. The hypothesis that fiNP is not a subset of P is equivalent to the P 6= NP conjecture. A stronger hypo ..."
Abstract

Cited by 2 (1 self)
The existence of cryptographically secure one-way functions is related to the measure of a subclass of NP. This subclass, called βNP ("balanced NP"), contains 3-SAT and other standard NP problems. The hypothesis that βNP is not a subset of P is equivalent to the P ≠ NP conjecture. A stronger hypothesis, that βNP is not a measure 0 subset of E_2 = DTIME(2^polynomial), is shown to have the following two consequences. 1. For every k, there is a polynomial-time computable, honest function f that is (2^{n^k}/n^k)-one-way with exponential security. (That is, no 2^{n^k}-time-bounded algorithm with n^k bits of nonuniform advice inverts f on more than an exponentially small set of inputs.) 2. If DTIME(2^n) "separates all BPP pairs," then there is a (polynomial-time computable) pseudorandom generator that passes all probabilistic polynomial-time statistical tests. (This result is a partial converse of the theorem of Yao, Boppana, and Hirschfeld that the existence of pseudorandom ge...
Average-Case Complexity Theory and Polynomial-Time Reductions
, 2001
"... This thesis studies averagecase complexity theory and polynomialtime reducibilities. The issues in averagecase complexity arise primarily from Cai and Selman's extension of Levin's denition of average polynomial time. We study polynomialtime reductions between distributional problems. Under stro ..."
Abstract

Cited by 2 (0 self)
This thesis studies average-case complexity theory and polynomial-time reducibilities. The issues in average-case complexity arise primarily from Cai and Selman's extension of Levin's definition of average polynomial time. We study polynomial-time reductions between distributional problems. Under strong but reasonable hypotheses we separate ordinary NP-completeness notions.