Results 1 – 6 of 6
Limitations of Hardness vs. Randomness under Uniform Reductions
Electronic Colloquium on Computational Complexity, 2008
Abstract

Cited by 4 (0 self)
We consider (uniform) reductions from computing a function f to the task of distinguishing the output of some pseudorandom generator G from uniform. Impagliazzo and Wigderson [IW] and Trevisan and Vadhan [TV] exhibited such reductions for every function f in PSPACE. Moreover, their reductions are “black box,” showing how to use any distinguisher T, given as oracle, in order to compute f (regardless of the complexity of T). The reductions are also adaptive, but only mildly (queries of the same length do not occur in different levels of adaptivity). Impagliazzo and Wigderson [IW] also exhibited such reductions for every function f in EXP, but those reductions are not black-box, because they only work when the oracle T is computable by small circuits. Our main results are:
• Nonadaptive black-box reductions as above can only exist for functions f in BPP^NP (and thus are unlikely to exist for all of PSPACE).
• Mildly adaptive black-box reductions as above can only exist for functions f in PSPACE (and thus are unlikely to exist for all of EXP).
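The black-box, nonadaptive notion in this abstract can be pictured with a toy sketch (all names here are hypothetical illustrations, not from the paper): the reduction fixes its entire query set before seeing any oracle answer, and uses the distinguisher T only through its input/output behavior.

```python
# Toy sketch of a nonadaptive black-box reduction (hypothetical, not the
# paper's construction): all oracle queries are chosen up front, and the
# distinguisher T is consulted only as a black box.
def nonadaptive_reduction(x, T, make_queries, decode):
    queries = make_queries(x)          # query set fixed before any answers
    answers = [T(q) for q in queries]  # T used purely as an oracle
    return decode(x, answers)          # recover f(x) from the answers

# Toy instance: T reveals bits of a hidden string; the reduction
# recovers the parity of that string nonadaptively.
hidden = [1, 0, 1, 1]
parity = nonadaptive_reduction(
    None,
    lambda q: hidden[q],
    lambda x: list(range(len(hidden))),
    lambda x, answers: sum(answers) % 2,
)
```

An adaptive reduction would instead let each query depend on earlier answers; the mild adaptivity of [IW] and [TV] restricts how such dependencies may interleave across query lengths.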
Lower bounds on the query complexity of nonuniform and adaptive reductions showing hardness amplification
, 2012
Abstract

Cited by 2 (0 self)
Hardness amplification results show that for every Boolean function f there exists a Boolean function Amp(f) such that the following holds: if every circuit of size s computes f correctly on at most a 1 − δ fraction of inputs, then every circuit of size s′ computes Amp(f) correctly on at most a 1/2 + ϵ fraction of inputs. All hardness amplification results in the literature suffer from “size loss,” meaning that s′ ≤ ϵ · s. In this paper we show that proofs using “nonuniform reductions” must suffer from such size loss. To the best of our knowledge, all proofs in the literature are by nonuniform reductions. Our result is the first lower bound that applies to nonuniform reductions that are adaptive. A reduction is an oracle circuit R^(·) such that when given oracle access to any function D that computes Amp(f) correctly on a 1/2 + ϵ fraction of inputs, R^D computes f correctly on a 1 − δ fraction of inputs. A nonuniform reduction is allowed to also receive a short advice string that may depend on both f and D in an arbitrary way. The well-known connection between hardness amplification and list-decodable error-correcting codes implies that reductions showing hardness amplification cannot be uniform for δ, ϵ < 1/4. A reduction is nonadaptive if it makes nonadaptive queries to its oracle. Shaltiel and Viola (SICOMP 2010) showed lower bounds on the number of queries made by nonuniform …
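One classical amplifier fitting the Amp(f) template above is the XOR construction of Yao's XOR lemma, Amp(f)(x₁, …, x_k) = f(x₁) ⊕ … ⊕ f(x_k), which pushes mild hardness (1 − δ) toward extreme hardness (1/2 + ϵ). A minimal sketch (illustrative code, not the paper's construction):

```python
from functools import reduce
from operator import xor

def xor_amplify(f, k):
    """XOR-lemma-style amplifier (illustrative): Amp(f) evaluates f on a
    k-tuple of independent inputs and outputs the parity of the results."""
    def amp(xs):
        assert len(xs) == k, "Amp(f) takes exactly k inputs"
        return reduce(xor, (f(x) for x in xs))
    return amp

# Toy example: amplifying the low-order-bit function on 3-tuples.
f = lambda x: x % 2
amp = xor_amplify(f, 3)
```

The paper's lower bound concerns the reductions that *prove* such constructions work: any nonuniform reduction establishing this hardness transfer must pay the ϵ factor in circuit size.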
Relativized Worlds Without Worst-Case to Average-Case Reductions for NP
, 2010
Abstract

Cited by 2 (1 self)
We prove that relative to an oracle, there is no worst-case to average-case reduction for NP. We also handle classes that are somewhat larger than NP, as well as worst-case to errorless-average-case reductions. In fact, we prove that relative to an oracle, there is no worst-case to errorless-average-case reduction from NP to BPP^NP. We also handle reductions from NP to the polynomial-time hierarchy and beyond, under restrictions on the number of queries the reductions can make.
Hard instances for satisfiability and quasi-one-way functions
, 2009
Abstract
We give an efficient algorithm that takes as input any (probabilistic) polynomial time algorithm A which purports to solve SAT and finds, for infinitely many input lengths, SAT formulas φ and witnesses w such that A claims φ is unsatisfiable, but w is a satisfying assignment for φ (assuming NP ⊈ RP). This solves an open problem posed in the work of Gutfreund, Shaltiel, and Ta-Shma (CCC 2005). Following Gutfreund et al., we also extend this to give an efficient sampling algorithm (a “quasi-hard” sampler) which generates hard instance/witness pairs for all algorithms running in some fixed polynomial time. We ask how our sampling algorithm relates to various cryptographic notions. We show that our sampling algorithm gives a simple construction of quasi-one-way functions, a weakened notion of standard one-way functions. We also investigate the possibility of obtaining pseudorandom generators from our quasi-one-way functions and show that a large class of reductions that work in the standard setting must fail.
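The refuter's guarantee can be pictured with a toy, exponential-time sketch (purely illustrative; the paper's algorithm is efficient and far more subtle): given an alleged SAT algorithm A, search for a formula φ that A declares unsatisfiable together with a witness w proving A wrong.

```python
from itertools import product

def brute_force_sat(clauses, n):
    """Exhaustive SAT check for a CNF over n variables; literals are
    nonzero ints, positive meaning the variable is set True."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return bits
    return None

def find_counterexample(A, formulas, n):
    """Toy refuter (illustrative only): return (phi, w) where the alleged
    solver A claims phi is unsatisfiable yet w satisfies phi."""
    for phi in formulas:
        w = brute_force_sat(phi, n)
        if w is not None and A(phi) is False:  # A errs on a satisfiable phi
            return phi, w
    return None

# Toy run: A wrongly claims every formula is unsatisfiable, and the
# refuter catches it on the trivially satisfiable formula (x1).
result = find_counterexample(lambda phi: False, [[[1]]], 1)
```

The paper's contribution is doing this in polynomial time against any purported polynomial-time solver, for infinitely many input lengths, under NP ⊈ RP.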
from a P-Samplable Distribution to the Uniform Distribution for NP-Search Problems
, 2008
Abstract
Impagliazzo and Levin demonstrated [IL90] that the average-case hardness of any NP-search problem under any P-samplable distribution implies that of another NP-search problem under the uniform distribution. For this they developed a way to define a reduction from an NP-search problem F with “mild hardness” under any P-samplable distribution H; more specifically, F is a problem with positive hard instances with probability 1/poly(n) under H. In this paper we show a similar reduction for an NP-search problem F with “strong hardness”, that is, F with positive hard instances with probability 1 − 1/poly(n) under H in its positive domain (i.e., the set of positive instances). Our reduction defines, from this pair of F and H, some NP-search problem G with a similar hardness under the uniform distribution U; more precisely, (i) G has positive hard instances with probability 1 − 1/poly(n) under U in its positive domain, and (ii) the positive domain itself occupies a 1/poly(n) fraction of {0,1}^n.
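A P-samplable distribution, the starting point of the reduction described above, is simply the output distribution of a polynomial-time sampler fed uniform random bits. A minimal sketch (hypothetical toy distribution, not from the paper):

```python
import random

def uniform_bits(n):
    """The uniform distribution U over {0,1}^n."""
    return [random.randrange(2) for _ in range(n)]

def sampler(n):
    """Toy P-samplable distribution H: draw n uniform bits, then force the
    first bit to 1, so H is supported on half of {0,1}^n and is far from U."""
    x = uniform_bits(n)
    x[0] = 1
    return x

sample = sampler(8)
```

The reduction in the paper converts hardness of F under such an H into hardness of a new problem G under U itself, at the cost that G's positive domain may shrink to a 1/poly(n) fraction of all strings.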
Strong Hardness Preserving Reduction from a P-Samplable Distribution to the Uniform Distribution for NP-Search Problems
Abstract
The theory of average-case complexity has been studied extensively since the 1970s, and from the late 1980s to the early 1990s several fundamental results were shown (see the excellent survey [BT06b] for background). Among such results, the average-case NP-completeness theorem shown by Impagliazzo and Levin [IL90] is one of the seminal results in average-case com…