Results 1–10 of 11
Algebrization: A new barrier in complexity theory
 MIT Theory of Computing Colloquium
, 2007
Abstract

Cited by 30 (2 self)
Any proof of P ≠ NP will have to overcome two barriers: relativization and natural proofs. Yet over the last decade, we have seen circuit lower bounds (for example, that PP does not have linear-size circuits) that overcome both barriers simultaneously. So the question arises of whether there is a third barrier to progress on the central questions in complexity theory. In this paper we present such a barrier, which we call algebraic relativization or algebrization. The idea is that, when we relativize some complexity class inclusion, we should give the simulating machine access not only to an oracle A, but also to a low-degree extension of A over a finite field or ring. We systematically go through basic results and open problems in complexity theory to delineate the power of the new algebrization barrier. First, we show that all known non-relativizing results based on arithmetization—both inclusions such as IP = PSPACE and MIP = NEXP, and separations such as MAEXP ⊄ P/poly—do indeed algebrize. Second, we show that almost all of the major open problems—including P versus NP, P versus RP, and NEXP versus P/poly—will require non-algebrizing techniques. In some cases algebrization seems to explain exactly why progress stopped where it did: for example, why we have superlinear circuit lower bounds for PromiseMA but not for NP. Our second set of results follows from lower bounds in a new model of algebraic query complexity, which we introduce in this paper and which is interesting in its own right. Some of our lower bounds use direct combinatorial and algebraic arguments, while others stem from a surprising connection between our model and communication complexity. Using this connection, we are also able to give an MA protocol for the Inner Product function with O(√n log n) communication (essentially matching a lower bound of Klauck), as well as a communication complexity conjecture whose truth would imply NL ≠ NP.
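The "low-degree extension of A over a finite field" that the algebrized simulating machine may query can be made concrete. The sketch below (not from the paper; the function names and the toy AND oracle are illustrative) computes the unique multilinear polynomial over GF(p) that agrees with a Boolean oracle A on {0,1}^n, so it can then be evaluated at non-Boolean points:

```python
from itertools import product

def multilinear_extension(A, n, p):
    """Return Ahat: GF(p)^n -> GF(p), the multilinear polynomial
    extending the Boolean oracle A: {0,1}^n -> {0,1}."""
    def Ahat(x):
        total = 0
        for w in product((0, 1), repeat=n):
            term = A(w)
            for xi, wi in zip(x, w):
                # indicator polynomial: equals 1 iff xi == wi on Boolean inputs
                term = term * (xi * wi + (1 - xi) * (1 - wi)) % p
            total = (total + term) % p
        return total
    return Ahat

# Toy oracle: AND on 2 bits, extended over GF(7).
A = lambda w: int(all(w))
Ahat = multilinear_extension(A, 2, 7)
assert Ahat((1, 1)) == 1 and Ahat((0, 1)) == 0  # agrees on Boolean points
Ahat((3, 5))  # a non-Boolean query an algebrized machine is allowed to make
```

The brute-force sum over all 2^n Boolean points is only for illustration; the point of the model is that the machine gets oracle access to Ahat, not that Ahat is cheap to compute.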
Derandomizing Arthur-Merlin games and approximate counting implies exponential-size lower bounds
 In Proceedings of the IEEE Conference on Computational Complexity
, 2010
Abstract

Cited by 3 (0 self)
Abstract. We show that if Arthur-Merlin protocols can be derandomized, then there is a language computable in deterministic exponential time with access to an NP oracle that requires circuits of exponential size. More formally, if every promise problem in prAM, the class of promise problems that have Arthur-Merlin protocols, can be computed by a deterministic polynomial-time algorithm with access to an NP oracle, then there is a language in E^NP that requires circuits of size Ω(2^n/n). The lower bound in the conclusion of our theorem suffices to construct pseudorandom generators with exponential stretch. We also show that the same conclusion holds if the following two related problems can be computed in polynomial time with access to an NP oracle: (i) approximately counting the number of accepted inputs of a circuit, up to multiplicative factors; and (ii) recognizing an approximate lower bound on the number of accepted inputs of a circuit, up to multiplicative factors.
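Task (i) is easy with randomness, which is exactly why the paper asks what deterministic NP-oracle algorithms for it would imply. A minimal randomized sketch (plain Monte Carlo sampling, not the paper's setting; the majority "circuit" is a toy stand-in) of multiplicative approximate counting:

```python
import random

def approx_count(circuit, n, samples=20000, seed=0):
    """Estimate |{x in {0,1}^n : circuit(x) = 1}| by random sampling."""
    rng = random.Random(seed)
    hits = sum(circuit([rng.randint(0, 1) for _ in range(n)])
               for _ in range(samples))
    return hits / samples * 2 ** n

# Toy "circuit": majority on 5 bits accepts exactly 16 of the 32 inputs.
majority = lambda bits: sum(bits) > len(bits) // 2
estimate = approx_count(majority, 5)
assert 8 <= estimate <= 32  # within a multiplicative factor of 2 of the true count 16
```

The derandomization hypothesis in the abstract replaces this sampling with a deterministic polynomial-time algorithm that has an NP oracle.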
An Axiomatic Approach to Algebrization
Abstract

Cited by 1 (1 self)
Non-relativization of complexity issues can be interpreted as giving some evidence that these issues cannot be resolved by "black-box" techniques. In the early 1990's, a sequence of important non-relativizing results was proved, mainly using algebraic techniques. Two approaches have been proposed to understand the power and limitations of these algebraic techniques: (1) Fortnow [12] gives a construction of a class of oracles which have a similar algebraic and logical structure, although they are arbitrarily powerful. He shows that many of the non-relativizing results proved using algebraic techniques hold for all such oracles, but he does not show, e.g., that the outcome of the "P vs. NP" question differs between different oracles in that class. (2) Aaronson and Wigderson [1] give definitions of algebrizing separations and
Stronger Lower Bounds and Randomness-Hardness Trade-Offs Using Associated Algebraic Complexity Classes
Abstract

Cited by 1 (0 self)
We associate to each Boolean language complexity class C the algebraic class a·C consisting of families of polynomials {f_n} for which the evaluation problem over Z is in C. We prove the following lower bound and randomness-to-hardness results: 1. If polynomial identity testing (PIT) is in NSUBEXP then a·NEXP does not have poly-size constant-free arithmetic circuits. 2. a·NEXP^RP does not have poly-size constant-free arithmetic circuits. 3. For every fixed k, a·MA does not have arithmetic circuits of size n^k. Items 1 and 2 strengthen two results due to Kabanets and Impagliazzo [7]. The third item improves a lower bound due to Santhanam [11]. We consider the special case low-PIT of identity testing for (constant-free) arithmetic circuits with low formal degree, and give improved hardness-to-randomness trade-offs that apply to this case. Combining our results for both directions of the hardness-randomness connection, we demonstrate a case where derandomization of PIT and proving lower bounds are equivalent. Namely, we show that low-PIT ∈ i.o.-NTIME[2^{n^{o(1)}}]/n^{o(1)} if and only if there exists a family of multilinear polynomials in a·NE/lin that requires constant-free arithmetic circuits of superpolynomial size and formal degree.
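For context, the randomized algorithm for PIT that these results ask to derandomize is the classic Schwartz-Zippel test: a nonzero polynomial of degree d vanishes at a random point of a size-S set with probability at most d/S. A minimal black-box sketch (the function name and parameters are illustrative, not from the paper):

```python
import random

def pit(poly, nvars, degree, trials=20, seed=0):
    """Schwartz-Zippel identity test. poly is a black-box evaluator over
    the integers. Returns True if the polynomial appears identically zero."""
    rng = random.Random(seed)
    S = 100 * degree  # sample set much larger than the degree
    for _ in range(trials):
        point = [rng.randrange(S) for _ in range(nvars)]
        if poly(point) != 0:
            return False  # a nonzero evaluation witnesses nonzeroness
    return True

# (x - y)(x + y) - (x^2 - y^2) is identically zero; x^2 + 1 is not.
assert pit(lambda v: (v[0] - v[1]) * (v[0] + v[1]) - (v[0]**2 - v[1]**2), 2, 2)
assert not pit(lambda v: v[0]**2 + 1, 2, 2)
```

Derandomizing PIT means replacing the random points with a deterministically computable hitting set, which is what the abstract connects to arithmetic circuit lower bounds.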
FINDING IRREFUTABLE CERTIFICATES FOR S^p_2 . . .
, 2008
Abstract
We show that S^p_2 ⊆ P^prAM, where S^p_2 is the symmetric alternation class and prAM refers to the promise version of the Arthur-Merlin class AM. This is derived as a consequence of our main result that presents an FP^prAM algorithm for finding a small set of "collectively irrefutable certificates" of a given S_2-type matrix. The main result also yields some new consequences of the hypothesis that NP has polynomial-size circuits. It is known that the above hypothesis implies a collapse of the polynomial-time hierarchy (PH) to S^p_2 ⊆ ZPP^NP [5, 14]. Under the same hypothesis, we show that PH collapses to P^prMA. We also describe an FP^prMA algorithm for learning polynomial-size circuits for SAT, assuming such circuits exist. For the same problem, the previously best known result was a ZPP^NP algorithm [4].
Northwestern University
, 1012
Abstract
We define and study a new notion of "robust simulations" between complexity classes, which is intermediate between the traditional notions of infinitely-often and almost-everywhere, as well as a corresponding notion of "significant separations". A language L has a robust simulation in a complexity class C if there is a language in C which agrees with L on arbitrarily large polynomial stretches of input lengths. There is a significant separation of L from C if there is no robust simulation of L in C. The new notion of simulation is a cleaner and more natural notion of simulation than the infinitely-often notion. We show that various implications in complexity theory, such as the collapse of PH if NP = P and the Karp-Lipton theorem, have analogues for robust simulations. We then use these results to prove that most known separations in complexity theory, such as hierarchy theorems, fixed-polynomial circuit lower bounds, time-space trade-offs, and the recent theorem of Williams, can be strengthened to significant separations, though in each case an almost-everywhere separation is unknown. Proving our results requires several new ideas, including a completely different proof of the
Geometric Complexity Theory V: Equivalence between black-box derandomization of polynomial identity testing and derandomization of Noether's Normalization Lemma (Abstract) Dedicated to Sri Ramakrishna
Abstract
Abstract—It is shown that black-box derandomization of polynomial identity testing (PIT) is essentially equivalent to derandomization of Noether's Normalization Lemma for explicit algebraic varieties, the problem that lies at the heart of the foundational classification problem of algebraic geometry. Specifically: (1) It is shown that in characteristic zero, black-box derandomization of the symbolic trace identity testing (STIT) brings the problem of derandomizing Noether's Normalization Lemma for the ring of invariants of the adjoint action of the general linear group on a tuple of matrices from EXPSPACE (where it is currently) to P. Next it is shown that assuming the Generalized Riemann Hypothesis (GRH), instead of the black-box derandomization hypothesis, brings the problem from EXPSPACE to quasi-PH, instead of P. Thus black-box derandomization
, 2008
Abstract
This paper introduces quantum "multiple-Merlin"-Arthur proof systems in which Arthur receives multiple quantum proofs that are unentangled with each other. Although classical multi-proof systems are obviously equivalent to classical single-proof systems (i.e., usual Merlin-Arthur proof systems), it is unclear whether or not quantum multi-proof systems collapse to quantum single-proof systems (i.e., usual quantum Merlin-Arthur proof systems). This paper presents a necessary and sufficient condition under which the number of quantum proofs is reducible to two. It is also proved that, in the case of perfect soundness, using multiple quantum proofs
6.841: Advanced Complexity Theory Fall 2012
, 2012
Abstract
Last week we showed NTIME(n) ⊄ TISP(n^1.2, n^0.2). However, we see that this is a relatively weak conclusion. Can we prove P ≠ NP with similar techniques? We will show that such techniques alone cannot prove NP = P or NP ≠ P. How does a standard complexity proof work? Take some Turing machine M1 that recognizes a language L.... Now let M2 return the inverse of M1.... Now let M3 simulate M2 on part of the input and do something else with another part.... Now let M4 add a quantifier... And so on. These proofs use black boxes; for instance, with no knowledge of M1, if M2 uses M1 and eventually Mi cannot exist, then M1 could not have existed. Figure 1: Nested black-box Turing machines are like Matryoshka dolls. These sorts of techniques are extremely useful:
• Hierarchy theorems are proven this way
• Time-space lower bounds are proven this way
• Theorem 3.4: Ladner's Theorem [1], which says that if P ≠ NP, then there are many languages in NP \ P that are not NP-complete, is proven this way
• ... many other examples
These techniques will necessarily fail. We will identify one thing every such proof must
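The black-box pattern the notes describe is diagonalization: construct a machine that differs from every machine in an enumeration, using each M_i only as a black box. A toy finite sketch (assumption: "machines" are just 0/1-valued Python functions, standing in for an enumeration of Turing machines):

```python
def diagonalize(machines):
    """machines: a list of 0/1-valued functions on integer inputs.
    Returns D with D(i) != machines[i](i) for every index i, built
    purely by black-box calls to the machines."""
    def D(i):
        return 1 - machines[i](i)  # flip M_i's answer on input i
    return D

# Three toy "machines"; D disagrees with each M_i on input i.
Ms = [lambda x: 0, lambda x: x % 2, lambda x: 1]
D = diagonalize(Ms)
assert all(D(i) != Ms[i](i) for i in range(len(Ms)))
```

Because D only calls the machines as black boxes, the same construction works relative to any oracle, which is precisely why relativization limits what such proofs can show.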
Unions of Disjoint NP-Complete Sets
Abstract
Abstract. We study the following question: if A and B are disjoint NP-complete sets, then is A ∪ B NP-complete? We provide necessary and sufficient conditions under which the union of disjoint NP-complete sets remains complete.