Results 1 – 9 of 9
Algebrization: A new barrier in complexity theory
MIT Theory of Computing Colloquium, 2007
Abstract

Cited by 30 (2 self)
Any proof of P ≠ NP will have to overcome two barriers: relativization and natural proofs. Yet over the last decade, we have seen circuit lower bounds (for example, that PP does not have linear-size circuits) that overcome both barriers simultaneously. So the question arises of whether there is a third barrier to progress on the central questions in complexity theory. In this paper we present such a barrier, which we call algebraic relativization or algebrization. The idea is that, when we relativize some complexity class inclusion, we should give the simulating machine access not only to an oracle A, but also to a low-degree extension of A over a finite field or ring. We systematically go through basic results and open problems in complexity theory to delineate the power of the new algebrization barrier. First, we show that all known non-relativizing results based on arithmetization, both inclusions such as IP = PSPACE and MIP = NEXP, and separations such as MA_EXP ⊄ P/poly, do indeed algebrize. Second, we show that almost all of the major open problems, including P versus NP, P versus RP, and NEXP versus P/poly, will require non-algebrizing techniques. In some cases algebrization seems to explain exactly why progress stopped where it did: for example, why we have superlinear circuit lower bounds for PromiseMA but not for NP. Our second set of results follows from lower bounds in a new model of algebraic query complexity, which we introduce in this paper and which is interesting in its own right. Some of our lower bounds use direct combinatorial and algebraic arguments, while others stem from a surprising connection between our model and communication complexity. Using this connection, we are also able to give an MA protocol for the Inner Product function with O(√n log n) communication (essentially matching a lower bound of Klauck), as well as a communication complexity conjecture whose truth would imply NL ≠ NP.
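To make the low-degree-extension idea concrete, here is a minimal sketch (function names are mine, not the paper's): the unique multilinear extension of a Boolean oracle A over a prime field F_p, evaluated by summing Lagrange-style indicator terms over the Boolean cube. This brute-force evaluator takes 2^n time and is only meant to illustrate the object that an algebrizing simulation is given query access to, alongside A itself.

```python
from itertools import product

def multilinear_extension(A, n, p):
    """Return the multilinear polynomial over F_p that agrees with the
    Boolean function A on {0,1}^n, as an evaluator on points of F_p^n."""
    def A_tilde(x):
        total = 0
        for w in product((0, 1), repeat=n):
            # Indicator polynomial: equals 1 at x == w on the Boolean cube,
            # 0 at every other cube point; multilinear in each coordinate.
            term = A(w)
            for xi, wi in zip(x, w):
                term = term * (xi * wi + (1 - xi) * (1 - wi)) % p
            total = (total + term) % p
        return total
    return A_tilde

# Example: extend 2-bit AND over F_7; the extension agrees with AND on {0,1}^2.
AND = lambda w: w[0] & w[1]
ext = multilinear_extension(AND, 2, 7)
assert all(ext(w) == AND(w) for w in product((0, 1), repeat=2))
```

Off the cube the extension takes non-Boolean values, e.g. ext((2, 3)) = 2·3 mod 7 = 6, since the multilinear extension of AND is simply x1·x2.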
Amplifying lower bounds by means of self-reducibility
In IEEE Conference on Computational Complexity, 2008
Abstract

Cited by 13 (4 self)
We observe that many important computational problems in NC^1 share a simple self-reducibility property. We then show that, for any problem A having this self-reducibility property, A has polynomial-size TC^0 circuits if and only if it has TC^0 circuits of size n^(1+ε) for every ε > 0 (counting the number of wires in a circuit as the size of the circuit). As an example of what this observation yields, consider the Boolean Formula Evaluation problem (BFE), which is complete for NC^1 and has the self-reducibility property. It follows from a lower bound of Impagliazzo, Paturi, and Saks that BFE requires depth-d TC^0 circuits of size n^(1+ε_d). If one were able to improve this lower bound to show that there is some constant ε > 0 such that every TC^0 circuit family recognizing BFE has size n^(1+ε), then it would follow that TC^0 ≠ NC^1. We show that proving lower bounds of the form n^(1+ε) is not ruled out by the Natural Proof framework of Razborov and Rudich, and hence there is currently no known barrier for separating classes such as ACC^0, TC^0, and NC^1 via existing "natural" approaches to proving circuit lower bounds. We also show that problems with small uniform constant-depth circuits have algorithms that simultaneously have small space and time bounds. We then make use of known time-space tradeoff lower bounds to show that SAT requires uniform depth-d TC^0 and AC^0[6] circuits of size n^(1+c) for some constant c depending on d.
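As a concrete illustration of the BFE problem itself (a toy sketch of mine, not the paper's construction), here is a recursive evaluator for fully parenthesized Boolean formulas encoded as nested tuples. Evaluating the whole formula reduces to evaluating its subformulas, which is the flavor of self-reducibility the abstract refers to.

```python
def eval_formula(f):
    """Evaluate a Boolean formula given as a nested tuple:
    ('and', l, r), ('or', l, r), ('not', sub), or a 0/1 constant leaf."""
    if f in (0, 1):
        return f
    op = f[0]
    if op == 'not':
        return 1 - eval_formula(f[1])
    # Binary connective: evaluate both subformulas, then combine.
    a, b = eval_formula(f[1]), eval_formula(f[2])
    return a & b if op == 'and' else a | b

# (1 AND (NOT 0)) OR 0 evaluates to 1
assert eval_formula(('or', ('and', 1, ('not', 0)), 0)) == 1
```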
Time-Space Tradeoffs for Counting NP Solutions Modulo Integers
In Proceedings of the 22nd IEEE Conference on Computational Complexity, 2007
Abstract

Cited by 11 (5 self)
We prove the first time-space tradeoffs for counting the number of solutions to an NP problem modulo small integers, and also improve upon known time-space tradeoffs for Sat. Let m > 0 be an integer, and define MOD_m-Sat to be the problem of determining if a given Boolean formula has exactly km satisfying assignments, for some integer k. We show that for all primes p except for possibly one of them, and for all c < 2cos(π/7) ≈ 1.801, there is a d > 0 such that MOD_p-Sat is not solvable in n^c time and n^d space by general algorithms. That is, there is at most one prime p that does not satisfy the tradeoff. We prove that the same limitation holds for Sat and MOD_6-Sat, as well as MOD_m-Sat for any composite m that is not a prime power. Our main tool is a general method for rapidly simulating deterministic computations with restricted space, by counting the number of solutions to NP predicates modulo integers. The simulation converts an ordinary algorithm into a "canonical" one that consumes roughly the same amount of time and space, yet canonical algorithms have nice properties suitable for counting.
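A brute-force sketch may help pin down the MOD_m-Sat definition (the CNF encoding and helper names here are my own; the paper's lower bounds of course concern general algorithms, not this exponential-time check). A formula has km satisfying assignments for some integer k exactly when its model count is 0 mod m.

```python
from itertools import product

def count_sat(clauses, n):
    """Count satisfying assignments of a CNF over n variables by brute force.
    Each clause is a list of nonzero ints: +i means x_i, -i means NOT x_i."""
    count = 0
    for bits in product((False, True), repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            count += 1
    return count

def mod_m_sat(clauses, n, m):
    """MOD_m-Sat: is the number of satisfying assignments divisible by m?"""
    return count_sat(clauses, n) % m == 0

# (x1 OR x2) has 3 satisfying assignments out of 4.
cnf = [[1, 2]]
assert count_sat(cnf, 2) == 3
assert mod_m_sat(cnf, 2, 3)       # 3 is divisible by 3
assert not mod_m_sat(cnf, 2, 2)   # 3 is not divisible by 2
```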
An Improved Time-Space Lower Bound for Tautologies
Abstract

Cited by 3 (2 self)
We show that for all reals c and d such that c^2 d < 4, there exists a positive real e such that tautologies of length n cannot be decided by both a nondeterministic algorithm that runs in time n^c and a nondeterministic algorithm that runs in time n^d and space n^e. In particular, taking c = d, for every d < 4^(1/3) ≈ 1.587 there exists a positive e such that tautologies cannot be decided by a nondeterministic algorithm that runs in time n^d and space n^e.
Automated proofs of time lower bounds
2007
Abstract

Cited by 2 (1 self)
A fertile area of recent research has demonstrated concrete polynomial time lower bounds for solving natural hard problems on restricted computational models. Among these problems are Satisfiability, Vertex Cover, Hamilton Path, MOD_6-SAT, Majority-of-Majority-SAT, and Tautologies, to name a few. These lower bound proofs all follow a certain diagonalization-based proof-by-contradiction strategy. A pressing open problem has been to determine how powerful such proofs can possibly be. We propose an automated theorem-proving methodology for studying these lower bound problems. In particular, we prove that the search for better lower bounds can often be turned into a problem of solving a large series of linear programming instances. We describe an implementation of a small-scale theorem prover and discover surprising experimental results. In some settings, our program provides strong evidence that the best known lower bound proofs are already optimal for the current framework, contradicting the consensus intuition; in others, the program guides us to improved lower bounds where none had been known for years.
Non-Linear Time Lower Bound for (Succinct) Quantified Boolean Formulas
Abstract

Cited by 2 (2 self)
We give a reduction from arbitrary languages in alternating time t(n) to quantified Boolean formulas (QBF) describable in O(t(n)) bits. The reduction works for a reasonably succinct encoding of Boolean formulas and for several reasonable machine models, including multitape Turing machines and logarithmic-cost RAMs. By a simple diagonalization, it follows that our succinct QBF problem requires superlinear time on those models. To our knowledge, this is the first known instance of a non-linear time lower bound (with no space restriction) for solving a natural linear-space problem on a variety of computational models.
The Status of the P versus NP Problem
Abstract
doi:10.1145/1562164.1562186. It's one of the fundamental mathematical problems of our time, and its importance grows with the rise of powerful computers.
From RAM to SAT
2012
Abstract
Common presentations of the NP-completeness of SAT suffer from two drawbacks which hinder the scope of this flagship result. First, they do not apply to machines equipped with random-access memory, also known as direct-access memory, even though this feature is critical in basic algorithms. Second, they incur a quadratic blowup in parameters, even though the distinction between, say, linear and quadratic time is often as critical as the one between polynomial and exponential. But the landmark result of a sequence of works [HS66, Sch78, PF79, Coo88, GS89, Rob91] overcomes both these drawbacks simultaneously! The proof of this result is simplified by Van Melkebeek in [vM06, §2.3.1]. Compared to previous proofs, this proof more directly reduces random-access machines to SAT, bypassing sequential Turing machines, and using a simple, well-known sorting algorithm: Odd-Even Merge sort [Bat68]. In this work we give a self-contained rendering of this simpler proof. For context, we note that the impressive works [BSCGT12b, BSCGT12a] give the stronger type of reduction where a candidate satisfying assignment to the SAT instance can be verified
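For reference, here is a compact sketch of Batcher's Odd-Even Merge sort [Bat68], the sorting algorithm the proof uses (this rendering follows the standard recursive formulation and assumes the input length is a power of two). The point relevant to the reduction is that the sequence of compare-exchanges is oblivious, i.e. fixed by the input length and independent of the data, which makes the network straightforward to encode as a formula.

```python
def oddeven_merge_sort(a):
    """Batcher's Odd-Even Merge sort, in place; len(a) must be a power of two."""
    def compare_exchange(i, j):
        if a[i] > a[j]:
            a[i], a[j] = a[j], a[i]

    def merge(lo, hi, r):
        # Merge the two sorted halves of a[lo..hi] (inclusive) at stride r:
        # recursively merge the even- and odd-indexed subsequences, then
        # do one cleanup pass of adjacent compare-exchanges.
        step = r * 2
        if step < hi - lo:
            merge(lo, hi, step)
            merge(lo + r, hi, step)
            for i in range(lo + r, hi - r, step):
                compare_exchange(i, i + r)
        else:
            compare_exchange(lo, lo + r)

    def sort(lo, hi):
        # Sort a[lo..hi] inclusive: sort both halves, then merge them.
        if hi - lo >= 1:
            mid = lo + (hi - lo) // 2
            sort(lo, mid)
            sort(mid + 1, hi)
            merge(lo, hi, 1)

    sort(0, len(a) - 1)
    return a

assert oddeven_merge_sort([5, 7, 1, 8, 2, 6, 3, 4]) == [1, 2, 3, 4, 5, 6, 7, 8]
```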