Results 1–10 of 47
Nonuniform ACC circuit lower bounds
, 2010
"... The class ACC consists of circuit families with constant depth over unbounded fanin AND, OR, NOT, and MODm gates, where m> 1 is an arbitrary constant. We prove: • NTIME[2 n] does not have nonuniform ACC circuits of polynomial size. The size lower bound can be slightly strengthened to quasipoly ..."
Cited by 46 (7 self)

Abstract
The class ACC consists of circuit families with constant depth over unbounded fan-in AND, OR, NOT, and MOD_m gates, where m > 1 is an arbitrary constant. We prove:
• NTIME[2^n] does not have nonuniform ACC circuits of polynomial size. The size lower bound can be slightly strengthened to quasipolynomials and other less natural functions.
• E^NP, the class of languages recognized in 2^O(n) time with an NP oracle, does not have nonuniform ACC circuits of 2^(n^o(1)) size. The lower bound gives an exponential size-depth tradeoff: for every d there is a δ > 0 such that E^NP does not have depth-d ACC circuits of size 2^(n^δ).
Previously, it was not known whether EXP^NP had depth-3 polynomial-size circuits made out of only MOD_6 gates. The high-level strategy is to design faster algorithms for the circuit satisfiability problem over ACC circuits, then prove that such algorithms entail the above lower bounds. The algorithm combines known properties of ACC with fast rectangular matrix multiplication and dynamic programming, while the second step requires a subtle strengthening of the author’s prior work [STOC’10].
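In symbols, the results above read as follows (a hedged restatement in standard complexity notation; the shorthand ACC-SIZE_d(·) for languages with depth-d ACC circuits of the given size is introduced here, not taken from the paper):

\[
  \mathrm{NTIME}[2^n] \not\subseteq \mathrm{ACC}, \qquad
  \mathrm{E}^{\mathrm{NP}} \not\subseteq \mathrm{ACC\text{-}SIZE}\big(2^{n^{o(1)}}\big),
\]
\[
  \forall d \;\exists \delta > 0 :\quad
  \mathrm{E}^{\mathrm{NP}} \not\subseteq \mathrm{ACC\text{-}SIZE}_d\big(2^{n^{\delta}}\big).
\]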
On the possibility of faster SAT algorithms
"... We describe reductions from the problem of determining the satisfiability of Boolean CNF formulas (CNFSAT) to several natural algorithmic problems. We show that attaining any of the following bounds would improve the state of the art in algorithms for SAT: • an O(n k−ε) algorithm for kDominating S ..."
Cited by 37 (3 self)

Abstract
We describe reductions from the problem of determining the satisfiability of Boolean CNF formulas (CNF-SAT) to several natural algorithmic problems. We show that attaining any of the following bounds would improve the state of the art in algorithms for SAT:
• an O(n^(k−ε)) algorithm for k-Dominating Set, for any k ≥ 3,
• a (computationally efficient) protocol for 3-party set disjointness with o(m) bits of communication,
• an n^o(d) algorithm for d-SUM,
• an O(n^(2−ε)) algorithm for 2-SAT with m = n^(1+o(1)) clauses, where two clauses may have unrestricted length, and
• an O((n + m)^(k−ε)) algorithm for Horn-SAT with k unrestricted-length clauses.
One may interpret our reductions as new attacks on the complexity of SAT, or sharp lower bounds conditional on exponential hardness of SAT.
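Read contrapositively, each bullet is a conditional lower bound. For instance, under the standard exponential-hardness hypothesis for CNF-SAT (the SETH-style formulation below is supplied here as an assumption, not quoted from the paper), the first item becomes:

\[
  \mathrm{CNF\text{-}SAT} \notin \bigcup_{\delta > 0} \mathrm{TIME}\big(2^{(1-\delta)n} \cdot \mathrm{poly}(m)\big)
  \;\Longrightarrow\;
  k\text{-Dominating Set} \notin O(n^{k-\varepsilon}) \ \text{for all } \varepsilon > 0,\ k \ge 3.
\]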
Annotations in Data Streams
, 2009
"... The central goal of data stream algorithms is to process massive streams of data using sublinear storage space. Motivated by work in the database community on outsourcing database and data stream processing, we ask whether the space usage of such algorithms be further reduced by enlisting a more pow ..."
Cited by 20 (8 self)

Abstract
The central goal of data stream algorithms is to process massive streams of data using sublinear storage space. Motivated by work in the database community on outsourcing database and data stream processing, we ask whether the space usage of such algorithms can be further reduced by enlisting a more powerful “helper” who can annotate the stream as it is read. We do not wish to blindly trust the helper, so we require that the algorithm be convinced of having computed a correct answer. We show upper bounds that achieve a non-trivial tradeoff between the amount of annotation used and the space required to verify it. We also prove lower bounds on such tradeoffs, often nearly matching the upper bounds, via notions related to Merlin-Arthur communication complexity. Our results cover the classic data stream problems of selection, frequency moments, and fundamental graph problems such as triangle-freeness and connectivity. Our work is also part of a growing trend — including recent studies of multi-pass streaming, read/write streams and randomly ordered streams — of asking more complexity-theoretic questions about data stream processing. It is a recognition that, in addition to practical relevance, the data stream model raises many interesting theoretical questions in its own right.
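The shape of these tradeoffs can be stated compactly. Writing h for the annotation length and v for the verification space (symbols introduced here for illustration, and hiding polylogarithmic factors), the schemes and bounds for problems such as frequency moments take the form

\[
  h \cdot v \;\ge\; n \quad \text{(achievable upper bounds)}, \qquad
  h \cdot v \;=\; \Omega(n) \quad \text{(lower bounds via MA communication)}.
\]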
Geometric Complexity Theory V: Equivalence between black-box derandomization of polynomial identity testing and derandomization of Noether’s Normalization Lemma
"... It is shown that blackbox derandomization of polynomial identity testing (PIT) is essentially equivalent to derandomization of Noether’s Normalization Lemma for explicit algebraic varieties, the problem that lies at the heart of the foundational classification problem of algebraic geometry. Specif ..."
Cited by 10 (1 self)

Abstract
It is shown that black-box derandomization of polynomial identity testing (PIT) is essentially equivalent to derandomization of Noether’s Normalization Lemma for explicit algebraic varieties, the problem that lies at the heart of the foundational classification problem of algebraic geometry. Specifically: (1) It is shown that in characteristic zero, black-box derandomization of the symbolic trace identity testing (STIT) brings the problem of derandomizing Noether’s Normalization Lemma for the ring of invariants of the adjoint action of the general linear group on a tuple of matrices from EXPSPACE (where it is currently) to P. Next it is shown that assuming the Generalized Riemann Hypothesis (GRH), instead of the black-box derandomization hypothesis, brings the problem from EXPSPACE to quasi-PH, instead of P. Thus black-box derandomization …
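Schematically (notation introduced here; this is only a symbolic restatement of the abstract, not the paper’s formalism):

\[
  \text{black-box derandomization of PIT/STIT}
  \;\Longleftrightarrow\;
  \text{derandomization of NNL for explicit varieties},
\]

with the black-box hypothesis moving the NNL instance above from EXPSPACE to P, and GRH alone moving it only to quasi-PH.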
On the Power of Small-Depth Computation
, 2009
"... In this work we discuss selected topics on smalldepth computation, presenting a few unpublished proofs along the way. The four chapters contain: 1. A unified treatment of the challenge of exhibiting explicit functions that have small correlation with lowdegree polynomials over {0, 1}. 2. An unpubl ..."
Cited by 9 (5 self)

Abstract
In this work we discuss selected topics on small-depth computation, presenting a few unpublished proofs along the way. The four chapters contain:
1. A unified treatment of the challenge of exhibiting explicit functions that have small correlation with low-degree polynomials over {0, 1}.
2. An unpublished proof that small bounded-depth circuits (AC^0) have exponentially small correlation with the parity function. The proof is due to Klivans and Vadhan; it builds upon and simplifies previous ones.
3. Valiant’s simulation of log-depth linear-size circuits of fan-in 2 by subexponential-size circuits of depth 3 and unbounded fan-in. To our knowledge, a proof of this result has never appeared in full.
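For orientation, the correlation bound in item 2 has the standard quantitative form (constants hedged; the statement below is supplied from the literature, not quoted from the text): for AC^0 circuits C of size s and depth d,

\[
  \Big|\Pr_x[C(x) = \mathrm{PARITY}_n(x)] - \tfrac12\Big| \;\le\; 2^{-\Omega\!\left(n/(\log s)^{d-1}\right)},
\]

and Valiant’s simulation in item 3 produces depth-3 unbounded fan-in circuits of size 2^{O(n/\log\log n)}.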
Why philosophers should care about computational complexity
 In Computability: Gödel, Turing, Church, and beyond (eds
, 2012
"... One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay, I offer a detailed casethat onewouldbe wrong. In particular, I arguethat computational complexity theory—the field that ..."
Cited by 9 (0 self)

Abstract
One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay, I offer a detailed case that one would be wrong. In particular, I argue that computational complexity theory—the field that studies the resources (such as time, space, and randomness) needed to solve computational problems—leads to new perspectives on the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume’s problem of induction, Goodman’s grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest. I end by discussing …
Arthur-Merlin Streaming Complexity
, 2013
"... We study the power of ArthurMerlin probabilistic proof systems in the data stream model. We show a canonical AM streaming algorithm for a wide class of data stream problems. The algorithm offers a tradeoff between the length of the proof and the space complexity that is needed to verify it. As an a ..."
Cited by 9 (1 self)

Abstract
We study the power of Arthur-Merlin probabilistic proof systems in the data stream model. We show a canonical AM streaming algorithm for a wide class of data stream problems. The algorithm offers a tradeoff between the length of the proof and the space complexity that is needed to verify it. As an application, we give an AM streaming algorithm for the Distinct Elements problem. Given a data stream of length m over an alphabet of size n, the algorithm uses Õ(s) space and a proof of size Õ(w), for every s, w such that s · w ≥ n (where Õ hides a polylog(m, n) factor). We also prove a lower bound, showing that every MA streaming algorithm for the Distinct Elements problem that uses s bits of space and a proof of size w satisfies s · w = Ω(n). As part of the proof of the lower bound for the Distinct Elements problem, we show a new lower bound of Ω(√n) on the MA communication complexity of the Gap Hamming Distance problem, and prove its tightness.
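In symbols, with m the stream length and n the alphabet size as above:

\[
  \text{AM upper bound: } \tilde{O}(s) \text{ space, } \tilde{O}(w) \text{ proof, for any } s \cdot w \ge n;
  \qquad
  \text{MA lower bound: } s \cdot w = \Omega(n),
\]

together with the tight bound \(\mathrm{MA}(\mathrm{GHD}_n) = \Theta(\sqrt{n})\) for Gap Hamming Distance.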
On P vs. NP and Geometric Complexity Theory (Dedicated to Sri Ramakrishna)
, 2011
"... This article gives an overview of the geometric complexity theory (GCT) approach towards the P vs. NP and related problems focussing on its main complexity theoretic results. These are: (1) two concrete lower bounds, which are currently the best known lower bounds in the context of the P vs. NC and ..."
Cited by 8 (0 self)

Abstract
This article gives an overview of the geometric complexity theory (GCT) approach towards the P vs. NP and related problems, focusing on its main complexity-theoretic results. These are: (1) two concrete lower bounds, which are currently the best known lower bounds in the context of the P vs. NC and permanent vs. determinant problems, (2) the Flip Theorem, which formalizes the self-referential paradox in the P vs. NP problem, and (3) the Decomposition Theorem, which decomposes the arithmetic P vs. NP and permanent vs. determinant problems into subproblems without self-referential difficulty, consisting of positivity hypotheses in algebraic geometry and representation theory and easier hardness hypotheses.
Communication Complexity With Synchronized Clocks
"... Abstract—We consider two natural extensions of the communication complexity model that are inspired by distributed computing. In both models, two parties are equipped with synchronized discrete clocks, and we assume that a bit can be sent from one party to another in one step of time. Both models al ..."
Cited by 6 (0 self)

Abstract
We consider two natural extensions of the communication complexity model that are inspired by distributed computing. In both models, two parties are equipped with synchronized discrete clocks, and we assume that a bit can be sent from one party to another in one step of time. Both models allow implicit communication, by allowing the parties to choose whether to send a bit during each step. We examine tradeoffs between time (total number of possible time steps elapsed) and communication (total number of bits actually sent). In the synchronized bit model, we measure the total number of bits sent between the two parties (e.g., email). We show that, in this model, communication costs can differ from the usual communication complexity by a factor roughly logarithmic in the number of time steps, and no more than such a factor. In the synchronized connection model, both parties choose whether or not to open their end of the communication channel at each time step. An exchange of bits takes place only when both ends of the channel are open (e.g., instant messaging), in which case we say that a connection has occurred. If a party does not open its end, it does not learn whether the other party opened its channel. When we restrict the number of time steps to be polynomial in the input length, and the number of connections to be polylogarithmic in the input length, the class of problems solved with this model turns out to be roughly equivalent to the communication complexity analogue of P^NP ([BFS86]). Using our new model, we give what we believe to be the first lower bounds for this class, separating P^NP from Σ_2 ∩ Π_2 in the communication complexity setting. Although these models are both quite natural, they have unexpected power, and lead to a refinement of problem classifications in communication complexity.
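A hedged formalization of the synchronized-bit-model tradeoff: writing T for the number of available time steps, C(f) for the usual communication complexity, and C_bit^T(f) for the bit cost in the synchronized bit model (notation introduced here), the statement above reads

\[
  \frac{C(f)}{O(\log T)} \;\le\; C^{T}_{\mathrm{bit}}(f) \;\le\; C(f),
\]

i.e. silent time steps can save at most a factor logarithmic in T, and a saving of that order is achievable.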
Non-Interactive Proofs of Proximity
, 2013
"... We initiate a study of noninteractive proofs of proximity. These proofsystems consist of a verifier that wishes to ascertain the validity of a given statement, using a short (sublinear length) explicitly given proof, and a sublinear number of queries to its input. Since the verifier cannot even re ..."
Cited by 6 (1 self)

Abstract
We initiate a study of non-interactive proofs of proximity. These proof systems consist of a verifier that wishes to ascertain the validity of a given statement, using a short (sublinear length) explicitly given proof, and a sublinear number of queries to its input. Since the verifier cannot even read the entire input, we only require it to reject inputs that are far from being valid. Thus, the verifier is only assured of the proximity of the statement to a correct one. Such proof systems can be viewed as the NP (or more accurately MA) analogue of property testing. We explore both the power and limitations of non-interactive proofs of proximity. We show that such proof systems can be exponentially stronger than property testers, but are exponentially weaker than the interactive proofs of proximity studied by Rothblum, Vadhan and Wigderson (STOC 2013). In addition, we show a natural problem that has a full and (almost) tight multiplicative tradeoff between the length of the proof and the verifier’s query complexity. On the negative side, we also show that there exist properties for which even a linearly long (non-interactive) proof of proximity cannot significantly reduce the query complexity.
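For the tradeoff just mentioned, writing p for the proof length and q for the verifier’s query complexity (symbols introduced here), a full multiplicative tradeoff of this kind has, up to lower-order factors, the form

\[
  p \cdot q \;=\; \tilde{\Theta}(n),
\]

so any multiplicative decrease in proof length must be paid for by a corresponding increase in queries, and vice versa.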