Results 1–10 of 24
Non-uniform ACC circuit lower bounds
, 2010
Abstract

Cited by 19 (4 self)
The class ACC consists of circuit families with constant depth over unbounded fan-in AND, OR, NOT, and MOD_m gates, where m > 1 is an arbitrary constant. We prove:
• NTIME[2^n] does not have non-uniform ACC circuits of polynomial size. The size lower bound can be slightly strengthened to quasi-polynomials and other less natural functions.
• E^NP, the class of languages recognized in 2^{O(n)} time with an NP oracle, does not have non-uniform ACC circuits of size 2^{n^{o(1)}}. The lower bound gives an exponential size-depth tradeoff: for every d there is a δ > 0 such that E^NP does not have depth-d ACC circuits of size 2^{n^δ}.
Previously, it was not known whether EXP^NP had depth-3 polynomial-size circuits made out of only MOD_6 gates. The high-level strategy is to design faster algorithms for the circuit satisfiability problem over ACC circuits, then prove that such algorithms entail the above lower bounds. The algorithm combines known properties of ACC with fast rectangular matrix multiplication and dynamic programming, while the second step requires a subtle strengthening of the author's prior work [STOC '10]. Supported by the Josef Raviv Memorial Fellowship.
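For concreteness, the gate types that define ACC can be sketched as follows. This is a toy illustration of the gate semantics only (the example circuit wiring is invented, not taken from the paper):

```python
# Gate semantics for ACC circuits: unbounded fan-in AND, OR, NOT, and
# MOD_m gates, where a MOD_m gate outputs 1 iff the number of 1-inputs
# is divisible by m. The example circuit below is purely illustrative.

def and_gate(bits):
    return int(all(bits))

def or_gate(bits):
    return int(any(bits))

def not_gate(bit):
    return 1 - bit

def mod_gate(m, bits):
    # MOD_m: 1 iff the count of 1-inputs is congruent to 0 (mod m)
    return int(sum(bits) % m == 0)

# An invented depth-2 example: MOD_6(AND(x1, x2), OR(x2, x3), NOT(x1))
def tiny_acc_circuit(x1, x2, x3):
    return mod_gate(6, [and_gate([x1, x2]),
                        or_gate([x2, x3]),
                        not_gate(x1)])
```

Real ACC circuits have constant depth but unbounded fan-in at every gate, which is what makes proving size lower bounds against them hard.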
On the possibility of faster SAT algorithms
Abstract

Cited by 10 (1 self)
We describe reductions from the problem of determining the satisfiability of Boolean CNF formulas (CNF-SAT) to several natural algorithmic problems. We show that attaining any of the following bounds would improve the state of the art in algorithms for SAT:
• an O(n^{k−ε}) algorithm for k-Dominating Set, for any k ≥ 3,
• a (computationally efficient) protocol for 3-party set disjointness with o(m) bits of communication,
• an n^{o(d)} algorithm for d-SUM,
• an O(n^{2−ε}) algorithm for 2-SAT with m = n^{1+o(1)} clauses, where two clauses may have unrestricted length, and
• an O((n + m)^{k−ε}) algorithm for Horn-SAT with k unrestricted-length clauses.
One may interpret our reductions as new attacks on the complexity of SAT, or as sharp lower bounds conditional on the exponential hardness of SAT.
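As context for the first bullet, the naive baseline for k-Dominating Set tries every k-subset of vertices and so runs in roughly O(n^k) time; the reduction above says that beating this by any polynomial factor, for any k ≥ 3, would yield faster CNF-SAT algorithms. A minimal sketch of that baseline (the graph representation and names are our own, not from the paper):

```python
# Naive O(n^k)-time check for a k-Dominating Set: a set D of k vertices
# such that every vertex is in D or adjacent to a vertex of D.
from itertools import combinations

def has_k_dominating_set(adj, k):
    """adj: dict mapping each vertex to the set of its neighbours."""
    vertices = set(adj)
    for cand in combinations(adj, k):   # ~ n^k candidate sets
        dominated = set(cand)           # a vertex dominates itself...
        for v in cand:
            dominated |= adj[v]         # ...and its neighbours
        if dominated == vertices:
            return True
    return False
```

For example, the path a–b–c–d has no single dominating vertex, but {b, c} dominates it, so the answer is False for k = 1 and True for k = 2.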
Annotations in Data Streams
, 2009
Abstract

Cited by 8 (5 self)
The central goal of data stream algorithms is to process massive streams of data using sublinear storage space. Motivated by work in the database community on outsourcing database and data stream processing, we ask whether the space usage of such algorithms can be further reduced by enlisting a more powerful “helper” who can annotate the stream as it is read. We do not wish to blindly trust the helper, so we require that the algorithm be convinced of having computed a correct answer. We show upper bounds that achieve a non-trivial tradeoff between the amount of annotation used and the space required to verify it. We also prove lower bounds on such tradeoffs, often nearly matching the upper bounds, via notions related to Merlin-Arthur communication complexity. Our results cover the classic data stream problems of selection, frequency moments, and fundamental graph problems such as triangle-freeness and connectivity. Our work is also part of a growing trend — including recent studies of multi-pass streaming, read/write streams, and randomly ordered streams — of asking more complexity-theoretic questions about data stream processing. It is a recognition that, in addition to practical relevance, the data stream model raises many interesting theoretical questions in its own right.
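One standard low-space tool in this line of work is a polynomial fingerprint over a prime field: the verifier keeps O(1) field elements while reading the stream, then checks that the helper's claimed frequency vector hashes to the same value. The sketch below is our own illustration of the idea, not necessarily the construction used in the paper:

```python
# Polynomial fingerprinting: map a frequency vector (f_0, ..., f_{n-1})
# to sum_i f_i * r^i mod P at a random point r. Two distinct vectors
# collide with probability at most n/P (degree bound on the difference).
import random

P = (1 << 61) - 1  # a Mersenne prime; we work in the field Z_P

def fingerprint(freqs, r):
    """Hash a claimed frequency vector {item: count} at the point r."""
    return sum(f * pow(r, item, P) for item, f in freqs.items()) % P

def verify_frequencies(stream, claimed_freqs):
    r = random.randrange(1, P)      # verifier's secret random point
    acc = 0
    for item in stream:             # one pass, O(1) field elements
        acc = (acc + pow(r, item, P)) % P
    # equal fingerprints => the claim is correct w.p. >= 1 - n/P
    return acc == fingerprint(claimed_freqs, r)
```

Here the “annotation” is the claimed frequency vector itself; the real protocols in this literature trade annotation length against verifier space far more cleverly, but the fingerprint is the basic trust mechanism.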
On the Power of Small-Depth Computation
, 2009
Abstract

Cited by 5 (4 self)
In this work we discuss selected topics on small-depth computation, presenting a few unpublished proofs along the way. The four chapters contain:
1. A unified treatment of the challenge of exhibiting explicit functions that have small correlation with low-degree polynomials over {0, 1}.
2. An unpublished proof that small bounded-depth circuits (AC^0) have exponentially small correlation with the parity function. The proof is due to Klivans and Vadhan; it builds upon and simplifies previous ones.
3. Valiant’s simulation of log-depth linear-size circuits of fan-in 2 by subexponential-size circuits of depth 3 and unbounded fan-in. To our knowledge, a proof of this result has never appeared in full.
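To make the notion in chapter 2 concrete: the correlation of a Boolean function f with parity is |E_x[(−1)^{f(x)+PARITY(x)}]| over uniform x ∈ {0,1}^n. The tiny brute-force check below (our own illustration, not from the text) shows, for instance, that OR, a single depth-1 gate, already has correlation only 2/2^n with parity:

```python
# Brute-force correlation of f with parity over all of {0,1}^n.
# corr(f) = | E_x [ (-1)^{f(x) + PARITY(x)} ] |, x uniform.
from itertools import product

def correlation_with_parity(f, n):
    total = sum((-1) ** (f(x) + sum(x) % 2)
                for x in product((0, 1), repeat=n))
    return abs(total) / 2 ** n

def or_n(x):
    return int(any(x))
```

Parity itself has correlation 1, while for OR the only surviving terms are x = 0 and a single uncancelled term from the rest, giving 2/2^n.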
On basing ZK ≠ BPP on the hardness of PAC learning
 In Proc. CCC ’09
, 2009
Abstract

Cited by 4 (3 self)
Learning is a central task in computer science, and there are various formalisms for capturing the notion. One important model studied in computational learning theory is the PAC model of Valiant (CACM 1984). On the other hand, in cryptography the notion of “learning nothing” is often modelled by the simulation paradigm: in an interactive protocol, a party learns nothing if it can produce a transcript of the protocol by itself that is indistinguishable from what it gets by interacting with other parties. The most famous example of this paradigm is zero-knowledge proofs, introduced by Goldwasser, Micali, and Rackoff (SICOMP 1989). Applebaum et al. (FOCS 2008) observed that a theorem of Ostrovsky and Wigderson (ISTCS 1993), combined with the transformation of one-way functions to pseudorandom functions (Håstad et al. SICOMP 1999, Goldreich et al. J. ACM 1986), implies that if there exist non-trivial languages with zero-knowledge arguments, then no efficient algorithm can PAC-learn polynomial-size circuits. They also prove a weak reverse implication: if a certain non-standard learning task is hard, then zero knowledge is non-trivial. This motivates the question we explore here: can one prove that hardness of PAC learning is equivalent to non-triviality of zero knowledge?
Why philosophers should care about computational complexity
 In Computability: Gödel, Turing, Church, and beyond (eds
, 2012
Abstract

Cited by 3 (0 self)
One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay, I offer a detailed case that one would be wrong. In particular, I argue that computational complexity theory — the field that studies the resources (such as time, space, and randomness) needed to solve computational problems — leads to new perspectives on the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume’s problem of induction, Goodman’s grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest. I end by discussing ...
Arthur-Merlin Streaming Complexity
, 2013
Abstract

Cited by 2 (1 self)
We study the power of Arthur-Merlin probabilistic proof systems in the data stream model. We show a canonical AM streaming algorithm for a wide class of data stream problems. The algorithm offers a tradeoff between the length of the proof and the space complexity that is needed to verify it. As an application, we give an AM streaming algorithm for the Distinct Elements problem. Given a data stream of length m over an alphabet of size n, the algorithm uses Õ(s) space and a proof of size Õ(w), for every s, w such that s · w ≥ n (where Õ hides a polylog(m, n) factor). We also prove a lower bound, showing that every MA streaming algorithm for the Distinct Elements problem that uses s bits of space and a proof of size w satisfies s · w = Ω(n). As part of the proof of the lower bound for the Distinct Elements problem, we show a new lower bound of Ω(√n) on the MA communication complexity of the Gap Hamming Distance problem, and prove its tightness.
Fixed-Polynomial Size Circuit Bounds
Abstract

Cited by 2 (0 self)
Abstract—In 1982, Kannan showed that Σ^P_2 does not have n^k-sized circuits for any k. Do smaller classes also admit such circuit lower bounds? Despite several improvements of Kannan’s result, we still cannot prove that P^NP does not have linear-size circuits. Work of Aaronson and Wigderson provides strong evidence – the “algebrization” barrier – that current techniques have inherent limitations in this respect. We explore questions about fixed-polynomial size circuit lower bounds around and beyond the algebrization barrier. We find several connections, including:
• The following are equivalent:
– NP is in SIZE(n^k) (has O(n^k)-size circuit families) for some k;
– for each c, P^{NP[n^c]} is in SIZE(n^k) for some k;
– ONP/1 is in SIZE(n^k) for some k, where ONP is the class of languages accepted obliviously by NP machines, with witnesses for “yes” instances depending only on the input length.
• For a large number of natural classes C and all k ≥ 1, C is in SIZE(n^k) if and only if C/1 ∩ P/poly is in SIZE(n^k).
• If there is a d such that MATIME(n) ⊆ NTIME(n^d), then P^NP does not have O(n^k)-size circuits for any k > 0.
• One cannot show n^2-size circuit lower bounds for ⊕P without new non-relativizing techniques. In particular, the proof that PP ̸⊆ SIZE(n^k) for all k relies on the (relativizing) result that P^PP ⊆ MA ⟹ PP ̸⊆ SIZE(n^k), and we give an oracle relative to which P^⊕P ⊆ MA and ⊕P ⊆ SIZE(n^2) both hold.
An Axiomatic Approach to Algebrization
Abstract

Cited by 1 (1 self)
Non-relativization of complexity issues can be interpreted as giving some evidence that these issues cannot be resolved by “black-box” techniques. In the early 1990’s, a sequence of important non-relativizing results was proved, mainly using algebraic techniques. Two approaches have been proposed to understand the power and limitations of these algebraic techniques: (1) Fortnow [12] gives a construction of a class of oracles which have a similar algebraic and logical structure, although they are arbitrarily powerful. He shows that many of the non-relativizing results proved using algebraic techniques hold for all such oracles, but he does not show, e.g., that the outcome of the “P vs. NP” question differs between different oracles in that class. (2) Aaronson and Wigderson [1] give definitions of algebrizing separations and ...
A Status Report on the P versus NP Question
Abstract

Cited by 1 (1 self)
We survey some of the history of the most famous open question in computing: the P versus NP question. We summarize some of the progress that has been made to date, and assess the current situation.