Results 1–10 of 49
The Complexity Of Propositional Proofs
 Bulletin of Symbolic Logic
, 1995
"... This paper of Tseitin is a landmark as the first to give nontrivial lower bounds for propositional proofs; although it predates the first papers on ..."
Abstract

Cited by 109 (3 self)
This paper of Tseitin is a landmark as the first to give nontrivial lower bounds for propositional proofs; although it predates the first papers on …
PP is Closed Under Intersection
 Journal of Computer and System Sciences
, 1991
"... In his seminal paper on probabilistic Turing machines, Gill [13] asked whether the class PP is closed under intersection and union. We give a positive answer to this question. We also show that PP is closed under a variety of polynomialtime truthtable reductions. Consequences in complexity theory ..."
Abstract

Cited by 95 (9 self)
In his seminal paper on probabilistic Turing machines, Gill [13] asked whether the class PP is closed under intersection and union. We give a positive answer to this question. We also show that PP is closed under a variety of polynomial-time truth-table reductions. Consequences in complexity theory include the definite collapse and (assuming P ≠ PP) separation of certain query hierarchies over PP. Similar techniques allow us to combine several threshold gates into a single threshold gate. Consequences in the study of circuits include the simulation of circuits with a small number of threshold gates by circuits having only a single threshold gate at the root (perceptrons), and a lower bound on the number of threshold gates needed to compute the parity function.

1. Introduction

The class PP was defined in 1972 by John Gill [13, 14] and independently by Janos Simon [26] in 1974. PP is the class of languages accepted by a polynomial-time bounded nondeterministic Turing machine t…
On the compilability and expressive power of propositional planning formalisms
, 1998
"... The recent approaches of extending the GRAPHPLAN algorithm to handle more expressive planning formalisms raise the question of what the formal meaning of “expressive power ” is. We formalize the intuition that expressive power is a measure of how concisely planning domains and plans can be expressed ..."
Abstract

Cited by 88 (10 self)
The recent approaches of extending the GRAPHPLAN algorithm to handle more expressive planning formalisms raise the question of what the formal meaning of “expressive power” is. We formalize the intuition that expressive power is a measure of how concisely planning domains and plans can be expressed in a particular formalism by introducing the notion of “compilation schemes” between planning formalisms. Using this notion, we analyze the expressiveness of a large family of propositional planning formalisms, ranging from basic STRIPS to a formalism with conditional effects, partial state specifications, and propositional formulae in the preconditions. One of the results is that conditional effects cannot be compiled away if plan size should grow only linearly but can be compiled away if we allow for polynomial growth of the resulting plans. This result confirms that the recently proposed extensions to the GRAPHPLAN algorithm concerning conditional effects are optimal with respect to the “compilability” framework. Another result is that general propositional formulae cannot be compiled into conditional effects if the plan size should be preserved linearly. This implies that allowing general propositional formulae in preconditions and effect conditions adds another level of difficulty in generating a plan.
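The baseline against which such compilability results are measured is the naive compilation of conditional effects: split each operator into one unconditional operator per subset of its effect conditions, which is exponential in the number of conditions. A minimal sketch of that naive scheme (all names and the literal encoding are hypothetical, not from the paper):

```python
from itertools import combinations

def compile_away_conditional_effects(pre, cond_effects):
    """Split a STRIPS operator with conditional effects into 2^c plain
    operators, one per subset of firing conditions. `pre` is a frozenset
    of precondition literals; `cond_effects` is a list of
    (condition, effect) literal pairs; negation is a '-' prefix."""
    ops = []
    c = len(cond_effects)
    for r in range(c + 1):
        for fired in combinations(range(c), r):
            new_pre = set(pre)
            effects = set()
            for i, (cond, eff) in enumerate(cond_effects):
                if i in fired:
                    new_pre.add(cond)        # condition must hold ...
                    effects.add(eff)         # ... so the effect fires
                else:
                    new_pre.add('-' + cond)  # condition must fail
            ops.append((frozenset(new_pre), frozenset(effects)))
    return ops
```

The 2^c blow-up in the number of operators is what makes this scheme preserve plan length while failing to be a polynomial-size compilation; the paper's point is that a polynomial-growth compilation of the *plans* is possible while a linear one is not.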
A First-Order Isomorphism Theorem
 SIAM JOURNAL ON COMPUTING
, 1993
"... We show that for most complexity classes of interest, all sets complete under firstorder projections are isomorphic under firstorder isomorphisms. That is, a very restricted version of the BermanHartmanis Conjecture holds. ..."
Abstract

Cited by 25 (6 self)
We show that for most complexity classes of interest, all sets complete under first-order projections are isomorphic under first-order isomorphisms. That is, a very restricted version of the Berman–Hartmanis Conjecture holds.
On approximate majority and probabilistic time
 in Proceedings of the 22nd IEEE Conference on Computational Complexity
, 2007
"... We prove new results on the circuit complexity of Approximate Majority, which is the problem of computing Majority of a given bit string whose fraction of 1’s is bounded away from 1/2 (by a constant). We then apply these results to obtain new relationships between probabilistic time, BPTime (t), and ..."
Abstract

Cited by 25 (8 self)
We prove new results on the circuit complexity of Approximate Majority, which is the problem of computing Majority of a given bit string whose fraction of 1’s is bounded away from 1/2 (by a constant). We then apply these results to obtain new relationships between probabilistic time, BPTime(t), and alternating time, Σ_{O(1)}Time(t). Our main results are the following:

1. We prove that 2^{n^{0.1}}-size depth-3 circuits for Approximate Majority on n bits have bottom fan-in Ω(log n). As a corollary we obtain that BPTime(t) ⊄ Σ2Time(o(t^2)) with respect to some oracle. This complements the result that BPTime(t) ⊆ Σ2Time(t^2 · polylog t) with respect to every oracle (Sipser and Gács, STOC ’83; Lautemann, IPL ’83).

2. We prove that Approximate Majority is computable by uniform polynomial-size circuits of depth 3. Prior to our work, the only known polynomial-size depth-3 circuits for Approximate Majority were non-uniform (Ajtai, Ann. Pure Appl. Logic ’83). We also prove that BPTime(t) ⊆ Σ3Time(t · polylog t). This complements our results in (1).

3. We prove new lower bounds for solving QSAT_3 ∈ Σ3Time(n · polylog n) on probabilistic computational models. In particular, we prove that solving QSAT_3 requires time n^{1+Ω(1)} on Turing machines with a random-access input tape and a sequential-access work tape that is initialized with random bits. No lower bound was previously known on this model (for a function computable in linear space).

∗ Author supported by NSF grant CCR-0324906.
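The promise that the fraction of 1’s is bounded away from 1/2 is what separates Approximate Majority from exact Majority: under that promise, even naive random sampling decides the answer with high probability. An illustrative sketch of that observation (not the paper’s depth-3 circuit construction):

```python
import math
import random

def approximate_majority(bits, eps=0.1, delta=1e-6, seed=0):
    """Decide Majority under the promise that the fraction of 1s is
    either <= 1/2 - eps or >= 1/2 + eps. Samples O(log(1/delta)/eps^2)
    positions; by a Chernoff/Hoeffding bound the answer is wrong with
    probability at most delta."""
    rng = random.Random(seed)
    m = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    ones = sum(rng.choice(bits) for _ in range(m))
    return 1 if 2 * ones > m else 0
```

The paper’s interest is, of course, in deterministic small-depth circuits for this promise problem, where the sampling argument above does not directly apply.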
Randomness-efficient sampling within NC1
 Computational Complexity
"... We construct a randomnessefficient averaging sampler that is computable by uniform constantdepth circuits with parity gates (i.e., in uniform AC0[⊕]). Our sampler matches the parameters achieved by random walks on constantdegree expander graphs, allowing us to apply a variety expanderbased techn ..."
Abstract

Cited by 14 (0 self)
We construct a randomness-efficient averaging sampler that is computable by uniform constant-depth circuits with parity gates (i.e., in uniform AC0[⊕]). Our sampler matches the parameters achieved by random walks on constant-degree expander graphs, allowing us to apply a variety of expander-based techniques within NC1. For example, we obtain the following results:

• Randomness-efficient error reduction for uniform probabilistic NC1, TC0, AC0[⊕] and AC0: any function computable by uniform probabilistic circuits with error 1/3 using r random bits is computable by uniform probabilistic circuits with error δ using O(r + log(1/δ)) random bits.

• An optimal explicit ε-biased generator in AC0[⊕]: there exists a 2^{−Ω(n)}-biased generator G : {0,1}^{O(n)} → {0,1}^{2^n} for which poly(n)-size uniform AC0[⊕] circuits can compute G(s)_i given (s, i) ∈ {0,1}^{O(n)} × {0,1}^n. This resolves a question raised by Gutfreund and Viola (Random 2004).

• Uniform BP·AC0 ⊆ uniform AC0/O(n).

Our sampler is based on the zig-zag graph product of Reingold, Vadhan and Wigderson (Annals of Math 2002), and as part of our analysis we give an elementary proof of a generalization of Gillman’s Chernoff Bound for Expander Walks (FOCS 1998).
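The error-reduction statement is the standard majority-vote amplification; what the sampler buys is the *randomness* cost, O(r + log(1/δ)) bits instead of the O(r · log(1/δ)) that independent repetitions consume. A toy illustration of the naive baseline (the algorithm names here are hypothetical):

```python
import random
from collections import Counter

def amplify(alg, x, reps, seed=0):
    """Naive error reduction: run a 1/3-error randomized algorithm
    `reps` times on independent randomness and take the majority vote.
    This spends reps * r random bits; the paper's expander-walk sampler
    achieves comparable error with only O(r + log(1/delta)) bits."""
    rng = random.Random(seed)
    votes = Counter(alg(x, rng) for _ in range(reps))
    return votes.most_common(1)[0][0]

def noisy_equals_zero(x, rng):
    """Toy 1/3-error algorithm: decides whether x == 0, erring with
    probability 1/3 on each run."""
    correct = (x == 0)
    return correct if rng.random() < 2 / 3 else not correct
```

With `reps` repetitions the majority vote errs with probability exponentially small in `reps`, by the same Chernoff-type bound the paper generalizes to expander walks.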
Protecting Circuits from Leakage: the Computationally-Bounded and Noisy Cases
, 2010
"... Abstract. Physical computational devices leak sidechannel information that may, and often does, reveal secret internal states. We present a general transformation that compiles any circuit into a new, functionally equivalent circuit which is resilient against welldefined classes of leakage. Our co ..."
Abstract

Cited by 13 (1 self)
Physical computational devices leak side-channel information that may, and often does, reveal secret internal states. We present a general transformation that compiles any circuit into a new, functionally equivalent circuit which is resilient against well-defined classes of leakage. Our construction requires a small, stateless and computation-independent leak-proof component that draws random elements from a fixed distribution. In essence, we reduce the problem of shielding arbitrarily complex circuits to the problem of shielding a single, simple component. Our approach is based on modeling the adversary as a powerful observer that inspects the device via a limited measurement apparatus. We allow the apparatus to access all the bits of the computation (except those inside the leak-proof component) and the amount of leaked information to grow unbounded over time. However, we assume that the apparatus is limited either in its computational ability (namely, it lacks the ability to decode certain linear encodings and outputs a limited number of bits per iteration), or in its precision (each observed bit is flipped with some probability). While our results apply in general to such leakage classes, in particular we obtain security against:

– Constant-depth circuit leakage, where the measurement apparatus can be implemented by an AC0 circuit (namely, a constant-depth circuit composed of NOT gates and unbounded fan-in AND and OR gates), or an ACC0[p] circuit (which is the same as AC0, except that it also uses MOD_p gates), which outputs a limited number of bits.

– Noisy leakage, where the measurement apparatus reveals all the bits of the state of the circuit, perturbed by independent binomial noise. Namely, each bit of the computation is flipped with probability p, and remains unchanged with probability 1 − p.
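The binomial-noise measurement described in the last bullet is simple enough to state directly in code. A sketch of the noise channel alone (illustrative; the paper’s contribution is the circuit transformation, not this channel):

```python
import random

def noisy_leakage(state_bits, p, seed=0):
    """Binomial-noise measurement: reveal every bit of the circuit
    state, flipping each bit independently with probability p and
    leaving it unchanged with probability 1 - p."""
    rng = random.Random(seed)
    return [b ^ (rng.random() < p) for b in state_bits]
```

With p = 0 the adversary sees the exact state and no security is possible; the paper's guarantee kicks in for constant noise rates bounded away from 0.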
Averagecase complexity of detecting cliques
, 2010
"... The computational problem of testing whether a graph contains a complete subgraph of size k is among the most fundamental problems studied in theoretical computer science. This thesis is concerned with proving lower bounds for kCLIQUE, as this problem is known. Our results show that, in certain mod ..."
Abstract

Cited by 12 (1 self)
The computational problem of testing whether a graph contains a complete subgraph of size k is among the most fundamental problems studied in theoretical computer science. This thesis is concerned with proving lower bounds for k-CLIQUE, as this problem is known. Our results show that, in certain models of computation, solving k-CLIQUE in the average case requires Ω(n^{k/4}) resources (moreover, k/4 is tight). Here the models of computation are bounded-depth Boolean circuits and unbounded-depth monotone circuits, the complexity measure is the number of gates, and the input distributions are random graphs with an appropriate density of edges. Such random graphs (the well-studied Erdős–Rényi random graphs) are widely believed to be a source of computationally hard instances for clique problems (as Karp suggested in 1976). Our results are the first unconditional lower bounds supporting this hypothesis. For bounded-depth Boolean circuits, our average-case hardness result significantly improves the previous worst-case lower bounds of Ω(n^{k/poly(d)}) for depth-d circuits. In particular, our lower bound of Ω(n^{k/4}) has no noticeable dependence on d for circuits of depth …
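For reference, the trivial upper bound these lower bounds are measured against: brute force over all C(n, k) vertex subsets decides k-CLIQUE in O(n^k) time, on Erdős–Rényi inputs or any other. An illustrative sketch:

```python
import itertools
import random

def has_k_clique(n, edges, k):
    """Decide k-CLIQUE by brute force: test every k-subset of the
    n vertices, O(n^k) subsets in the worst case."""
    adj = set(frozenset(e) for e in edges)
    return any(
        all(frozenset((u, v)) in adj
            for u, v in itertools.combinations(s, 2))
        for s in itertools.combinations(range(n), k)
    )

def erdos_renyi(n, p, seed=0):
    """Sample an Erdos-Renyi graph G(n, p): each of the C(n, 2)
    possible edges is present independently with probability p."""
    rng = random.Random(seed)
    return [(u, v) for u, v in itertools.combinations(range(n), 2)
            if rng.random() < p]
```

The thesis shows that on G(n, p) at the critical edge density, bounded-depth circuits cannot do substantially better than exponent k/4, even on average.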
A (de)constructive approach to program checking
 Electronic Colloquium on Computational Complexity
, 2007
"... Program checking, program selfcorrecting and program selftesting were pioneered by [Blum and Kannan] and [Blum, Luby and Rubinfeld] in the mid eighties as a new way to gain confidence in software, by considering program correctness on an input by input basis rather than full program verification. ..."
Abstract

Cited by 12 (2 self)
Program checking, program self-correcting and program self-testing were pioneered by [Blum and Kannan] and [Blum, Luby and Rubinfeld] in the mid-eighties as a new way to gain confidence in software, by considering program correctness on an input-by-input basis rather than full program verification. Work in the field of program checking focused on designing, for specific functions, checkers, testers and correctors that are more efficient than the best program known for the function. These were designed utilizing specific algebraic, combinatorial or completeness properties of the function at hand. In this work we introduce a novel composition methodology for improving the efficiency of program checkers. We use this approach to design a variety of program checkers that are provably more efficient, in terms of circuit depth, than the optimal program for computing the function being checked. Extensions of this methodology for the cases of program testers and correctors are also presented. In particular, we show:

• For all i ≥ 1, every language in RNC^i (that is NC1-hard under NC0-reductions) has a program checker in RNC^{i−1}.
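The checker idea is easiest to see on the textbook example (sorting, not a construction from this paper): verify the program’s answer on the given input rather than proving the program correct on all inputs.

```python
from collections import Counter

def checked_sort(program, xs):
    """Blum-style checker for sorting: accept the program's output on
    this particular input only if it is ordered and is a multiset
    permutation of the input; otherwise flag this run as buggy."""
    ys = program(list(xs))
    ordered = all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))
    permuted = Counter(xs) == Counter(ys)
    return ys if ordered and permuted else "BUGGY"
```

This naive check is as expensive as sorting itself; the field’s goal, and this paper’s contribution for circuit depth, is checkers provably cheaper than the function being checked.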
Some Problems Involving Razborov–Smolensky Polynomials
, 1991
"... Several recent results in circuit complexity theory have used a representation of Boolean functions by polynomials over finite fields. Our current inability to extend these results to superficially similar situations may be related to properties of these polynomials which do not extend to polyno ..."
Abstract

Cited by 11 (2 self)
Several recent results in circuit complexity theory have used a representation of Boolean functions by polynomials over finite fields. Our current inability to extend these results to superficially similar situations may be related to properties of these polynomials which do not extend to polynomials over general finite rings or finite abelian groups. Here we pose a number of conjectures on the behavior of such polynomials over rings and groups, and present some partial results toward proving them.

1. Introduction

1.1. Polynomials and Circuit Complexity

The representation of Boolean functions as polynomials over the finite field Z_2 = {0, 1} dates back to early work in switching theory [?]. A formal language L can be identified with the family of functions f_i : Z_2^i → Z_2, where f_i(x_1, …, x_i) = 1 iff x_1 ⋯ x_i ∈ L. Each of these functions can be written as a polynomial in the variables x_1, …, x_i. We can consider algebraic formulas or circuits with…
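The representation the introduction describes is fully concrete: every Boolean function on n variables has a unique multilinear polynomial over Z_2, and its coefficients are recovered by a subset-XOR (Möbius) transform of the truth table. An illustrative sketch:

```python
from itertools import product

def gf2_polynomial(f, n):
    """Return the unique multilinear polynomial of f : {0,1}^n -> {0,1}
    over Z_2, as the set of monomials with coefficient 1. A monomial is
    the frozenset of variable indices it multiplies; the coefficient of
    monomial S is the XOR of f over all inputs supported inside S."""
    monomials = set()
    for s in product((0, 1), repeat=n):
        S = frozenset(i for i in range(n) if s[i])
        c = 0
        for x in product((0, 1), repeat=n):
            if all(x[i] <= s[i] for i in range(n)):
                c ^= f(*x)
        if c:
            monomials.add(S)
    return monomials
```

For example, OR(x, y) comes out as x + y + xy and parity as x + y, matching the classical switching-theory identities; the paper's conjectures concern what happens when Z_2 is replaced by a general finite ring or abelian group, where this uniqueness breaks down.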