Results 1–10 of 43
A Separator Theorem for Planar Graphs
, 1977
Abstract

Cited by 397 (1 self)
Let G be any n-vertex planar graph. We prove that the vertices of G can be partitioned into three sets A, B, C such that no edge joins a vertex in A with a vertex in B, neither A nor B contains more than 2n/3 vertices, and C contains no more than 2√2·√n vertices. We exhibit an algorithm which finds such a partition A, B, C in O(n) time.
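The flavor of the theorem is easy to see on the simplest planar family, the s-by-s grid, where a single BFS level already yields a small balanced separator. The sketch below is our own illustration of the statement on grids (function names and the BFS-level heuristic are ours); it is not the paper's O(n) algorithm, which needs more machinery to handle arbitrary planar graphs.

```python
from collections import deque

def grid_graph(s):
    """Adjacency lists of an s-by-s grid graph: a planar graph on n = s*s vertices."""
    adj = {(i, j): [] for i in range(s) for j in range(s)}
    for i in range(s):
        for j in range(s):
            for ni, nj in ((i + 1, j), (i, j + 1)):
                if ni < s and nj < s:
                    adj[(i, j)].append((ni, nj))
                    adj[(ni, nj)].append((i, j))
    return adj

def bfs_level_separator(adj, root):
    """Return (A, B, C): C is the first BFS level at which at least n/3
    vertices lie at or below it; A holds the closer vertices, B the farther
    ones.  BFS levels guarantee that no edge joins A and B."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    n = len(adj)
    levels = {}
    for v, d in dist.items():
        levels.setdefault(d, []).append(v)
    below = 0
    for d in sorted(levels):
        if below + len(levels[d]) >= n / 3:
            A = [v for v, dv in dist.items() if dv < d]
            B = [v for v, dv in dist.items() if dv > d]
            return A, B, levels[d]
        below += len(levels[d])

adj = grid_graph(30)                      # n = 900 vertices
n = len(adj)
A, B, C = bfs_level_separator(adj, (0, 0))
B_set = set(B)
assert len(A) <= 2 * n / 3 and len(B) <= 2 * n / 3
assert not any(w in B_set for v in A for w in adj[v])  # no edge joins A and B
assert len(C) <= 2 * (2 * n) ** 0.5      # |C| within the 2*sqrt(2)*sqrt(n) bound
```

On the grid the middle BFS level has O(√n) vertices, so the bound holds with room to spare; the paper's contribution is achieving it for every planar graph.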
Provably efficient scheduling for languages with fine-grained parallelism
 IN PROC. SYMPOSIUM ON PARALLEL ALGORITHMS AND ARCHITECTURES
, 1995
Abstract

Cited by 82 (25 self)
Many high-level parallel programming languages allow for fine-grained parallelism. As in the popular work-time framework for parallel algorithm design, programs written in such languages can express the full parallelism in the program without specifying the mapping of program tasks to processors. A common concern in executing such programs is to schedule tasks to processors dynamically so as to minimize not only the execution time, but also the amount of space (memory) needed. Without careful scheduling, the parallel execution on p processors can use a factor of p or larger more space than a sequential implementation of the same program. This paper first identifies a class of parallel schedules that are provably efficient in both time and space. For any
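The space blow-up the abstract warns about is easy to reproduce in a toy model. The sketch below is our own simplified simulation, not the paper's scheduler: it traverses a complete binary task tree and measures the peak number of tasks held at once. A depth-first (sequential-style) order needs space proportional to the depth, while an eager breadth-first order ends up holding an entire level of tasks, exponential in the depth.

```python
def peak_live(depth, dfs):
    """Traverse a complete binary task tree of the given depth, counting the
    peak number of tasks simultaneously held.  Popping from the end (a stack)
    models a depth-first schedule; popping from the front (a queue) models an
    eager breadth-first schedule that spawns all available parallelism."""
    container = [(0, 0)]                  # (task id, level); the root task
    peak = 1
    while container:
        peak = max(peak, len(container))
        task, level = container.pop() if dfs else container.pop(0)
        if level < depth:                 # internal task spawns two subtasks
            container.append((2 * task + 1, level + 1))
            container.append((2 * task + 2, level + 1))
    return peak

assert peak_live(10, dfs=True) == 11      # depth-first: O(depth) live tasks
assert peak_live(10, dfs=False) == 1024   # breadth-first: 2**depth live tasks
```

The gap between 11 and 1024 live tasks on the same program is the kind of blow-up that provably space-efficient schedules are designed to avoid.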
A subexponential-time quantum algorithm for the dihedral hidden subgroup problem
, 2003
Abstract

Cited by 55 (0 self)
We present a quantum algorithm for the dihedral hidden subgroup problem (DHSP) with time and query complexity 2^(O(√log N)). In this problem an oracle computes a function f on the dihedral group D_N which is invariant under a hidden reflection in D_N. By contrast, the classical query complexity of DHSP is O(√N). The algorithm also applies to the hidden shift problem for an arbitrary finitely generated abelian group. The algorithm begins as usual with a quantum character transform, which in the case of D_N is essentially the abelian quantum Fourier transform. This yields the name of a group representation of D_N, which is not by itself useful, and a state in the representation, which is a valuable but indecipherable qubit. The algorithm proceeds by repeatedly pairing two unfavorable qubits to make a new qubit in a more favorable representation of D_N. Once the algorithm obtains certain target representations, direct measurements reveal the hidden subgroup.
Algebrization: A new barrier in complexity theory
 MIT Theory of Computing Colloquium
, 2007
Abstract

Cited by 30 (2 self)
Any proof of P ≠ NP will have to overcome two barriers: relativization and natural proofs. Yet over the last decade, we have seen circuit lower bounds (for example, that PP does not have linear-size circuits) that overcome both barriers simultaneously. So the question arises of whether there is a third barrier to progress on the central questions in complexity theory. In this paper we present such a barrier, which we call algebraic relativization or algebrization. The idea is that, when we relativize some complexity class inclusion, we should give the simulating machine access not only to an oracle A, but also to a low-degree extension of A over a finite field or ring. We systematically go through basic results and open problems in complexity theory to delineate the power of the new algebrization barrier. First, we show that all known non-relativizing results based on arithmetization (both inclusions such as IP = PSPACE and MIP = NEXP, and separations such as MA_EXP ⊄ P/poly) do indeed algebrize. Second, we show that almost all of the major open problems, including P versus NP, P versus RP, and NEXP versus P/poly, will require non-algebrizing techniques. In some cases algebrization seems to explain exactly why progress stopped where it did: for example, why we have superlinear circuit lower bounds for PromiseMA but not for NP. Our second set of results follows from lower bounds in a new model of algebraic query complexity, which we introduce in this paper and which is interesting in its own right. Some of our lower bounds use direct combinatorial and algebraic arguments, while others stem from a surprising connection between our model and communication complexity. Using this connection, we are also able to give an MA protocol for the Inner Product function with O(√n log n) communication (essentially matching a lower bound of Klauck), as well as a communication complexity conjecture whose truth would imply NL ≠ NP.
On the Complexity of SAT
, 1999
Abstract

Cited by 25 (1 self)
We show that nondeterministic time NTIME(n) is not contained in deterministic time n^(√2−ε) and polylogarithmic space, for any ε > 0. This implies that (infinitely often) satisfiability cannot be solved in time O(n^(√2−ε)) and polylogarithmic space. A similar result is presented for uniform circuits.
Size-Space Tradeoffs for Resolution
, 2002
Abstract

Cited by 22 (4 self)
We investigate tradeoffs between various important complexity measures such as size, space and width. We show examples of CNF formulas that have optimal proofs with respect to any one of these parameters, but where optimizing one parameter must cost an increase in the other. These results, the first of their kind, have implications for the efficiency (or rather, inefficiency) of some commonly used SAT solving heuristics. Our proof
A Guide for New Referees in Theoretical Computer Science
, 1994
Abstract

Cited by 19 (1 self)
Your success as a scientist will in part be measured by the quality of your research publications in high-quality journals and conference proceedings. Of the three classical rhetorical techniques, it is logos, rather than pathos or ethos, which is most commonly associated with scientific publications. In the mathematical sciences the paradigm for publication is to describe the mathematical proofs of propositions in sufficient detail to allow duplication by interested readers. Quality control is achieved by a system of peer review commonly referred to as refereeing. This guide is an attempt to distill the experience of the theoretical computer science community on the subject of refereeing into a convenient form which can be easily distributed to students and other inexperienced referees. Although aimed primarily at theoretical computer scientists, it contains advice which may be relevant to other mathematical sciences. It may also be of some use to new authors who are unfamiliar with the peer review process. However, it must be understood that this is not a guide on how to write papers. Authors who are interested in improving their writing skills can consult the "Further Reading" section. The main part of this guide is divided into nine sections. The first section describes the
Derandomizing Arthur-Merlin Games under Uniform Assumptions
 Computational Complexity
, 2000
Abstract

Cited by 15 (0 self)
We study how the nondeterminism versus determinism problem and the time versus space problem are related to the problem of derandomization. In particular, we show two ways of derandomizing the complexity class AM under uniform assumptions, which was only known previously under non-uniform assumptions [13, 14]. First, we prove that either AM = NP or it appears to any nondeterministic polynomial time adversary that NP is contained in deterministic subexponential time infinitely often. This implies that to any nondeterministic polynomial time adversary, the graph nonisomorphism problem appears to have subexponential-size proofs infinitely often, the first nontrivial derandomization of this problem without any assumption. Next, we show that either BPP = P, AM = NP, and PH ⊆ ⊕P all hold, or for any t(n) = 2^(O(n)), DTIME(t(n)) ⊆ DSPACE(t^ε(n)) infinitely often for any constant ε > 0. Similar tradeoffs also hold for a whole range of parameters. This improves previous results [17, 5] ...
Narrow proofs may be spacious: Separating space and width in resolution (Extended Abstract)
 REVISION 02, ELECTRONIC COLLOQUIUM ON COMPUTATIONAL COMPLEXITY (ECCC)
, 2005
Abstract

Cited by 13 (7 self)
The width of a resolution proof is the maximal number of literals in any clause of the proof. The space of a proof is the maximal number of clauses kept in memory simultaneously if the proof is only allowed to infer new clauses from clauses currently in memory. Both of these measures have previously been studied and related to the resolution refutation size of unsatisfiable CNF formulas. Also, the refutation space of a formula has been proven to be at least as large as the refutation width, but it has been open whether space can be separated from width or the two measures coincide asymptotically. We prove that there is a family of k-CNF formulas for which the refutation width in resolution is constant but the refutation space is nonconstant, thus solving a problem mentioned in several previous papers.
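Width and clause space can be computed mechanically for concrete refutations. The toy sketch below is ours, not from the paper: it refutes the trivially unsatisfiable CNF (x) ∧ (¬x ∨ y) ∧ (¬y) by resolution and reports both measures.

```python
def resolve(c1, c2, var):
    """Resolve clauses c1, c2 on var.  Clauses are frozensets of literals
    (variable, polarity); c1 must contain var positively, c2 negatively."""
    assert (var, True) in c1 and (var, False) in c2
    return frozenset(l for l in (c1 | c2) if l[0] != var)

# Unsatisfiable CNF: (x) AND (not-x OR y) AND (not-y)
c_x = frozenset({("x", True)})
c_xy = frozenset({("x", False), ("y", True)})
c_ny = frozenset({("y", False)})

c_y = resolve(c_x, c_xy, "x")            # derive (y)
empty = resolve(c_y, c_ny, "y")          # derive the empty clause: refutation
assert empty == frozenset()

proof = [c_x, c_xy, c_ny, c_y, empty]
width = max(len(c) for c in proof)       # largest clause anywhere in the proof
# Clause space: peak clauses in memory for one replay of the proof, erasing
# clauses as soon as they are no longer needed.
configs = [
    {c_x}, {c_x, c_xy}, {c_x, c_xy, c_y},    # load axioms, infer (y)
    {c_y}, {c_y, c_ny}, {c_y, c_ny, empty},  # erase, load (not-y), infer empty
]
space = max(len(m) for m in configs)
assert width == 2 and space == 3
```

Here width 2 and space 3 happen to be small together; the paper's point is that for some formula families no single refutation keeps both measures small.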
Towards an Optimal Separation of Space and Length in Resolution
 ELECTRONIC COLLOQUIUM ON COMPUTATIONAL COMPLEXITY
, 2008
Abstract

Cited by 13 (10 self)
Most state-of-the-art satisfiability algorithms today are variants of the DPLL procedure augmented with clause learning. The main bottleneck for such algorithms, other than the obvious one of time, is the amount of memory used. In the field of proof complexity, the resources of time and memory correspond to the length and space of resolution proofs. There has been a long line of research trying to understand these proof complexity measures, as well as relating them to the width of proofs, i.e., the size of the largest clause in the proof, which has been shown to be intimately connected with both length and space. While strong results have been proven for length and width, our understanding of space is still quite poor. For instance, it has remained open whether the fact that a formula is provable in short length implies that it is also provable in small space (which is the case for length versus width), or whether on the contrary these measures are completely unrelated in the sense that short proofs can be arbitrarily complex with respect to space. In this paper, we present some evidence that the true answer should be that the latter case holds and provide a possible roadmap for how such an optimal separation result could be obtained. We do this by proving a tight bound of Θ(√n) on the space needed for so-called pebbling contradictions over pyramid graphs of size n. This yields the first polynomial lower bound on space that is not a consequence of a corresponding lower bound on width, as well as an improvement of the weak separation of space and width in (Nordström 2006) from logarithmic to polynomial. Also, continuing the line of research initiated by (Ben-Sasson 2002) into tradeoffs between different proof complexity measures, we present a simplified proof of the recent length-space tradeoff result in (Hertel and Pitassi 2007), and show how our ideas can be used to prove a couple of other exponential tradeoffs in resolution.
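The pebbling quantity behind the Θ(√n) bound can be checked experimentally for small instances. The sketch below is our own illustration of black pebbling on pyramid graphs, not the paper's pebbling-contradiction formulas: the height-h pyramid has n = (h+1)(h+2)/2 vertices, and a simple recursive strategy peaks at h + 2 ≈ √(2n) pebbles.

```python
def pyramid_peak(h):
    """Black-pebble the apex of the height-h pyramid graph, where node (l, i)
    has predecessors (l-1, i) and (l-1, i+1) and level 0 holds the sources.
    A pebble may be placed on a node only once both predecessors carry
    pebbles.  Returns the peak number of pebbles on the graph for a simple
    recursive strategy (exponential time, but small space)."""
    pebbled, peak = set(), [0]

    def place(v):
        pebbled.add(v)
        peak[0] = max(peak[0], len(pebbled))

    def pebble(v):
        level, i = v
        if level == 0:                    # sources can be pebbled freely
            place(v)
            return
        left, right = (level - 1, i), (level - 1, i + 1)
        if left not in pebbled:
            pebble(left)
        if right not in pebbled:
            pebble(right)
        place(v)                          # both predecessors are pebbled
        pebbled.discard(left)             # free the predecessors' pebbles
        pebbled.discard(right)

    pebble((h, 0))
    return peak[0]

for h in range(1, 8):
    n = (h + 1) * (h + 2) // 2            # vertices in the height-h pyramid
    assert pyramid_peak(h) == h + 2       # grows like sqrt(2*n)
```

The √n-type growth of this pebble count is what the paper's tight bound captures on the proof complexity side.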