Results 1–10 of 40
Quantum computing, postselection, and probabilistic polynomial-time
, 2004
Abstract

Cited by 70 (14 self)
I study the class of problems efficiently solvable by a quantum computer, given the ability to “postselect” on the outcomes of measurements. I prove that this class coincides with a classical complexity class called PP, or Probabilistic Polynomial-Time. Using this result, I show that several simple changes to the axioms of quantum mechanics would let us solve PP-complete problems efficiently. The result also implies, as an easy corollary, a celebrated theorem of Beigel, Reingold, and Spielman that PP is closed under intersection, as well as a generalization of that theorem due to Fortnow and Reingold. This illustrates that quantum computing can yield new and simpler proofs of major results about classical computation.
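The classical face of postselection is easy to see: conditioning on a rare outcome reshapes the output distribution. A minimal rejection-sampling sketch in Python (the branch probabilities and the `run_once` procedure are invented purely for illustration):

```python
import random

def run_once():
    # Toy randomized computation (invented for illustration).
    # 'post' is the postselection flag; 'out' is the output bit.
    r = random.random()
    if r < 0.01:
        return 1, 1                      # rare branch that survives postselection
    if r < 0.011:
        return 1, 0
    return 0, random.randint(0, 1)       # branches discarded by postselection

def postselect(trials=200_000):
    """Estimate Pr[out = 1 | post = 1] by rejection sampling."""
    kept = [out for post, out in (run_once() for _ in range(trials)) if post == 1]
    return sum(kept) / len(kept)

# Conditioned on post = 1, out = 1 with probability 0.01 / 0.011 ≈ 0.909.
print(postselect())
```

Rejection sampling is exactly what becomes infeasible when the postselected branch has exponentially small probability, which is where the jump in power to PP comes from.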
Limitations of Quantum Advice and One-Way Communication
 Theory of Computing
, 2004
Abstract

Cited by 59 (16 self)
Although a quantum state requires exponentially many classical bits to describe, the laws of quantum mechanics impose severe restrictions on how that state can be accessed. This paper shows in three settings that quantum messages have only limited advantages over classical ones.
NP-complete problems and physical reality
 ACM SIGACT News Complexity Theory Column, March. ECCC
, 2005
Abstract

Cited by 57 (6 self)
Can NP-complete problems be solved efficiently in the physical universe? I survey proposals including soap bubbles, protein folding, quantum computing, quantum advice, quantum adiabatic algorithms, quantum-mechanical nonlinearities, hidden variables, relativistic time dilation, analog computing, Malament–Hogarth spacetimes, quantum gravity, closed timelike curves, and “anthropic computing.” The section on soap bubbles even includes some “experimental” results. While I do not believe that any of the proposals will let us solve NP-complete problems efficiently, I argue that by studying them, we can learn something not only about computation but also about physics.
Computing Solutions Uniquely Collapses the Polynomial Hierarchy
 SIAM Journal on Computing
, 1993
Abstract

Cited by 41 (25 self)
Is there a single-valued NP function that, when given a satisfiable formula as input, outputs a satisfying assignment? That is, can a nondeterministic function cull just one satisfying assignment from a possibly exponentially large collection of assignments? We show that if there is such a nondeterministic function, then the polynomial hierarchy collapses to its second level. As the existence of such a function is known to be equivalent to the statement "every multivalued NP function has a single-valued NP refinement," our result provides the strongest evidence yet that multivalued NP functions cannot be refined. We prove our result via theorems of independent interest. We say that a set A is NPSV-selective (NPMV-selective) if there is a 2-ary partial function in NPSV (NPMV, respectively) that decides which of its inputs (if any) is "more likely" to belong to A; this is a nondeterministic analog of the recursion-theoretic notion of the semi-recursive sets and the extant complexity-the...
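For contrast, with adaptive queries to a SAT decision oracle one *can* cull a canonical satisfying assignment, via the standard search-to-decision self-reduction; the paper's question is whether a single-valued NP function can do it with no oracle at all. A minimal sketch with a brute-force stand-in for the oracle (the clause encoding and helper names are my own):

```python
from itertools import product

def sat(clauses, n):
    """Brute-force SAT decision oracle (stand-in for an NP oracle).
    Clauses are lists of nonzero ints; literal v means x_v, -v means NOT x_v."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def find_assignment(clauses, n):
    """Search-to-decision self-reduction: fix variables one at a time,
    keeping each tentative value only if the formula stays satisfiable."""
    if not sat(clauses, n):
        return None
    fixed = []
    for v in range(1, n + 1):
        if sat(clauses + [[v]], n):      # can x_v = True be extended?
            clauses = clauses + [[v]]
            fixed.append(True)
        else:
            clauses = clauses + [[-v]]
            fixed.append(False)
    return fixed

# (x1 OR x2) AND (NOT x1 OR x3)
print(find_assignment([[1, 2], [-1, 3]], 3))  # → [True, True, True]
```

This makes adaptive oracle queries, one per variable; the point of the paper is that collapsing this to a single nondeterministic guess would collapse the polynomial hierarchy.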
The Computational Complexity of Linear Optics
 in Proceedings of STOC 2011
Abstract

Cited by 32 (8 self)
We give new evidence that quantum computers—moreover, rudimentary quantum computers built entirely out of linear-optical elements—cannot be efficiently simulated by classical computers. In particular, we define a model of computation in which identical photons are generated, sent through a linear-optical network, then nonadaptively measured to count the number of photons in each mode. This model is not known or believed to be universal for quantum computation, and indeed, we discuss the prospects for realizing the model using current technology. On the other hand, we prove that the model is able to solve sampling problems and search problems that are classically intractable under plausible assumptions. Our first result says that, if there exists a polynomial-time classical algorithm that samples from the same probability distribution as a linear-optical network, then P^#P = BPP^NP, and hence the polynomial hierarchy collapses to the third level. Unfortunately, this result assumes an extremely accurate simulation. Our main result suggests that even an approximate or noisy classical simulation would already imply a collapse of the polynomial hierarchy. For this, we need two unproven conjectures: the Permanent-of-Gaussians Conjecture, which says that it is #P-hard to approximate the permanent of a matrix A of independent N(0,1) Gaussian entries, with high probability over A; and the Permanent Anti-Concentration Conjecture, which says that |Per(A)| ≥ √(n!)/poly(n) with high probability over A. We present evidence for these conjectures, both of which seem interesting even apart from our application. For the 96-page full version, see www.scottaaronson.com/papers/optics.pdf
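The quantity at the heart of both conjectures is the permanent of a Gaussian matrix. A small sketch (sample size and matrix dimension chosen arbitrarily) that computes permanents exactly by Ryser's inclusion-exclusion formula and eyeballs the anti-concentration behaviour at a very small n:

```python
import math, random
from itertools import combinations

def permanent(A):
    """Ryser's inclusion-exclusion formula: O(2^n * n^2) for an n x n matrix."""
    n = len(A)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in A:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - k) * prod
    return total

def gaussian_matrix(n):
    return [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]

# Anti-concentration, empirically: |Per(A)| / sqrt(n!) should rarely be tiny.
# (E[Per(A)^2] = n! for Gaussian A, so the ratios have root-mean-square 1.)
n = 6
ratios = [abs(permanent(gaussian_matrix(n))) / math.sqrt(math.factorial(n))
          for _ in range(100)]
print(min(ratios), max(ratios))
```

Exact computation scales as 2^n, which is why even this empirical check is limited to tiny n; the conjecture concerns the regime far beyond what any such experiment can reach.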
On Balanced vs. Unbalanced Computation Trees
Abstract

Cited by 31 (9 self)
A great number of complexity classes between P and PSPACE can be defined via leaf languages for computation trees of nondeterministic polynomial time machines. Jenner, McKenzie, and Thérien (Proceedings of the 9th Conference on Structure in Complexity Theory, 1994) raised the issue of whether considering balanced or unbalanced trees makes any difference. For a number of leaf language classes, coincidence of both models was shown, but for the very prominent example of leaf language classes from the alternating logarithmic time hierarchy the question was left open. It was only proved that in the balanced case these classes exactly characterize the classes from the polynomial time hierarchy. Here, we show that balanced trees apparently make a difference: In the unbalanced case, a class from the logarithmic time hierarchy characterizes the corresponding class from the polynomial time hierarchy with a PP-oracle. Along the way, we get an interesting normal form for PP computations.
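A leaf language classifies an input by the string of accept/reject bits at the leaves of the machine's computation tree, read left to right. A toy balanced-tree sketch in Python (the machine and leaf predicate are invented for illustration):

```python
def leaf_string(x, depth, leaf):
    """Leaf string of a balanced binary computation tree: one bit per
    nondeterministic path, paths enumerated in lexicographic order."""
    return ''.join(str(leaf(x, path)) for path in range(2 ** depth))

# Toy machine: path p "guesses" the number p and checks p * p == x.
guess_root = lambda x, p: 1 if p * p == x else 0

# With the leaf language "contains a 1", this decides an NP-style predicate:
# x is accepted iff some path's guess succeeds.
print('1' in leaf_string(49, 4, guess_root))  # → True (path 7 guessed right)
```

The unbalanced model drops the requirement that every path have the same length, so the leaf string can have arbitrary shape; that seemingly minor relaxation is exactly what the paper shows changes the characterized classes.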
BQP and the polynomial hierarchy
 in Proceedings of the 42nd ACM symposium on Theory of computing, STOC ’10
, 2010
Rectangle Size Bounds and Threshold Covers in Communication Complexity
 In Proceedings Eighteenth Annual IEEE Conference on Computational Complexity
, 2003
Abstract

Cited by 30 (5 self)
We investigate the power of the most important lower bound technique in randomized communication complexity, which is based on an evaluation of the maximal size of approximately monochromatic rectangles, minimized over all distributions on the inputs. While it is known that the 0-error version of this bound is polynomially tight for deterministic communication, nothing in this direction is known for constant-error randomized communication complexity. We first study a one-sided version of this bound and obtain that its value lies between the MA- and AM-complexities of the considered function. Hence the lower bound actually works for a (communication complexity) class between MA ∩ coMA and AM ∩ coAM. We also show that the MA-complexity of the disjointness problem is Ω(√n). Following this we consider the conjecture that the lower bound method is polynomially tight for randomized communication complexity. First we disprove a distributional version of this conjecture. Then we give a combinatorial characterization of the value of the lower bound method, in which the optimization over all distributions is absent. This characterization is done by what we call a uniform threshold cover. We also study relaxations of this notion, namely approximate majority covers and majority covers, and compare these three notions in power, exhibiting exponential separations. Each of these covers captures a lower bound method previously used for randomized communication complexity.
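The rectangle bound can be computed exactly for tiny functions. A brute-force sketch for equality on a 6-element domain under the uniform distribution (the function and domain size are arbitrary choices):

```python
N = 6                                    # domain {0, ..., 5} on each side
eq = lambda x, y: int(x == y)            # equality function

def largest_mono_rectangle(f, eps=0.0):
    """Largest rectangle S x T on which f is eps-monochromatic,
    found by brute force over all 2^N - 1 choices of S and of T."""
    subsets = [[i for i in range(N) if mask >> i & 1]
               for mask in range(1, 2 ** N)]
    best = 0
    for S in subsets:
        for T in subsets:
            vals = [f(x, y) for x in S for y in T]
            ones = sum(vals)
            if min(ones, len(vals) - ones) <= eps * len(vals):
                best = max(best, len(vals))
    return best

# For EQ the largest 0-monochromatic rectangle is a disjoint split: 3 x 3 = 9.
print(largest_mono_rectangle(eq))  # → 9
```

The resulting lower bound here is log2(N² / 9) = 2 bits; the conjecture the paper examines is how far such rectangle-based bounds can fall short of the true randomized complexity.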
Computational complexity of the landscape I
Abstract

Cited by 24 (2 self)
We study the computational complexity of the physical problem of finding vacua of string theory which agree with data, such as the cosmological constant, and show that such problems are typically NP-hard. In particular, we prove that in the Bousso–Polchinski model, the problem is NP-complete. We discuss the issues this raises and the possibility that, even if we were to find compelling evidence that some vacuum of string theory describes our universe, we might never be able to find that vacuum explicitly. In a companion paper, we apply this point of view to the question of how early cosmology might select a vacuum.
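In the Bousso–Polchinski model one searches for integer flux quanta n_i whose quadratic contributions land within a narrow window of a target cosmological constant, which is what gives the problem its subset-sum flavour. A brute-force sketch on a hypothetical toy instance (all charges, the target, and the window are made up):

```python
from itertools import product

def bp_search(charges, target, eps, nmax):
    """Exhaustive search for fluxes n_i in [-nmax, nmax] with
    |sum((n_i * q_i)^2) - target| <= eps.  The space has
    (2 * nmax + 1)^J points -- exponential in the number of fluxes J,
    which is the source of the hardness."""
    for fluxes in product(range(-nmax, nmax + 1), repeat=len(charges)):
        value = sum((n * q) ** 2 for n, q in zip(fluxes, charges))
        if abs(value - target) <= eps:
            return fluxes
    return None

# Hypothetical toy instance with four flux charges.
print(bp_search([1.0, 1.7, 2.3, 3.1], target=20.5, eps=0.5, nmax=3))
# → (-3, -2, 0, 0): 9 + 11.56 = 20.56, inside the window
```

Nothing better than such exhaustive search is expected in general precisely because the decision problem is NP-complete.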
Pseudorandomness for approximate counting and sampling
 In Proceedings of the 20th IEEE Conference on Computational Complexity
, 2005
Abstract

Cited by 23 (4 self)
We study computational procedures that use both randomness and nondeterminism. Examples are Arthur-Merlin games and approximate counting and sampling of NP-witnesses. The goal of this paper is to derandomize such procedures under the weakest possible assumptions. Our main technical contribution allows one to “boost” a given hardness assumption. One special case is a proof that EXP ⊄ NP/poly ⇒ EXP ⊄ P^NP_||/poly. In words, if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits, then there is one which cannot be computed by poly-size circuits that make nonadaptive NP oracle queries. This in particular shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., show AM = NP) are in fact all equivalent. In addition to simplifying the framework of AM derandomization, we show that this “unified assumption” suffices to derandomize several other probabilistic procedures. For these results we define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the “boosting” theorem and hashing techniques to construct these primitives using an assumption that is no stronger than that used to derandomize AM. As a consequence, under this assumption, there are deterministic polynomial-time algorithms that use nonadaptive NP-queries and perform the following tasks:
• approximate counting of NP-witnesses: given a Boolean circuit A, output r such that (1 − ε)|A⁻¹(1)| ≤ r ≤ |A⁻¹(1)|.
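The flavour of the counting primitive can be sketched with the standard hashing idea: add random parity constraints and ask an NP oracle (brute-forced below) whether a witness survives; the number of constraints a witness typically survives estimates the logarithm of the witness count. Everything here (the predicate, parameters, thresholds) is an illustrative toy, not the paper's construction:

```python
import random
from itertools import product

def approx_count(f, n, reps=30):
    """Hashing-based estimate of |f^{-1}(1)| for f: {0,1}^n -> {0,1}."""
    def survives(k):
        # Random affine parity hash h: {0,1}^n -> {0,1}^k.
        rows = [[random.randint(0, 1) for _ in range(n + 1)] for _ in range(k)]
        for x in product([0, 1], repeat=n):
            if f(x) and all(
                    (sum(r[i] * x[i] for i in range(n)) + r[n]) % 2 == 0
                    for r in rows):
                return True   # the NP query: "does some witness hash to 0^k?"
        return False
    est = 0
    for k in range(1, n + 1):
        if sum(survives(k) for _ in range(reps)) >= reps // 2:
            est = k           # witnesses usually survive k parity constraints
    return 2 ** est

# Toy predicate with exactly 2^4 = 16 witnesses among 2^8 inputs.
f = lambda x: int(x[0] == x[1] == x[2] == x[3] == 0)
print(approx_count(f, 8))     # rough: within a small constant factor of 16
```

The randomness lives entirely in the choice of hash; the paper's contribution is showing that, under its hardness assumption, such random choices can be replaced by deterministic ones using only nonadaptive NP queries.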