Results 1–10 of 1,118
Quantum complexity theory
In Proc. 25th Annual ACM Symposium on Theory of Computing, ACM, 1993
Cited by 582 (5 self)
Abstract. In this paper we study quantum computation from a complexity-theoretic viewpoint. Our first result is the existence of an efficient universal quantum Turing machine in Deutsch’s model of a quantum Turing machine (QTM) [Proc. Roy. Soc. London Ser. A, 400 (1985), pp. 97–117]. This construction is substantially more complicated than the corresponding construction for classical Turing machines (TMs); in fact, even simple primitives such as looping, branching, and composition are not straightforward in the context of quantum Turing machines. We establish how these familiar primitives can be implemented and introduce some new, purely quantum mechanical primitives, such as changing the computational basis and carrying out an arbitrary unitary transformation of polynomially bounded dimension. We also consider the precision to which the transition amplitudes of a quantum Turing machine need to be specified. We prove that O(log T) bits of precision suffice to support a T-step computation. This justifies the claim that the quantum Turing machine model should be regarded as a discrete model of computation and not an analog one. We give the first formal evidence that quantum Turing machines violate the modern (complexity-theoretic) formulation of the Church–Turing thesis. We show the existence of a problem, relative to an oracle, that can be solved in polynomial time on a quantum Turing machine but requires superpolynomial time on a bounded-error probabilistic Turing machine, and thus is not in the class BPP. The class BQP of languages that are efficiently decidable (with small error probability) on a quantum Turing machine satisfies BPP ⊆ BQP ⊆ P^#P. Therefore, there is no possibility of giving a mathematical proof that quantum Turing machines are more powerful than classical probabilistic Turing machines (in the unrelativized setting) unless there is a major breakthrough in complexity theory.
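The O(log T) precision claim can be illustrated numerically. The sketch below is only a toy demonstration, not the paper's construction: rounding the amplitudes of a repeated single-qubit rotation to about log2(T) + c fractional bits perturbs each step by roughly 2^-bits, so the total deviation after T steps grows only linearly in T · 2^-bits and stays small.

```python
import math

def rotate(state, theta, bits=None):
    """One step of a real single-qubit rotation; optionally round each
    amplitude to `bits` fractional bits, mimicking finite-precision
    transition amplitudes."""
    a, b = state
    c, s = math.cos(theta), math.sin(theta)
    a2, b2 = c * a - s * b, s * a + c * b
    if bits is not None:
        q = 2.0 ** bits
        a2, b2 = round(a2 * q) / q, round(b2 * q) / q
    return (a2, b2)

T = 1000
theta = 0.1
bits = math.ceil(math.log2(T)) + 8   # O(log T) bits of precision
exact = (1.0, 0.0)
approx = (1.0, 0.0)
for _ in range(T):
    exact = rotate(exact, theta)
    approx = rotate(approx, theta, bits)

err = math.hypot(exact[0] - approx[0], exact[1] - approx[1])
print(err)  # per-step rounding error ~2^-bits accumulates only linearly in T
```

With bits around log2(T) + 8, the final deviation is far below any fixed measurement-error budget, which is the discrete-versus-analog point the abstract makes.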
Strengths and Weaknesses of quantum computing
SIAM Journal on Computing, 1997
Cited by 386 (10 self)
Recently a great deal of attention has been focused on quantum computation following a …
Complexity Measures and Decision Tree Complexity: A Survey
Theoretical Computer Science, 2000
Cited by 205 (17 self)
We discuss several complexity measures for Boolean functions: certificate complexity, sensitivity, block sensitivity, and the degree of a representing or approximating polynomial. We survey the relations and biggest gaps known between these measures, and show how they give bounds for the decision tree complexity of Boolean functions on deterministic, randomized, and quantum computers.

1 Introduction

Computational complexity is the subfield of theoretical computer science that aims to understand "how much" computation is necessary and sufficient to perform certain computational tasks. For example, given a computational problem it tries to establish tight upper and lower bounds on the length of the computation (or on other resources, like space). Unfortunately, for many practically relevant computational problems no tight bounds are known. An illustrative example is the well-known P versus NP problem: for all NP-complete problems the current upper and lower bounds lie exponentially …
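For small n, the measures surveyed here can be computed by brute force. The sketch below is illustrative only and feasible only for tiny n; it evaluates sensitivity, block sensitivity, and certificate complexity for OR on 3 bits, where all three equal n.

```python
from itertools import combinations, product

def flip(x, block):
    """Return x with the bits indexed by `block` flipped."""
    return tuple(1 - b if i in block else b for i, b in enumerate(x))

def sensitivity(f, n):
    """s(f): max over x of the number of single bits whose flip changes f(x)."""
    return max(sum(f(flip(x, {i})) != f(x) for i in range(n))
               for x in product((0, 1), repeat=n))

def block_sensitivity(f, n):
    """bs(f): max over x of the number of disjoint blocks whose flip changes f(x)."""
    best = 0
    for x in product((0, 1), repeat=n):
        blocks = [set(b) for r in range(1, n + 1)
                  for b in combinations(range(n), r)
                  if f(flip(x, set(b))) != f(x)]
        for r in range(len(blocks), 0, -1):   # largest disjoint subfamily
            if any(sum(len(b) for b in fam) == len(set().union(*fam))
                   for fam in combinations(blocks, r)):
                best = max(best, r)
                break
    return best

def certificate_complexity(f, n):
    """C(f): max over x of the size of the smallest set of coordinates
    whose values on x already force the value f(x)."""
    best = 0
    for x in product((0, 1), repeat=n):
        for r in range(n + 1):
            if any(all(f(y) == f(x)
                       for y in product((0, 1), repeat=n)
                       if all(y[i] == x[i] for i in S))
                   for S in combinations(range(n), r)):
                best = max(best, r)
                break
    return best

OR = lambda x: int(any(x))
print(sensitivity(OR, 3), block_sensitivity(OR, 3), certificate_complexity(OR, 3))
# prints 3 3 3: the chain s(f) <= bs(f) <= C(f) holds, with equality for OR
```

OR is the extreme case where the all-zeros input is sensitive to every bit; functions with gaps between these measures (e.g. between bs and s) require more contrived constructions, which the survey discusses.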
Quantum lower bounds by quantum arguments
In Proceedings of the ACM Symposium on Theory of Computing, 2000
Cited by 194 (18 self)
We propose a new method for proving lower bounds on quantum query algorithms. Instead of a classical adversary that runs the algorithm with one input and then modifies the input, we use a quantum adversary that runs the algorithm with a superposition of inputs. If the algorithm works correctly, its state becomes entangled with the superposition over inputs. We bound the number of queries needed to achieve sufficient entanglement, and this implies a lower bound on the number of queries for the computation. Using this method, we prove two new Ω(√N) lower bounds, on computing the AND of ORs and on inverting a permutation, and also provide more uniform proofs for several known lower bounds which had previously been proven via a variety of different techniques.
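A commonly quoted quantitative form of this quantum adversary method is the following bound (stated here from memory as a sketch; the exact conditions should be checked against the paper):

```latex
Q_2(f) \;=\; \Omega\!\left(\sqrt{\frac{m\,m'}{\ell\,\ell'}}\right),
```

where R ⊆ X × Y is a relation on pairs of inputs with f(x) ≠ f(y), every x ∈ X is related to at least m inputs in Y and every y ∈ Y to at least m' inputs in X, while for each index i, every x is related to at most ℓ inputs y with x_i ≠ y_i and every y to at most ℓ' such x. For Grover search (X the all-zeros input, Y the N inputs of weight one), m = N and m' = ℓ = ℓ' = 1, recovering the Ω(√N) lower bound.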
Quantum walk algorithms for element distinctness
In: 45th Annual IEEE Symposium on Foundations of Computer Science, Oct 17–19, 2004. IEEE Computer Society Press, Los Alamitos, CA, 2004
Cited by 174 (14 self)
We use quantum walks to construct a new quantum algorithm for element distinctness and its generalization. For element distinctness (the problem of finding two equal items among N given items), we get an O(N^{2/3}) query quantum algorithm. This improves the previous O(N^{3/4}) quantum algorithm of Buhrman et al. [11] and matches the lower bound by [1]. We also give an O(N^{k/(k+1)}) query quantum algorithm for the generalization of element distinctness in which we have to find k equal items among N items.
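For contrast, here is a classical hash-based baseline (a sketch for orientation, not the paper's algorithm): it runs in O(N) time but must query essentially all N items in the worst case, whereas the quantum-walk algorithm needs only O(N^{2/3}) queries to the input.

```python
def element_distinctness(items):
    """Classical baseline: return the indices (i, j), i < j, of the first
    repeated item, or None if all items are distinct.
    Time O(N) with a hash table, but Theta(N) queries to the input."""
    seen = {}  # value -> index of its first occurrence
    for j, v in enumerate(items):
        if v in seen:
            return (seen[v], j)
        seen[v] = j
    return None

print(element_distinctness([5, 3, 9, 3, 1]))  # (1, 3)
print(element_distinctness([1, 2, 3]))        # None
```

The query count, not the running time, is the resource measured in this line of work: any classical algorithm needs Ω(N) queries, so the O(N^{2/3}) quantum bound is a genuine separation.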
Quantum amplitude amplification and estimation
2002
Cited by 172 (14 self)
Abstract. Consider a Boolean function χ: X → {0, 1} that partitions the set X between its good and bad elements, where x is good if χ(x) = 1 and bad otherwise. Consider also a quantum algorithm A such that A|0⟩ = Σ_{x∈X} α_x|x⟩ is a quantum superposition of the elements of X, and let a denote the probability that a good element is produced if A|0⟩ is measured. If we repeat the process of running A, measuring the output, and using χ to check the validity of the result, we expect to repeat 1/a times on average before a solution is found. Amplitude amplification is a process that allows one to find a good x after an expected number of applications of A and its inverse that is proportional to 1/√a, assuming algorithm A makes no measurements. This is a generalization of Grover’s searching algorithm, in which A was restricted to producing an equal superposition of all members of X and we had a promise that a single x existed such that χ(x) = 1. Our algorithm works whether or not the value of a is known ahead of time. In case the value of a is known, we can find a good x after a number of applications of A and its inverse that is proportional to 1/√a even in the worst case. We show that this quadratic speedup can also be obtained for a large family of search problems for which good classical heuristics exist. Finally, as our main result, we combine ideas from Grover’s and Shor’s quantum algorithms to perform amplitude estimation, a process that allows one to estimate the value of a. We apply amplitude estimation to the problem of approximate counting, in which we wish to estimate the number of x ∈ X such that χ(x) = 1. We obtain optimal quantum algorithms in a variety of settings.
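In the special case where A prepares the uniform superposition, amplitude amplification reduces to Grover's iteration, which is easy to simulate classically on a small state vector. This is an illustrative sketch of that special case only; the paper's algorithm handles arbitrary A.

```python
import math

def grover_iterate(amps, good):
    """One amplitude-amplification step for uniform A: reflect about the
    good states (flip their sign), then invert every amplitude about the mean."""
    amps = [-a if i in good else a for i, a in enumerate(amps)]
    mean = sum(amps) / len(amps)
    return [2 * mean - a for a in amps]

N, good = 64, {7}                 # one good element among N, so a = 1/N
a = len(good) / N
amps = [1 / math.sqrt(N)] * N     # A|0> = uniform superposition over X
for _ in range(round(math.pi / 4 * math.sqrt(1 / a))):
    amps = grover_iterate(amps, good)

p_good = sum(amps[i] ** 2 for i in good)
print(p_good)  # close to 1 after ~(pi/4)*sqrt(1/a) iterations
```

Each iteration is one application of A, its inverse, and the χ-oracle, so the loop count makes the 1/√a scaling in the abstract concrete; a bare classical repeat-and-check loop would need about 1/a = N trials instead.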
Quantum vs. classical communication and computation
Proc. 30th Ann. ACM Symp. on Theory of Computing (STOC ’98), 1998
Cited by 159 (15 self)
We present a simple and general simulation technique that transforms any black-box quantum algorithm (à la Grover’s database search algorithm) into a quantum communication protocol for a related problem, in a way that fully exploits the quantum parallelism. This allows us to obtain new positive and negative results. The positive results are novel quantum communication protocols that are built from nontrivial quantum algorithms via this simulation. These protocols, combined with (old and new) classical lower bounds, are shown to provide the first asymptotic separation results between the quantum and classical (probabilistic) two-party communication complexity models. In particular, we obtain a quadratic separation for the bounded-error model, and an exponential separation for the zero-error model. The negative results transform known quantum communication lower bounds into computational lower bounds in the black-box model. In particular, we show that the quadratic speedup achieved by Grover for the OR function is impossible for the PARITY function or the MAJORITY function in the bounded-error model, nor is it possible for the OR function itself in the exact case. This dichotomy naturally suggests a study of bounded-depth predicates (i.e. those in the polynomial hierarchy) between OR and MAJORITY. We present black-box algorithms that achieve near-quadratic speedup for all such predicates.