Results 1–10 of 22
Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer
 SIAM J. on Computing
, 1997
Abstract

Cited by 1278 (4 self)
A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time by at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper considers factoring integers and finding discrete logarithms, two problems which are generally thought to be hard on a classical computer and which have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.
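The reduction this paper relies on, from factoring to order finding, is classical; only the order-finding step needs the quantum computer. A minimal sketch in Python, with brute-force order finding standing in for the quantum subroutine (function names are illustrative, not the paper's):

```python
from math import gcd

def order(a, n):
    # Brute-force multiplicative order: smallest r with a^r = 1 (mod n).
    # This is the step Shor's algorithm replaces with a polynomial-time
    # quantum subroutine; classically it takes exponential time in general.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def split_via_order(n, a):
    # Classical reduction from factoring n to finding the order of a mod n.
    # Returns a nontrivial factor of n, or None if this choice of a fails.
    g = gcd(a, n)
    if g != 1:
        return g                  # lucky: a already shares a factor with n
    r = order(a, n)
    if r % 2 != 0:
        return None               # odd order: retry with another random a
    y = pow(a, r // 2, n)         # candidate nontrivial square root of 1 mod n
    if y == n - 1:
        return None               # trivial square root: retry
    f = gcd(y - 1, n)
    return f if 1 < f < n else None

# Example: n = 15, a = 7. The order of 7 mod 15 is 4, 7^2 = 4 (mod 15),
# and gcd(4 - 1, 15) = 3 is a nontrivial factor of 15.
```

For a random base a, a failing choice (odd order, or a trivial square root) happens with probability at most 1/2, so a few retries suffice with high probability.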
On relating time and space to size and depth
 SIAM J. on Computing
, 1977
Abstract

Cited by 115 (1 self)
Turing machine space complexity is related to circuit depth complexity. The relationship complements the known connection between Turing machine time and circuit size, thus enabling us to expose the related nature of some important open problems concerning Turing machine and circuit complexity. We are also able to show some connection between Turing machine complexity and arithmetic complexity.
On the complexity of numerical analysis
 In Proc. 21st Ann. IEEE Conf. on Computational Complexity (CCC ’06)
, 2006
Abstract

Cited by 73 (5 self)
We study two quite different approaches to understanding the complexity of fundamental problems in numerical analysis:
• The Blum-Shub-Smale model of computation over the reals.
• A problem we call the “Generic Task of Numerical Computation,” which captures an aspect of doing numerical computation in floating point, similar to the “long exponent model” that has been studied in the numerical computing community.
We show that both of these approaches hinge on the question of understanding the complexity of the following problem, which we call PosSLP: given a division-free straight-line program producing an integer N, decide whether N > 0.
• In the Blum-Shub-Smale model, polynomial-time computation over the reals (on discrete inputs) is polynomial-time equivalent to PosSLP when there are only algebraic constants. We conjecture that using transcendental constants provides no additional power beyond non-uniform reductions to PosSLP, and we present some preliminary results supporting this conjecture.
• The Generic Task of Numerical Computation is also polynomial-time equivalent to PosSLP.
We prove that PosSLP lies in the counting hierarchy. Combining this with work of Tiwari, we obtain that the Euclidean Traveling Salesman Problem lies in the counting hierarchy; the previous best upper bound for this important problem (in terms of classical complexity classes) was PSPACE. In the course of developing the context for our results on arithmetic circuits, we present some new observations on the complexity of ACIT, the Arithmetic Circuit Identity Testing problem. In particular, we show that if n! is not ultimately easy, then ACIT has subexponential complexity.
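As a concrete illustration of the PosSLP problem defined in the abstract, here is a minimal straight-line-program evaluator in Python; the instruction encoding is an assumption of this sketch, not the paper's:

```python
def eval_slp(program):
    # Evaluate a division-free straight-line program over the integers.
    # Instructions are (op, i, j) with op in '+', '-', '*'; i and j index
    # previously computed values, and the value list starts with the
    # constants 0 and 1 (so instruction k defines value k + 2).
    vals = [0, 1]
    for op, i, j in program:
        a, b = vals[i], vals[j]
        vals.append(a + b if op == '+' else a - b if op == '-' else a * b)
    return vals[-1]

def pos_slp(program):
    # PosSLP: decide whether the integer N produced by the program
    # is strictly positive.
    return eval_slp(program) > 0

# Repeated squaring shows why the problem is subtle: k instructions can
# produce N = 2^(2^(k-1)), whose binary representation has exponentially
# many bits, so exact evaluation (as above) is not a polynomial-time
# algorithm in the program length.
prog = [('+', 1, 1)] + [('*', k, k) for k in range(2, 7)]
```

Here `prog` has 6 instructions yet produces 2^32; a few more squarings make the output too large to write down, which is why deciding the sign without computing N is the interesting question.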
Computational Complexity of an Optical Model of Computation
, 2005
Abstract

Cited by 6 (6 self)
We investigate the computational complexity of an optically inspired model of computation. The model is called the continuous space machine and operates in discrete timesteps over a number of two-dimensional complex-valued images of constant size and arbitrary spatial resolution. We define a number of optically inspired complexity measures and data representations for the model. We show the growth of each complexity measure under each of the model's operations. We characterise the power of an important discrete restriction of the model. Parallel time on this variant of the model is shown to correspond, within a polynomial, to sequential space on Turing machines, thus verifying the parallel computation thesis. We also give a characterisation of the class NC. As a result the model has computational power equivalent to that of many well-known parallel models. These characterisations give a method to translate parallel algorithms to optical algorithms and facilitate the application of the complexity theory toolbox to optical computers. Finally we show that another variation on the model is very powerful; ...
Machine Models and Linear Time Complexity
 SIGACT News
, 1993
Abstract

Cited by 5 (3 self)
Machine models. Suppose that for every machine M1 in model M1 running in time t = t(n) there is a machine M2 in M2 which computes the same partial function in time g = g(t, n). If g = O(t) + O(n) we say that model M2 simulates M1 linearly. If g = O(t) the simulation has constant-factor overhead; if g = O(t log t) it has a factor-of-O(log t) overhead, and so on. The simulation is on-line if each step of M1 is ...
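The overhead classes defined in the snippet can be illustrated numerically; in this sketch the simulation-time functions g are hypothetical examples, not taken from the paper:

```python
import math

def overhead_ratios(g, t_values):
    # Inspect g(t)/t for growing t: the ratio is bounded for a
    # constant-factor overhead and grows like log t for a
    # factor-of-O(log t) overhead.
    return [g(t) / t for t in t_values]

ts = [2 ** k for k in (10, 15, 20)]
# Hypothetical simulation with constant-factor overhead: g(t) = 3t.
const_ratios = overhead_ratios(lambda t: 3 * t, ts)
# Hypothetical simulation with log-factor overhead: g(t) = t * log2(t).
log_ratios = overhead_ratios(lambda t: t * math.log2(t), ts)
```

The first ratio list is constant while the second grows with t, which is exactly the distinction the definitions draw.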
Optical computing
, 2008
Abstract

Cited by 4 (1 self)
We consider optical computers that encode data using images and compute by transforming such images. We give an overview of a number of such optical computing architectures, including descriptions of the type of hardware commonly used in optical computing, as well as some of the computational efficiencies of optical devices. We go on to discuss optical computing from the point of view of computational complexity theory, with the aim of putting some old, and some very recent, results in context. Finally, we focus on a particular optical model of computation called the continuous space machine. We describe some results for this model including characterisations in terms of well-known complexity classes.
Array Processing Machines
 Budach (Ed.), Fundamentals of Computation Theory 1985, Cottbus, GDR, Springer-Verlag, LNCS 199
, 1984
Abstract

Cited by 2 (0 self)
We present a new model of parallel computation called the "array processing machine" or APM (for short). The APM was designed to closely model the architecture of existing vector and array processors, and to provide a suitable unifying framework for the complexity theory of parallel combinatorial and numerical algorithms. After an introduction to the model and its basic programming techniques, we show that the APM can efficiently simulate a variety of extant models of parallel computation and vector processing. In particular it is shown that APMs satisfy Goldschlager's "parallel computation thesis".
Computing with and without arbitrary large numbers
 Theory and Applications of Models of Computation, 10th International Conference, TAMC 2013, Hong Kong
Measuring 4-local n-qubit observables could probabilistically solve PSPACE
, 2003
Abstract

Cited by 2 (2 self)
We consider a hypothetical apparatus that implements measurements for arbitrary 4-local quantum observables A on n qubits. The apparatus implements the “measurement algorithm” after receiving a classical description of A. We show that a few precise measurements, applied to a basis state, would provide a probabilistic solution of PSPACE problems. The error probability decreases exponentially with the number of runs if the measurement accuracy is of the order of the spectral gaps of A. Moreover, every decision problem which can be solved on a quantum computer in T time steps can be encoded into a 4-local observable such that the solution requires only measurements of accuracy O(1/T). Provided that BQP ≠ PSPACE, our result shows that efficient algorithms for precise measurements of general 4-local observables cannot exist. We conjecture that the class of physically existing interactions is large enough to allow the conclusion that precise energy measurements for general many-particle systems require control algorithms with high complexity.