Results 11–20 of 313
Greatest Common Divisors of Polynomials Given by Straight-Line Programs
 J. ACM
, 1988
Abstract

Cited by 51 (17 self)
Algorithms on multivariate polynomials represented by straight-line programs are developed. First it is shown that most algebraic algorithms can be probabilistically applied to data that is given by a straight-line computation. Testing such rational numeric data for zero, for instance, is facilitated by random evaluations modulo random prime numbers. Then auxiliary algorithms are constructed that determine the coefficients of a multivariate polynomial in a single variable. The first main result is an algorithm that produces the greatest common divisor of the input polynomials, all in straight-line representation. The second result shows how to find a straight-line program for the reduced numerator and denominator from one for the corresponding rational function. Both the algorithm for that construction and the greatest common divisor algorithm are in random polynomial time for the usual coefficient fields and output a straight-line program, which with controllably high probab...
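The random-evaluation zero test mentioned in the abstract can be illustrated concretely. A minimal sketch, in which the tuple encoding of straight-line programs and all helper names are our own, not the paper's:

```python
import random

def eval_slp(slp, inputs, p):
    """Evaluate a straight-line program modulo p.  A program is a list of
    (op, i, j) instructions whose operands index earlier values; the
    inputs occupy the first positions.  (Encoding ours, for illustration.)"""
    v = list(inputs)
    for op, i, j in slp:
        if op == "+":
            v.append((v[i] + v[j]) % p)
        elif op == "-":
            v.append((v[i] - v[j]) % p)
        else:  # "*"
            v.append((v[i] * v[j]) % p)
    return v[-1]

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def probably_zero(slp, num_vars, trials=20):
    """Zero test by random evaluation: evaluate at a random point modulo a
    random prime.  A nonzero polynomial survives all trials only with
    vanishing probability (Schwartz-Zippel); 20 trials is an arbitrary
    choice for illustration."""
    primes = [n for n in range(10_000, 10_200) if is_prime(n)]
    for _ in range(trials):
        p = random.choice(primes)
        point = [random.randrange(p) for _ in range(num_vars)]
        if eval_slp(slp, point, p) != 0:
            return False
    return True

# (x + y)(x - y) - (x^2 - y^2): identically zero, though no single
# instruction reveals this.
slp = [("+", 0, 1), ("-", 0, 1), ("*", 2, 3),
       ("*", 0, 0), ("*", 1, 1), ("-", 5, 6),
       ("-", 4, 7)]
```

The point of the technique is that the program is never expanded into coefficients; only evaluations are needed, so the test runs in time proportional to the program length.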
Fast parallel circuits for the quantum Fourier transform
 PROCEEDINGS 41ST ANNUAL SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE (FOCS’00)
, 2000
Abstract

Cited by 50 (2 self)
We give new bounds on the circuit complexity of the quantum Fourier transform (QFT). We give an upper bound of O(log n + log log(1/ε)) on the circuit depth for computing an approximation of the QFT with respect to the modulus 2^n with error bounded by ε. Thus, even for exponentially small error, our circuits have depth O(log n). The best previous depth bound was O(n), even for approximations with constant error. Moreover, our circuits have size O(n log(n/ε)). We also give an upper bound of O(n (log n)^2 log log n) on the circuit size of the exact QFT modulo 2^n, for which the best previous bound was O(n^2). As an application of the above depth bound, we show that Shor's factoring algorithm may be based on quantum circuits with depth only O(log n) and polynomial size, in combination with classical polynomial-time pre- and post-processing. In the language of computational complexity, this implies that factoring is in the complexity class ZPP^BQNC, where BQNC is the class of problems computable with bounded-error probability by quantum circuits with polylogarithmic depth and polynomial size. Finally, we prove an Ω(log n) lower bound on the depth complexity of approximations of the ...
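The size/accuracy trade-off behind such approximations can be seen in Coppersmith-style truncation of the QFT, where controlled rotations of angle below 2π/2^m are dropped. A small classical simulation of the resulting unitary; this sketches the standard approximate QFT, not the paper's log-depth circuits, and all function names are ours:

```python
import cmath

def aqft_matrix(n, m):
    """Coppersmith-style approximate QFT on n qubits: the phase j*k/2^n is
    replaced by its partial sum over bit pairs (a, b) of j and k with
    n - m <= a + b <= n - 1, i.e. controlled rotations smaller than
    2*pi/2^m are dropped.  m = n reproduces the exact QFT."""
    N = 1 << n
    F = [[0j] * N for _ in range(N)]
    for j in range(N):
        for k in range(N):
            phase = 0.0
            for a in range(n):
                if (j >> a) & 1:
                    for b in range(n):
                        if (k >> b) & 1 and n - m <= a + b <= n - 1:
                            phase += 2.0 ** (a + b - n)
            F[j][k] = cmath.exp(2j * cmath.pi * phase) / N ** 0.5
    return F

def column_error(n, m):
    """Worst error over computational basis inputs of the approximate QFT
    against the exact QFT (Euclidean norm of the output-state difference)."""
    exact, approx = aqft_matrix(n, n), aqft_matrix(n, m)
    return max(
        sum(abs(exact[j][k] - approx[j][k]) ** 2 for j in range(1 << n)) ** 0.5
        for k in range(1 << n)
    )
```

Keeping only rotations of angle at least 2π/2^m leaves roughly n·m gates instead of n², while the dropped phase per amplitude is O(n/2^m), which is why logarithmically many rotation levels already suffice for small error.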
On The Complexity Of Computing Determinants
 COMPUTATIONAL COMPLEXITY
, 2001
Abstract

Cited by 47 (17 self)
We present new baby steps/giant steps algorithms of asymptotically fast running time for dense matrix problems. Our algorithms compute the determinant, characteristic polynomial, Frobenius normal form and Smith normal form of a dense n × n matrix A with integer entries in (n…) and (n…) bit operations; here … denotes the largest entry in absolute value, and the exponent adjustment by "+o(1)" captures additional factors for positive real constants C_1, C_2, C_3. The bit complexity (n…) results from using the classical cubic matrix multiplication algorithm. Our algorithms are randomized, and we can certify that the output is the determinant of A in a Las Vegas fashion. The second category of problems deals with the setting where the matrix A has elements from an abstract commutative ring, that is, when no divisions in the domain of entries are possible. We present algorithms that deterministically compute the determinant, characteristic polynomial and adjoint of A with n… and O(n…) ring additions, subtractions and multiplications.
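The division-free setting of the second part of the abstract can be illustrated with the simplest such algorithm. This sketch uses cofactor expansion, which needs only ring operations but takes exponential time; it shows the setting, not the paper's fast algorithms, and the function name is ours:

```python
def det_division_free(M):
    """Determinant by cofactor expansion along the first row.  Only ring
    additions, subtractions and multiplications are used, so the code
    works verbatim over any commutative ring (here, the integers).
    Exponential time: an illustration of division-free evaluation only."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for col in range(n):
        # minor: delete row 0 and column `col`
        minor = [row[:col] + row[col + 1:] for row in M[1:]]
        term = M[0][col] * det_division_free(minor)
        total += term if col % 2 == 0 else -term
    return total
```

Gaussian elimination would be far faster but performs divisions, which is exactly what is unavailable over an abstract commutative ring; the paper's contribution is division-free algorithms that are also asymptotically fast.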
Detecting Perfect Powers In Essentially Linear Time
 Math. Comp
, 1998
Abstract

Cited by 42 (12 self)
This paper (1) gives complete details of an algorithm to compute approximate kth roots; (2) uses this in an algorithm that, given an integer n > 1, either writes n as a perfect power or proves that n is not a perfect power; (3) proves, using Loxton's theorem on multiple linear forms in logarithms, that this perfect-power decomposition algorithm runs in time (log n)^{1+o(1)}.
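A naive version of perfect-power decomposition is easy to state: try every exponent k up to log₂ n with an exact integer kth root. The sketch below (helper names ours) conveys the problem, not the paper's essentially-linear-time solution:

```python
def kth_root(n, k):
    """Floor of n^(1/k) by binary search, using integer arithmetic only."""
    lo, hi = 1, n
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** k <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def perfect_power(n):
    """Return (x, k) with n = x^k and k >= 2 if one exists, else None.
    Trial version: every exponent up to log2(n) is checked with an exact
    root; the paper's algorithm instead uses cheap approximate roots and
    sieving to reach essentially linear time in log n."""
    k = 2
    while (1 << k) <= n:      # a nontrivial base is at least 2, so k <= log2(n)
        x = kth_root(n, k)
        if x ** k == n:
            return x, k
        k += 1
    return None
```

The expensive part of the naive method is the exact root extraction at full precision for every k; replacing it with low-precision approximate roots, then verifying only promising candidates, is the heart of the speedup the abstract describes.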
Serre's modularity conjecture (I)
, 2007
Abstract

Cited by 37 (0 self)
This paper is the first part of a work which proves Serre's modularity conjecture. We first prove the cases p ≠ 2 and odd conductor, see Theorem 1.2, modulo Theorems 4.1 and 5.1. Theorems 4.1 and 5.1 are proven in the second part, see [13]. We then reduce the general case to a modularity statement for 2-adic lifts of modular mod 2 representations. This statement is now a theorem of Kisin [19].
Practical Zero-Knowledge Proofs: Giving Hints and Using Deficiencies
 JOURNAL OF CRYPTOLOGY
, 1994
Abstract

Cited by 32 (0 self)
New zero-knowledge proofs are given for some number-theoretic problems. All of the problems are in NP, but the proofs given here are much more efficient than the previously known proofs. In addition, these proofs do not require the prover to be super-polynomial in power. A probabilistic polynomial-time prover with the appropriate trapdoor knowledge is sufficient. The proofs are perfect or statistical zero-knowledge in all cases except one.
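For flavour, here is the classic Schnorr identification protocol, an honest-verifier zero-knowledge proof of knowledge of a discrete logarithm in which a probabilistic polynomial-time prover with the secret suffices. It is a generic illustration of this style of proof, not one of the paper's protocols; the toy parameters are ours:

```python
import random

# Toy group: g = 2 has order q = 11 modulo p = 23 (2^11 = 2048 ≡ 1 mod 23).
# These parameters are illustrative and far too small for real use.
p, q, g = 23, 11, 2
x = 7                # prover's secret (the "trapdoor knowledge")
y = pow(g, x, p)     # public value; the prover claims to know log_g(y)

def schnorr_round():
    """One round of Schnorr's identification protocol.

    The prover runs in probabilistic polynomial time given x, and the
    transcript (t, c, s) can be simulated without x, which is the
    zero-knowledge property against an honest verifier."""
    r = random.randrange(q)      # prover: fresh randomness
    t = pow(g, r, p)             # prover -> verifier: commitment
    c = random.randrange(q)      # verifier -> prover: challenge
    s = (r + c * x) % q          # prover -> verifier: response
    # verifier accepts iff g^s == t * y^c (mod p)
    return pow(g, s, p) == (t * pow(y, c, p)) % p
```

Completeness holds because g^s = g^{r+cx} = t·y^c in the group; a prover without x would have to guess the challenge in advance to pass.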
A Lower Bound for Parallel String Matching
 SIAM J. Comput
, 1993
Abstract

Cited by 25 (13 self)
This talk presents the derivation of an Ω(log log m) lower bound on the number of rounds necessary for finding occurrences of a pattern string P[1..m] in a text string T[1..2m] in parallel using m comparisons in each round. The parallel complexity of the string matching problem using p processors for general alphabets follows.

1. Introduction. Better and better parallel algorithms have been designed for string matching. All are on CRCW-PRAM with the weakest form of simultaneous write conflict resolution: all processors which write into the same memory location must write the same value. The best CREW-PRAM algorithms are those obtained from the CRCW algorithms for a logarithmic loss of efficiency. Optimal algorithms have been designed: O(log m) time in [8, 17] and O(log log m) time in [4]. (An optimal algorithm is one with pt = O(n) where t is the time and p is the number of processors used.) Recently, Vishkin [18] developed an optimal O(log m) time algorithm. Unlike...
On Parallel Hashing and Integer Sorting
, 1991
Abstract

Cited by 25 (9 self)
The problem of sorting n integers from a restricted range [1..m], where m is super-polynomial in n, is considered. An o(n log n) randomized algorithm is given. Our algorithm takes O(n log log m) expected time and O(n) space. (Thus, for m = n^{polylog(n)} we have an O(n log log n) algorithm.) The algorithm is parallelizable. The resulting parallel algorithm achieves optimal speedup. Some features of the algorithm make us believe that it is relevant for practical applications. A result of independent interest is a parallel hashing technique. The expected construction time is logarithmic using an optimal number of processors, and searching for a value takes O(1) time in the worst case. This technique enables a drastic reduction of space requirements for the price of using randomness. Applicability of the technique is demonstrated for the parallel sorting algorithm, and for some parallel string matching algorithms. The parallel sorting algorithm is designed for a strong and non-standard mo...
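How a restricted range circumvents the n log n comparison bound can already be seen in deterministic radix sort in base n, which runs in O(n log m / log n) time. This is only a baseline sketch, not the paper's O(n log log m) expected-time algorithm; the function name is ours:

```python
def radix_sort(a, m):
    """Stable least-significant-digit radix sort of integers from [0, m)
    in base n = len(a): about O(n * log(m)/log(n)) time, beating
    comparison sorting once the range m is restricted.  (A baseline
    only; the randomized algorithm in the paper is much faster for
    super-polynomial m, by very different means.)"""
    n = max(2, len(a))
    digits = 1
    while n ** digits < m:       # enough base-n digits to cover [0, m)
        digits += 1
    for d in range(digits):
        buckets = [[] for _ in range(n)]
        for v in a:              # stable distribution by the d-th digit
            buckets[(v // n ** d) % n].append(v)
        a = [v for b in buckets for v in b]
    return a
```

For m polynomial in n this is already linear time; the interesting regime in the abstract is super-polynomial m, where the log m / log n factor blows up and hashing-based techniques take over.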
Criteria For Irrationality Of Euler's Constant
 Proc. Amer. Math. Soc
Abstract

Cited by 22 (7 self)
By modifying Beukers' proof of Apéry's theorem that ζ(3) is irrational, we derive criteria for irrationality of Euler's constant, γ. For n > 0, we define a double integral I_n and a positive integer S_n, and prove that with d_n = LCM(1, ..., n) the following are equivalent. 1. The fractional part of log S_n is given by {log S_n} = d_{2n} I_n for some n. 2. The formula holds for all sufficiently large n. 3. Euler's constant is a rational number. A corollary is that if {log S_n} ≥ 2^{−n} infinitely often, then γ is irrational. Indeed, if the inequality holds for a given n (we present numerical evidence for 1 ≤ n ≤ 2500) and γ is rational, then its denominator does not divide d_{2n}. We prove a new combinatorial identity in order to show that a certain linear form in logarithms is in fact log S_n. A by-product is a rapidly converging asymptotic formula for γ, used by P. Sebah to compute γ correct to 18063 decimals.
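The quantity d_n = LCM(1, ..., n) appearing in these criteria is easy to compute exactly, and by the prime number theorem log(d_n) grows like n, i.e. d_n is roughly e^n. A small sketch (function name ours) of that growth:

```python
from math import gcd, log

def lcm_up_to(n):
    """d_n = LCM(1, ..., n), the quantity in the irrationality criteria
    above, computed exactly with integer arithmetic."""
    d = 1
    for k in range(2, n + 1):
        d = d * k // gcd(d, k)
    return d

# By the prime number theorem log(d_n) ~ n, so d_n grows like e^n; this
# exponential growth is what makes inequalities of the kind appearing in
# the corollary delicate.
```

For example, d_10 = 2520, while d_200 already has well over 80 decimal digits.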
On a problem of Byrnes concerning polynomials with restricted coefficients
 Math. Comp
, 1997
Abstract

Cited by 21 (0 self)
Abstract. We consider a question of Byrnes concerning the minimal degree n of a polynomial with all coefficients in {−1, 1} which has a zero of a given order m at x = 1. For m ≤ 5, we prove his conjecture that the monic polynomial of this type of minimal degree is given by ∏_{k=0}^{m−1} (x^{2^k} − 1), but we disprove this for m ≥ 6. We prove that a polynomial of this type must have n ≥ e^{√m (1+o(1))}, which is in sharp contrast with the situation when one allows coefficients in {−1, 0, 1}. The proofs use simple number-theoretic ideas and depend ultimately on the fact that −1 ≡ 1 (mod 2).
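The conjectured extremal polynomial ∏_{k=0}^{m−1}(x^{2^k} − 1) can be checked mechanically: its coefficients all lie in {−1, 1} and it vanishes to order exactly m at x = 1. A sketch in exact integer arithmetic (all names ours):

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def byrnes_poly(m):
    """prod_{k=0}^{m-1} (x^{2^k} - 1), the conjectured extremal polynomial."""
    p = [1]
    for k in range(m):
        factor = [-1] + [0] * ((1 << k) - 1) + [1]   # x^{2^k} - 1
        p = poly_mul(p, factor)
    return p

def order_of_zero_at_one(p):
    """Order of the zero of p at x = 1: repeatedly divide by (x - 1)
    via synthetic division while p(1) == 0."""
    order = 0
    while len(p) > 1 and sum(p) == 0:        # sum(p) is p(1)
        q = [0] * (len(p) - 1)
        carry = 0
        for i in range(len(p) - 1, 0, -1):   # quotient of p by (x - 1)
            carry += p[i]
            q[i - 1] = carry
        p, order = q, order + 1
    return order
```

Each factor contributes one simple zero at x = 1, so the product has a zero of order m with degree 2^m − 1; the theorem in the abstract shows that for large m one can do exponentially better than this degree, but never better than e^{√m(1+o(1))}.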