Results 1–10 of 11
Hardness-Randomness Tradeoffs for Bounded Depth Arithmetic Circuits
Cited by 13 (2 self)
Abstract
In this paper we show that lower bounds for bounded depth arithmetic circuits imply derandomization of polynomial identity testing for bounded depth arithmetic circuits. More formally, if there exists an explicit polynomial f(x1, ..., xm) that cannot be computed by a depth d arithmetic circuit of small size, then there exists an efficient deterministic algorithm to test whether a given depth d − 8 circuit is identically zero or not (assuming the individual degrees of the tested circuit are not too high). In particular, if we are guaranteed that the tested circuit computes a multilinear polynomial, then we can perform the identity test efficiently. To the best of our knowledge this is the first hardness-randomness tradeoff for bounded depth arithmetic circuits. The above results are obtained using the arithmetic Nisan-Wigderson generator of [KI04] together with a new theorem on bounded depth circuits, which is the main technical contribution of our work. This theorem deals with polynomial equations of the form P(x1, ..., xn, y) ≡ 0 and shows that if P has a circuit of depth d and size s, and the polynomial f(x1, ..., xn) satisfies P(x1, ..., xn, f(x1, ..., xn)) ≡ 0, then f has a circuit of depth d + 3 and size O(s · r + m^r), where m is the total degree of f and r is the degree of y in P.
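The randomized algorithm that such results aim to derandomize is simple to state: evaluate the circuit at random points and check for zero (the Schwartz-Zippel lemma bounds the error). A minimal black-box sketch over a large prime field; all names and parameters here are illustrative, not taken from the paper:

```python
import random

def randomized_pit(poly, n_vars, degree, trials=20, p=2**61 - 1):
    """Schwartz-Zippel randomized identity test (illustrative sketch).

    poly: a black-box function taking a list of n_vars field elements
    (mod p) and returning the polynomial's value.  If poly is nonzero
    of total degree <= degree, a single random evaluation is nonzero
    with probability at least 1 - degree/p.
    """
    for _ in range(trials):
        point = [random.randrange(p) for _ in range(n_vars)]
        if poly(point) % p != 0:
            return False  # witnessed a nonzero value: not identically zero
    return True  # identically zero with high probability

# example: (x + y)^2 - x^2 - 2xy - y^2 is identically zero, xy - 1 is not
zero = lambda v: (v[0] + v[1])**2 - v[0]**2 - 2*v[0]*v[1] - v[1]**2
nonzero = lambda v: v[0]*v[1] - 1
```

A derandomization in the sense of the abstract replaces the random points with a deterministically constructed hitting set.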
Read-once Polynomial Identity Testing
Cited by 11 (4 self)
Abstract
An arithmetic read-once formula (ROF for short) is a formula (a circuit in which the fan-out of every gate is at most 1) in which the operations are {+, ×} and such that every input variable labels at most one leaf. In this paper we study the problems of identity testing and reconstruction of read-once formulas. The following are some of the results that we obtain. 1. Given k ROFs in n variables, over a field F, we give a deterministic (non-black-box) algorithm that checks whether they sum to zero or not. The running time of the algorithm is n^(O(k²)). 2. We give an n^(O(d+k²)) time deterministic algorithm for checking whether a black box holding the sum of k depth-d ROFs in n variables computes the zero polynomial. In other words, we provide a hitting set of size n^(O(d+k²)) for the sum of k depth-d ROFs. If |F| is too small then we make queries from a polynomial-size extension field. This implies a deterministic algorithm that runs in time n^(O(d)) for the reconstruction of depth-d ROFs. 3. We give a hitting set of size exp(Õ(√n + k²)) for the sum of k ROFs (without depth restrictions). In particular this implies a subexponential-time deterministic algorithm for ...
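As a concrete illustration of the model (not of the paper's algorithms), a read-once formula can be represented as a nested tuple over {+, ×}, and its defining syntactic property, that every variable labels at most one leaf, checked recursively. The representation and names below are assumptions made for illustration:

```python
def is_read_once(formula, seen=None):
    """Check the read-once property: each variable labels at most one leaf.

    formula: a leaf variable name such as 'x1', or a nested tuple
    ('+', left, right) / ('*', left, right).
    """
    if seen is None:
        seen = set()
    if isinstance(formula, str):          # leaf: a variable
        if formula in seen:
            return False                  # variable already used elsewhere
        seen.add(formula)
        return True
    op, left, right = formula             # internal gate with fan-in 2
    return is_read_once(left, seen) and is_read_once(right, seen)

rof = ('*', ('+', 'x1', 'x2'), ('+', 'x3', 'x4'))   # a valid ROF
not_rof = ('+', ('*', 'x1', 'x2'), 'x1')            # x1 labels two leaves
```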
Tensor-Rank and Lower Bounds for Arithmetic Formulas
Cited by 4 (2 self)
Abstract
We show that any explicit example of a tensor A: [n]^r → F with tensor-rank ≥ n^(r·(1−o(1))) (where r = r(n) ≤ log n / log log n) implies an explicit superpolynomial lower bound for the size of general arithmetic formulas over F. This shows that strong enough lower bounds for the size of arithmetic formulas of depth 3 imply superpolynomial lower bounds for the size of general arithmetic formulas. One component of our proof is a new approach for homogenization and multilinearization of arithmetic formulas, which gives the following results: We show that for any n-variate homogeneous polynomial f of degree r, if there exists a (fan-in-2) formula of size s and depth d for f, then there exists a homogeneous formula of size O(C(d + r + 1, r) · s) for f. In particular, for any r ≤ log n, if there exists a polynomial-size formula for f then there exists a polynomial-size homogeneous formula for f. This refutes a conjecture of Nisan and Wigderson [NW95] and shows that superpolynomial lower bounds for homogeneous formulas for polynomials of small degree imply superpolynomial lower bounds for general formulas. We show that for any n-variate set-multilinear polynomial f of degree r, if there exists a (fan-in-2) formula of size s and depth d for f, then there exists a set-multilinear formula of size O((d + 2)^r · s) for f. In particular, for any r ≤ log n / log log n, if there exists a polynomial-size formula for f then there exists a polynomial-size set-multilinear formula for f. This shows that superpolynomial lower bounds for set-multilinear formulas for polynomials of small degree imply superpolynomial lower bounds for general formulas.
Geometric Complexity Theory V: Equivalence between black-box derandomization of polynomial identity testing and derandomization of Noether’s Normalization Lemma (Abstract) Dedicated to Sri Ramakrishna
Abstract
It is shown that black-box derandomization of polynomial identity testing (PIT) is essentially equivalent to derandomization of Noether’s Normalization Lemma for explicit algebraic varieties, the problem that lies at the heart of the foundational classification problem of algebraic geometry. Specifically: (1) It is shown that, in characteristic zero, black-box derandomization of the symbolic trace identity testing (STIT) brings the problem of derandomizing Noether’s Normalization Lemma for the ring of invariants of the adjoint action of the general linear group on a tuple of matrices from EXPSPACE (where it is currently) to P. Next it is shown that assuming the Generalized Riemann Hypothesis (GRH), instead of the black-box derandomization hypothesis, brings the problem from EXPSPACE to quasi-PH, instead of P. Thus black-box derandomization ...
Lecture 18: Arithmetic Complexity. Instructor: Madhu Sudan
, 2012
Abstract
2. Basic problems and results. Most of the material of today’s lecture is covered in the survey of [SY10] (see link on the course website). 2 Problems. We will be interested in two types of problems: 1. Computing a function φ: F^n → F^m of the form φ = (φ1, ..., φm), where each φi ∈ F[x1, ..., xn] is a polynomial (for example, computing the determinant or the permanent of an n × n matrix). 2. Given φ: F^n × F^m → F^l and given x ∈ F^n, finding y ∈ F^m such that φ(x, y) = 0 (this problem is similar to Hilbert’s Nullstellensatz). 3 Arithmetic Circuits. Also known as straight-line programs, this is a natural arithmetic model of computation (similar to Boolean circuits). Definition 3.1 (Informal). An arithmetic circuit C over a field F consists of the following: Input variables: x1, ..., xn. Gates: a list of gates of the form “yi ← A ⋄ B” where ⋄ ∈ {+, −, ∗, ÷} and A, B ∈ ...
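The straight-line-program view of Definition 3.1 can be sketched directly in code. A hypothetical minimal evaluator (the gate encoding is an assumption for illustration; division is done over the rationals to keep arithmetic exact):

```python
from fractions import Fraction

def evaluate_slp(gates, inputs):
    """Evaluate a straight-line program (arithmetic circuit).

    gates: a list of (op, a, b) triples computed in order, where op is
    one of '+', '-', '*', '/' and a, b name either an input variable
    'x0', 'x1', ... or an earlier gate 'y0', 'y1', ....
    Returns the value of the last gate.
    """
    env = {f"x{i}": Fraction(v) for i, v in enumerate(inputs)}
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    for i, (op, a, b) in enumerate(gates):
        env[f"y{i}"] = ops[op](env[a], env[b])   # gate yi <- a (op) b
    return env[f"y{len(gates) - 1}"]

# determinant of the 2x2 matrix [[x0, x1], [x2, x3]] as a circuit:
det2 = [('*', 'x0', 'x3'), ('*', 'x1', 'x2'), ('-', 'y0', 'y1')]
```

For example, `evaluate_slp(det2, [1, 2, 3, 4])` computes 1·4 − 2·3 = −2; the size of the circuit is the number of gates.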
(1.1) 1/q correlation
, 2013
Abstract
We draw two incomplete, biased maps of challenges in computational complexity lower bounds. Our aim is to put these challenges in perspective, and to present some connections which do not seem widely known. We do not survey existing lower bounds, go through the history, or repeat standard definitions. All of this can be found, e.g., in the recent book [Juk12], or in the books and ...
Algebraic Complexity Classes
, 2012
Abstract
In this survey, I am going to try and describe the algebraic complexity framework originally proposed by Leslie Valiant [Val79, Val82], and the insights that have been obtained more recently. This entire article has an “as it appeals to me” flavour, but I hope this flavour will also be interesting to many readers. The article is not particularly in-depth, but it is ...
Verifiable Delegation of Computation on Outsourced Data
Abstract
We address the problem in which a client stores a large amount of data with an untrusted server in such a way that, at any moment, the client can ask the server to compute a function on some portion of its outsourced data. In this scenario, the client must be able to efficiently verify the correctness of the result despite no longer knowing the inputs of the delegated computation; it must be able to keep adding elements to its remote storage; and it does not have to fix in advance (i.e., at data-outsourcing time) the functions that it will delegate. Even more ambitiously, clients should be able to verify in time independent of the input size, a very appealing property for computations over huge amounts of data. In this work we propose novel cryptographic techniques that solve the above problem for the class of computations of quadratic polynomials over a large number of variables. This class covers a wide range of significant arithmetic computations, notably many important statistics. To confirm the efficiency of our solution, we show encouraging performance results; e.g., correctness proofs have size below 1 kB.
Pivoted at NEXP ⊄ ACC⁰
Abstract
A couple of years ago, Ryan Williams settled a long-standing open problem by showing that NEXP ⊄ ACC⁰. To obtain this result, Williams applied an abundance of classical as well as more recent results from complexity theory. In particular, beautiful results concerning the tradeoffs between hardness and randomness were used. Some of the required building blocks for the proof, such as IP = PSPACE, Toda’s Theorem and the Nisan-Wigderson pseudorandom generator, are well-documented in standard books on complexity theory, but others, such as the beautiful Impagliazzo-Kabanets-Wigderson Theorem, are not. In this course we present Williams’ proof assuming fairly standard knowledge of complexity theory. More precisely, only an undergraduate-level background in complexity (namely, Turing machines, “standard” complexity classes, reductions and completeness) is assumed, but we also build upon several well-known and well-documented results, such as the above, in a black-box fashion. On the other hand, we allow ourselves to stray and discuss related topics not used in Williams’ proof. In particular, we cannot help but spend the last two lectures on matrix rigidity, which is related to a classical wide-open problem in circuit complexity. I am thankful to all of the students for attending the course, conducting interesting discussions, and scribing the lecture notes (and for putting up with endless iterations).