Results 1–6 of 6
Testing linear-invariant function isomorphism
In ICALP, 2013
"... Abstract. A function f: F n 2 → {−1, 1} is called linearisomorphic to g if f = g ◦ A for some nonsingular matrix A. In the gisomorphism problem, we want a randomized algorithm that distinguishes whether an input function f is linearisomorphic to g or far from being so. We show that the query com ..."
Abstract

Cited by 1 (1 self)
Abstract. A function f: F_2^n → {−1, 1} is called linear-isomorphic to g if f = g ◦ A for some nonsingular matrix A. In the g-isomorphism problem, we want a randomized algorithm that distinguishes whether an input function f is linear-isomorphic to g or far from being so. We show that the query complexity to test g-isomorphism is essentially determined by the spectral norm of g. That is, if g is close to having spectral norm s, then we can test g-isomorphism with poly(s) queries, and if g is far from having spectral norm s, then we cannot test g-isomorphism with o(log s) queries. The upper bound is almost tight, since there is indeed a function g close to having spectral norm s for which testing g-isomorphism requires Ω(s) queries. As far as we know, our result is the first characterization of this type for functions. Our upper bound is essentially the Kushilevitz-Mansour learning algorithm, modified for use in the implicit setting. Exploiting our upper bound, we show that any property is testable if it can be well-approximated by functions with small spectral norm. We also extend our algorithm to the setting where A is allowed to be singular.
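The spectral norm referred to in this abstract is the ℓ1 norm of the function's Fourier (Walsh–Hadamard) coefficients, ‖ĝ‖₁ = Σ_S |ĝ(S)|. A minimal brute-force sketch over the full truth table (time 4^n, illustration only; the function and variable names below are ours, not the paper's):

```python
def walsh_hadamard_coeffs(f_vals):
    """Fourier coefficients of f: F_2^n -> {-1, +1}, given as a table.

    f_vals[x] is f at the point whose coordinates are the bits of x.
    Returns hat_f with hat_f[S] = E_x[ f(x) * (-1)^{<S, x>} ].
    """
    N = len(f_vals)
    hat = []
    for S in range(N):
        total = 0
        for x in range(N):
            # chi_S(x) = (-1)^{<S, x>}, inner product over F_2
            total += f_vals[x] * (-1 if bin(S & x).count("1") % 2 else 1)
        hat.append(total / N)
    return hat


def spectral_norm(f_vals):
    """Spectral norm ||hat f||_1 = sum_S |hat f(S)|."""
    return sum(abs(c) for c in walsh_hadamard_coeffs(f_vals))
```

For instance, the parity character on two bits (truth table [1, -1, -1, 1]) has spectral norm 1, while the AND of two bits (table [1, 1, 1, -1]) has spectral norm 2.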
A characterization of locally testable affine-invariant properties via decomposition theorems
In Proceedings of the 46th Annual ACM Symposium on Theory of Computing (STOC), 2014
"... ar ..."
(Show Context)
Algorithmic regularity for polynomials and applications
2013
"... In analogy with the regularity lemma of Szemerédi [Sze75], regularity lemmas for polynomials shown by Green and Tao [GT09] and by Kaufman and Lovett [KL08] give a way of modifying a given collection of polynomials F = {P1,..., Pm} to a new collection F ′ so that the polynomials in F ′ are “pseudora ..."
Abstract
In analogy with the regularity lemma of Szemerédi [Sze75], regularity lemmas for polynomials shown by Green and Tao [GT09] and by Kaufman and Lovett [KL08] give a way of modifying a given collection of polynomials F = {P1,..., Pm} into a new collection F′ so that the polynomials in F′ are “pseudorandom”. These lemmas have various applications, such as (special cases of) Reed-Muller testing and worst-case to average-case reductions for polynomials. However, the transformation from F to F′ is not algorithmic for either regularity lemma. We define new notions of regularity for polynomials, which are analogous to the above but which allow for an efficient algorithm to compute the pseudorandom collection F′. In particular, when the field is of high characteristic, in polynomial time, we can refine F into F′ where every nonzero linear combination of polynomials in F′ has desirably small Gowers norm. Using the algorithmic regularity lemmas, we show that if a polynomial P of degree d is within (normalized) Hamming distance 1 − 1/|F| − ε of some unknown polynomial of degree k over a prime field F (for k < d < |F|), then there is an efficient algorithm for finding a degree-k polynomial Q which is within distance 1 − 1/|F| − η of P, for some η depending on ε. This can be thought of as decoding the Reed-Muller code of order k beyond the list-decoding radius, in the sense of finding one close codeword, when the received word P itself is a polynomial (of degree larger than k but smaller than |F|). We also obtain an algorithmic version of the worst-case to average-case reductions by Kaufman and Lovett [KL08]. They show that if a polynomial of degree d can be weakly approximated by a polynomial of lower degree, then it can be computed exactly using a collection of polynomials of degree at most d − 1. We give an efficient (randomized) algorithm to find this collection.
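The Gowers norm that the refined collection is required to make small can be evaluated directly from its averaging definition. A brute-force sketch for the U² norm over F_p^n (exponential in n, illustration only; all names below are ours, not the paper's):

```python
import cmath
import itertools


def gowers_u2(f, p, n):
    """||f||_{U^2} for f: F_p^n -> C, computed from the definition
    ||f||_{U^2}^4 = E_{x,h1,h2} f(x) conj(f(x+h1)) conj(f(x+h2)) f(x+h1+h2).
    """
    pts = list(itertools.product(range(p), repeat=n))

    def add(a, b):
        # coordinate-wise addition in F_p^n
        return tuple((u + v) % p for u, v in zip(a, b))

    total = 0.0
    for x, h1, h2 in itertools.product(pts, repeat=3):
        total += (f(x)
                  * f(add(x, h1)).conjugate()
                  * f(add(x, h2)).conjugate()
                  * f(add(add(x, h1), h2)))
    return abs(total / len(pts) ** 3) ** 0.25


# A linear phase has U^2 norm exactly 1, while a quadratic phase is
# "pseudorandom" for U^2 and has strictly smaller norm.
linear = lambda x: cmath.exp(2j * cmath.pi * x[0] / 3)
quadratic = lambda x: cmath.exp(2j * cmath.pi * (x[0] * x[0] % 3) / 3)
```

The second-order additive derivative of a linear phase vanishes, so every term in the average equals 1 and the norm is exactly 1; for the quadratic phase the terms cancel and the norm drops below 1, which is the sense in which the refined collection is pseudorandom.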