Results 1–10 of 23
Extensions to the Method of Multiplicities, with applications to Kakeya Sets and Mergers
, 2009
"... We extend the “method of multiplicities ” to get the following results, of interest in combinatorics and randomness extraction. 1. We show that every Kakeya set in F n q, the ndimensional vector space over the finite field on q elements, must be of size at least q n /2 n. This bound is tight to wit ..."
Abstract

Cited by 14 (5 self)
We extend the “method of multiplicities” to get the following results, of interest in combinatorics and randomness extraction. 1. We show that every Kakeya set in F_q^n, the n-dimensional vector space over the finite field of q elements, must be of size at least q^n/2^n. This bound is tight to within a 2 + o(1) factor for every n as q → ∞. 2. We give improved “randomness mergers”, i.e., seeded functions that take as input k (possibly correlated) random variables in {0,1}^N and a short random seed, and output a single random variable in {0,1}^N that is statistically close to having entropy (1−δ)·N when one of the k input variables is distributed uniformly. The seed we require is only (1/δ)·log k bits long, which significantly improves upon previous constructions of mergers. The “method of multiplicities”, as used in prior work, analyzed subsets of vector spaces over finite fields by constructing somewhat-low-degree interpolating polynomials that vanish on every point in the subset with high multiplicity. The typical use of this method involved showing that the interpolating polynomial also vanished on some points outside the subset, and then used simple …
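The finite-field Kakeya bound in this abstract can be sanity-checked at toy scale. Below is a minimal brute-force sketch (our own helper names, not the paper's method): a checker for the Kakeya property plus the standard small Kakeya set construction in F_q^2 based on differences of squares, compared against the q^n/2^n bound.

```python
import itertools

def is_kakeya(S, q, n):
    """Check that S contains a full line in every direction of F_q^n."""
    S = set(S)
    for b in itertools.product(range(q), repeat=n):
        if not any(b):
            continue
        # the line through a with direction b is {a + t*b : t in F_q};
        # since the whole line must lie inside S, it suffices to try a in S
        if not any(
            all(tuple((a[i] + t * b[i]) % q for i in range(n)) in S
                for t in range(q))
            for a in S
        ):
            return False
    return True

q = 5  # any odd prime works here
# For direction (1, b), write b = 2s; then b*t - b^2/4 = t^2 - (t - s)^2,
# so the line through (0, -b^2/4) lies inside the set below.
K = {(t, (t * t - u * u) % q) for t in range(q) for u in range(q)}
K |= {(0, y) for y in range(q)}  # one vertical line covers direction (0, 1)

print(len(K), q**2 / 2**2, is_kakeya(K, q, 2))
```

For q = 5 this yields a 17-point Kakeya set, comfortably above the bound q^2/2^2 = 6.25 and of order q^2/2, consistent with the 2 + o(1) tightness claim.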
Improved lower bound on the size of Kakeya sets over finite fields
, 2008
"... In a recent breakthrough, Dvir showed that every Kakeya set in F n must be of cardinality at least cnF  n where cn ≈ 1/n!. We improve this lower bound to β n F  n for a constant β> 0. This pins down the growth of the leading constant to the right form as a function of n. Let F be a finite field ..."
Abstract

Cited by 10 (3 self)
In a recent breakthrough, Dvir showed that every Kakeya set in F^n must be of cardinality at least c_n·|F|^n, where c_n ≈ 1/n!. We improve this lower bound to β^n·|F|^n for a constant β > 0. This pins down the growth of the leading constant to the right form as a function of n. Let F be a finite field of q elements. Definition 1 (Kakeya Set) A set K ⊆ F^n is said to be a Kakeya set in F^n if for every b ∈ F^n there exists a point a ∈ F^n such that for every t ∈ F, the point a + t·b ∈ K. We show: Theorem 2 There exist constants c_0, c_1 > 0 such that for all n, if K is a Kakeya set in F^n then |K| ≥ c_0·(c_1·q)^n. Remark Our proofs give some tradeoffs on the constants c_0, c_1 that are achievable. We comment on the constants at the end of the paper. The question of establishing lower bounds on the size of Kakeya sets was posed by Wolff [7]. Until recently, the best known lower bound on the size of Kakeya sets was of the form q^(αn) for some α < 1. In a recent breakthrough, Dvir [1] showed that every Kakeya set must have cardinality at least c_n·q^n for c_n = 1/n!.
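Definition 1 can be tested exhaustively in the smallest case, F_2^2 (a toy search of ours, not the paper's argument): the minimum Kakeya set there turns out to have 3 of the 4 points, consistent with the exponential-in-n shape of the bounds.

```python
import itertools

q, n = 2, 2
points = list(itertools.product(range(q), repeat=n))
directions = [b for b in points if any(b)]

def is_kakeya(S):
    # Definition 1: S contains the full line {a + t*b : t in F_q}
    # for every direction b
    return all(
        any(
            all(tuple((a[i] + t * b[i]) % q for i in range(n)) in S
                for t in range(q))
            for a in S
        )
        for b in directions
    )

# exhaustive search over all subsets of F_2^2 for Kakeya-set sizes
sizes = [r for r in range(q**n + 1)
         if any(is_kakeya(set(S)) for S in itertools.combinations(points, r))]
print(min(sizes))
```

Two points only ever form a line in one direction, so three points are necessary; one witness is {(0,0), (1,0), (1,1)}, which contains a line in each of the three directions.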
THE ENDPOINT CASE OF THE BENNETT–CARBERY–TAO MULTILINEAR KAKEYA CONJECTURE
, 811
"... Abstract. We prove the endpoint case of the multilinear Kakeya conjecture of Bennett, Carbery, and Tao. The proof uses the polynomial method introduced by Dvir. In [1], Bennett, Carbery, and Tao formulated a multilinear Kakeya conjecture, and they proved the conjecture except for the endpoint case. ..."
Abstract

Cited by 6 (1 self)
Abstract. We prove the endpoint case of the multilinear Kakeya conjecture of Bennett, Carbery, and Tao. The proof uses the polynomial method introduced by Dvir. In [1], Bennett, Carbery, and Tao formulated a multilinear Kakeya conjecture, and they proved the conjecture except for the endpoint case. In this paper, we slightly sharpen their result by proving the endpoint case of the conjecture. Our method of proof is very different from that of Bennett, Carbery, and Tao. The original proof was based on monotonicity estimates for heat flows. In 2007, Dvir [2] made a breakthrough on the Kakeya problem, proving the Kakeya conjecture over finite fields. His proof used polynomials in a crucial way. It was not clear whether Dvir’s approach could be adapted to prove estimates in Euclidean space. Our proof of the multilinear Kakeya conjecture is based on Dvir’s polynomial method. In my opinion, the method of proof is as interesting as the result. The multilinear Kakeya conjecture concerns the overlap properties of cylindrical tubes in R^n. Roughly, the (multilinear) Kakeya conjecture says that cylinders pointing in different directions cannot overlap too much. Before coming to the Bennett–Carbery–Tao multilinear estimate, I want to state a weaker result, because it’s easier to understand and easier to prove. To be clear about the notation, a cylinder of radius R around a line L ⊂ R^n is the set of all points x ∈ R^n within distance R of the line L. We call the line L the core of the cylinder. Theorem 1. Suppose we have a finite collection of cylinders T_(j,a) ⊂ R^n, where 1 ≤ j ≤ n and 1 ≤ a ≤ A for some integer A. Each cylinder has radius 1. Moreover, each cylinder T_(j,a) runs nearly parallel to the x_j-axis. More precisely, we assume that the angle between the core of T_(j,a) and the x_j-axis is at most (100n)^(−1). We let I be the set of points that belong to at least one cylinder in each direction; in symbols, I = ⋂_(j=1)^n ⋃_(a=1)^A T_(j,a). Then Vol(I) ≤ C(n)·A^(n/(n−1)).
On Lines and Joints
, 2009
"... Let L be a set of n lines in R d, for d ≥ 3. A joint of L is a point incident to at least d lines of L, not all in a common hyperplane. Using a very simple algebraic proof technique, we show that the maximum possible number of joints of L is Θ(n d/(d−1)). For d = 3, this is a considerable simplifica ..."
Abstract

Cited by 4 (0 self)
Let L be a set of n lines in R^d, for d ≥ 3. A joint of L is a point incident to at least d lines of L, not all in a common hyperplane. Using a very simple algebraic proof technique, we show that the maximum possible number of joints of L is Θ(n^(d/(d−1))). For d = 3, this is a considerable simplification of the original algebraic proof of Guth and Katz [9], and of the follow-up simpler proof of Elekes et al. [6]. Some extensions, e.g., to the case of joints of algebraic curves, are also presented.
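The tightness side of the Θ(n^(d/(d−1))) bound for d = 3 comes from the axis-parallel lines of a k × k × k grid: 3k^2 lines producing k^3 = (n/3)^(3/2) joints. The brute-force sketch below (our own helper names) counts them directly:

```python
import itertools

k = 3
# the 3k^2 axis-parallel lines of a k x k x k grid, as (point, direction) pairs
lines = []
for i, j in itertools.product(range(k), repeat=2):
    lines.append(((0, i, j), (1, 0, 0)))
    lines.append(((i, 0, j), (0, 1, 0)))
    lines.append(((i, j, 0), (0, 0, 1)))

def on_line(p, line):
    a, d = line
    v = tuple(p[i] - a[i] for i in range(3))
    # p lies on the line iff p - a is parallel to d, i.e. cross product is zero
    cross = (v[1]*d[2] - v[2]*d[1], v[2]*d[0] - v[0]*d[2], v[0]*d[1] - v[1]*d[0])
    return cross == (0, 0, 0)

def is_joint(p):
    dirs = [d for (a, d) in lines if on_line(p, (a, d))]
    # a joint needs >= 3 incident lines whose directions span R^3
    for u, v, w in itertools.combinations(dirs, 3):
        det = (u[0]*(v[1]*w[2] - v[2]*w[1]) - u[1]*(v[0]*w[2] - v[2]*w[0])
               + u[2]*(v[0]*w[1] - v[1]*w[0]))
        if det != 0:
            return True
    return False

joints = sum(is_joint(p) for p in itertools.product(range(k), repeat=3))
n = len(lines)
print(joints, n, (n / 3) ** 1.5)  # k^3 joints from n = 3k^2 lines
```

Every grid point lies on one line of each of the three directions, whose direction vectors are linearly independent, so all k^3 grid points are joints.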
The Kakeya set and maximal conjectures for algebraic varieties over finite fields
"... Abstract. Using the polynomial method of Dvir [5], we establish optimal estimates for Kakeya sets and Kakeya maximal functions associated to algebraic varieties W over finite fields F. For instance, given an n−1dimensional projective variety W ⊂ Pn (F), we establish the Kakeya maximal estimate ‖ su ..."
Abstract

Cited by 4 (0 self)
Abstract. Using the polynomial method of Dvir [5], we establish optimal estimates for Kakeya sets and Kakeya maximal functions associated to algebraic varieties W over finite fields F. For instance, given an (n−1)-dimensional projective variety W ⊂ P^n(F), we establish the Kakeya maximal estimate ‖ sup …
Additive Combinatorics and Theoretical Computer Science
, 2009
"... Additive combinatorics is the branch of combinatorics where the objects of study are subsets of the integers or of other abelian groups, and one is interested in properties and patterns that can be expressed in terms of linear equations. More generally, arithmetic combinatorics deals with properties ..."
Abstract

Cited by 4 (0 self)
Additive combinatorics is the branch of combinatorics where the objects of study are subsets of the integers or of other abelian groups, and one is interested in properties and patterns that can be expressed in terms of linear equations. More generally, arithmetic combinatorics deals with properties and patterns that can be expressed via additions and multiplications. In the past ten years, additive and arithmetic combinatorics have been extremely successful areas of mathematics, featuring a convergence of techniques from graph theory, analysis, and ergodic theory. They have helped resolve long-standing open questions in additive number theory, and they offer much promise of future progress. Techniques from additive and arithmetic combinatorics have also found several applications in computer science: to property testing, pseudorandomness, PCP constructions, lower bounds, and extractor constructions. Typically, whenever a technique from additive or arithmetic combinatorics becomes understood by computer scientists, it finds some application. Considering that there is still a lot of additive and arithmetic combinatorics that computer scientists do not understand (and, the field being very active, even more will be developed in the near future), there seems to be much potential for future connections and applications.
New affine-invariant codes from lifting
Electronic Colloquium on Computational Complexity (ECCC)
, 2012
"... In this work we explore errorcorrecting codes derived from the “lifting ” of “affineinvariant” codes. Affineinvariant codes are simply linear codes whose coordinates are a vector space over a field and which are invariant under affinetransformations of the coordinate space. Lifting takes codes d ..."
Abstract

Cited by 2 (1 self)
In this work we explore error-correcting codes derived from the “lifting” of “affine-invariant” codes. Affine-invariant codes are simply linear codes whose coordinates form a vector space over a field and which are invariant under affine transformations of the coordinate space. Lifting takes codes defined over a vector space of small dimension and lifts them to higher dimensions by requiring their restriction to every subspace of the original dimension to be a codeword of the code being lifted. While the operation is of interest on its own, this work focuses on new ranges of parameters that can be obtained by such codes, in the context of local correction and testing. In particular we present four interesting ranges of parameters that can be achieved by such lifts, all of which are new in the context of affine-invariance and some may be new even in general. The main highlight is a construction of high-rate codes with sublinear-time decoding. The only prior construction of such codes is due to Kopparty, Saraf and Yekhanin [33]. All our codes are extremely simple, being just lifts of various parity-check codes (codes with one symbol of redundancy), and in the final case, the lift of a Reed–Solomon code. We also present a simple connection between certain lifted codes and lower bounds on the size of “Nikodym sets”. Roughly, a Nikodym set in F_q^m is a set S with the property that every point has a line passing through it which is almost entirely contained in S. While previous lower bounds on Nikodym sets were roughly of the form q^m/2^m, we use our lifted codes to prove a lower bound of (1 − o(1))·q^m for fields of constant characteristic.
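The lifting operation described here can be run by brute force at the smallest interesting size. The sketch below uses toy parameters of our own choosing: the base code is the length-3 parity-check code over F_3 (univariate functions whose values sum to 0), and we enumerate all functions on F_3^2 whose restriction to every affine line lies in the base code. At this tiny size the lift happens to coincide with the 27 bivariate polynomials of degree ≤ 1; the parameter gains the abstract describes appear only at larger settings.

```python
import itertools

q, m = 3, 2
points = list(itertools.product(range(q), repeat=m))

# all affine lines in F_q^2: one representative direction per parallel class,
# with duplicate parametrizations removed via frozenset
directions = [(1, 0), (0, 1), (1, 1), (1, 2)]
lines, seen = [], set()
for b in directions:
    for a in points:
        L = frozenset(tuple((a[i] + t * b[i]) % q for i in range(m))
                      for t in range(q))
        if L not in seen:
            seen.add(L)
            lines.append(L)

def in_lift(f):
    # f is in the lift iff its restriction to every line is a base codeword,
    # i.e. the values on each line sum to 0 mod q
    return all(sum(f[p] for p in L) % q == 0 for L in lines)

count = sum(in_lift(dict(zip(points, values)))
            for values in itertools.product(range(q), repeat=len(points)))
print(count, len(lines))
```

The enumeration finds 12 lines and 27 codewords; the 27 degree-≤ 1 polynomials a + bx + cy all qualify because their restriction to any line is again degree ≤ 1, and over F_3 such a restriction sums to 0.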
Bridging Shannon and Hamming: List Error-Correction with Optimal Rate
"... Abstract. Errorcorrecting codes tackle the fundamental problem of recovering from errors during data communication and storage. A basic issue in coding theory concerns the modeling of the channel noise. Shannon’s theory models the channel as a stochastic process with a known probability law. Hammin ..."
Abstract

Cited by 1 (0 self)
Abstract. Error-correcting codes tackle the fundamental problem of recovering from errors during data communication and storage. A basic issue in coding theory concerns the modeling of the channel noise. Shannon’s theory models the channel as a stochastic process with a known probability law. Hamming suggested a combinatorial approach where the channel causes worst-case errors subject only to a limit on the number of errors. These two approaches share many common tools; however, in terms of quantitative results, the classical results for worst-case errors were much weaker. We survey recent progress on list decoding, highlighting its power and generality as an avenue to construct codes resilient to worst-case errors with information rates similar to what is possible against probabilistic errors. In particular, we discuss recent explicit constructions of list-decodable codes with information-theoretically optimal redundancy that is arbitrarily close to the fraction of symbols that can be corrupted by worst-case errors.
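The unique-versus-list decoding distinction the survey builds on is easy to see with a toy brute-force decoder (our own example, not from the survey): within half the minimum distance at most one codeword can match, while beyond it the decoder can still return a short list.

```python
def hamming(u, v):
    # Hamming distance between two equal-length words
    return sum(a != b for a, b in zip(u, v))

def list_decode(code, received, e):
    # brute-force list decoder: every codeword within distance e of received
    return [c for c in code if hamming(c, received) <= e]

# toy code: the 5-fold binary repetition code, minimum distance 5
code = [(0,) * 5, (1,) * 5]
received = (0, 0, 1, 1, 1)

print(list_decode(code, received, 2))  # radius 2 = floor((5-1)/2): unique
print(list_decode(code, received, 3))  # radius 3: a list of two candidates
```

At radius 2 only (1,1,1,1,1) is returned; at radius 3 both codewords qualify, which is exactly the regime where list decoding recovers information that unique decoding must give up.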