Results 1–10 of 28
Boosting and differential privacy
, 2010
Abstract

Cited by 293 (8 self)
Boosting is a general method for improving the accuracy of learning algorithms. We use boosting to construct improved privacy-preserving synopses of an input database. These are data structures that yield, for a given set Q of queries over an input database, reasonably accurate estimates of the responses to every query in Q, even when the number of queries is much larger than the number of rows in the database. Given a base synopsis generator that takes a distribution on Q and produces a “weak” synopsis that yields “good” answers for a majority of the weight in Q, our Boosting for Queries algorithm obtains a synopsis that is good for all of Q. We ensure privacy for the rows of the database, but the boosting is performed on the queries. We also provide the first synopsis generators for arbitrary sets of arbitrary low-sensitivity queries.
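The query-boosting loop described in the abstract can be sketched with a toy multiplicative-weights simplification. Everything below is hypothetical (`boost_for_queries`, the accuracy threshold, the toy base generator), and the noise addition that makes the real algorithm private is omitted entirely; this only illustrates how reweighting queries turns a weak synopsis into one good for all of Q:

```python
# Toy, NON-private sketch of boosting over queries (hypothetical names;
# the actual algorithm in the paper adds noise for privacy, omitted here).
def boost_for_queries(queries, true_answers, base_generator, rounds=10, eta=0.5):
    """base_generator(dist) returns a weak synopsis: a function accurate on
    a majority of the query weight under dist. The boosted synopsis answers
    each query with the median of the per-round estimates."""
    n = len(queries)
    weights = [1.0] * n
    synopses = []
    for _ in range(rounds):
        total = sum(weights)
        dist = [w / total for w in weights]
        syn = base_generator(dist)            # weak synopsis for this round
        synopses.append(syn)
        for i, q in enumerate(queries):
            if abs(syn(q) - true_answers[i]) <= 0.1:
                weights[i] *= (1 - eta)       # down-weight well-answered queries
    def aggregated(q):
        vals = sorted(s(q) for s in synopses)
        return vals[len(vals) // 2]           # median aggregation
    return aggregated

# Demo: a toy base generator that answers the heaviest half of the
# queries exactly and returns 0 elsewhere.
queries = [0, 1, 2, 3]
answers = [1, 2, 3, 4]

def toy_base(dist):
    top = set(sorted(range(len(dist)), key=lambda i: -dist[i])[:len(dist) // 2])
    return lambda q: answers[q] if q in top else 0

synopsis = boost_for_queries(queries, answers, toy_base)
```

Boosting alternates the base generator's attention between the two halves of the query set, and the median over rounds is then correct on every query.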
Norm convergence of multiple ergodic averages for commuting transformations
, 2007
Abstract

Cited by 34 (1 self)
Let T_1, …, T_l : X → X be commuting measure-preserving transformations on a probability space (X, 𝒳, µ). We show that the multiple ergodic averages (1/N) ∑_{n=0}^{N−1} f_1(T_1^n x) ⋯ f_l(T_l^n x) converge in L²(X, 𝒳, µ) as N → ∞ for all f_1, …, f_l ∈ L^∞(X, 𝒳, µ); this was previously established for l = 2 by Conze and Lesigne [2] and for general l, assuming some additional ergodicity hypotheses on the maps T_i and T_i T_j^{−1}, by Frantzikinakis and Kra [3] (with the l = 3 case of this result established earlier in [29]). Our approach is combinatorial and finitary in nature, inspired by recent developments regarding the hypergraph regularity and removal lemmas, although we will not need the full strength of those lemmas. In particular, the l = 2 case of our arguments is a finitary analogue of those in [2].
Computational Differential Privacy
Abstract

Cited by 19 (1 self)
The definition of differential privacy has recently emerged as a leading standard of privacy guarantees for algorithms on statistical databases. We offer several relaxations of the definition which require privacy guarantees to hold only against efficient—i.e., computationally bounded—adversaries. We establish various relationships among these notions, and in doing so, we observe their close connection with the theory of pseudodense sets by Reingold et al. [1]. We extend the dense model theorem of Reingold et al. to demonstrate equivalence between two definitions (indistinguishability- and simulatability-based) of computational differential privacy. Our computational analogues of differential privacy seem to allow for more accurate constructions than the standard information-theoretic analogues. In particular, in the context of private approximation of the distance between two vectors, we present a differentially private protocol for computing the approximation, and contrast it with a substantially more accurate protocol that is only computationally differentially private.
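For reference, the information-theoretic definition being relaxed says that a randomized mechanism M is ε-differentially private if, for every pair of databases D, D′ differing in a single row and every measurable set S of outputs,

```latex
% Standard \varepsilon-differential privacy (the notion the paper relaxes):
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S]
```

Roughly speaking, the computational relaxations require such an inequality to hold only with respect to efficient (polynomial-size) distinguishers, up to a negligible slack.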
Regularity, boosting, and efficiently simulating every high-entropy distribution
 In Proceedings of the 24th IEEE Conference on Computational Complexity
, 2009
Abstract

Cited by 8 (2 self)
We show that every high-entropy distribution is indistinguishable from an efficiently samplable distribution of the same entropy. Specifically, we prove that if D is a distribution over {0, 1}^n of min-entropy at least n − k, then for every S and ε there is a circuit C of size at most S · poly(ε^{−1}, 2^k) that samples a distribution of entropy at least n − k that is ε-indistinguishable from D by circuits of size S. Stated in a more abstract form (where we refer to indistinguishability by arbitrary families of distinguishers rather than bounded-size circuits), our result implies (a) the Weak Szemerédi Regularity Lemma of Frieze and Kannan, (b) a constructive version of the Dense Model Theorem of Green, Tao and Ziegler with better quantitative parameters (polynomial rather than exponential in the distinguishing probability ε), and (c) the Impagliazzo Hardcore Set Lemma. It appears to be the general result underlying the known connections between “regularity” results in graph theory, “decomposition” results in additive combinatorics, and the Hardcore Lemma in complexity theory. We present two proofs of our result, one in the spirit of Nisan’s proof of the Hardcore Lemma via duality of linear programming, and one similar to Impagliazzo’s “boosting” proof. A third proof by iterative partitioning, which gives the complexity of the sampler to be exponential in 1/ε and 2^k, is also implicit in the Green-Tao-Ziegler proofs of the Dense Model Theorem.
New results on noncommutative and commutative polynomial identity testing
 In IEEE Conference on Computational Complexity
, 2008
Abstract

Cited by 5 (1 self)
Using ideas from automata theory we design a new efficient (deterministic) identity test for the noncommutative polynomial identity testing problem (first introduced and studied in [RS05, BW05]). More precisely, given as input a noncommutative circuit C(x_1, …, x_n) computing a polynomial in F{x_1, …, x_n} of degree d with at most t monomials, where the variables x_i are noncommuting, we give a deterministic polynomial identity test that checks if C ≡ 0 and runs in time polynomial in d, n, the size of C, and t. The same method works in a black-box setting: given a noncommuting black-box polynomial f ∈ F{x_1, …, x_n} of degree d with t monomials we can, in fact, reconstruct the entire polynomial f in time polynomial in n, d and t. Indeed, we apply this idea to the reconstruction of black-box noncommuting algebraic branching programs (the ABPs considered by Nisan in [N91] and Raz-Shpilka in [RS05]). Assuming that the black-box model allows us to query the ABP for the output at any given gate, we can reconstruct an (equivalent) ABP in deterministic polynomial time. Finally, we turn to commutative identity testing and explore the complexity of the problem when the coefficients of the input polynomial come from an arbitrary finite commutative ring with unity whose elements are uniformly encoded as strings and the ring operations are given by an oracle. We show that several algorithmic results for polynomial identity testing over fields also hold when the coefficients come from such finite rings.
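As a toy illustration of why noncommutative identity testing differs from the commutative case (this is not the paper's automata-based test, just a sanity check with hypothetical helper names): the polynomial xy − yx is identically zero over commuting variables, but it is a nonzero element of F{x, y}, and a single 2×2 matrix substitution already witnesses this:

```python
# xy - yx vanishes under every scalar (commutative) substitution, yet
# substituting noncommuting 2x2 matrices shows it is a nonzero polynomial.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matsub(a, b):
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

x = [[0, 1], [0, 0]]   # nilpotent "raising" matrix
y = [[0, 0], [1, 0]]   # nilpotent "lowering" matrix

commutator = matsub(matmul(x, y), matmul(y, x))
print(commutator)  # [[1, 0], [0, -1]], a nonzero matrix
```

Matrix substitutions of growing dimension can separate any nonzero noncommutative polynomial from zero; the paper's contribution is doing this deterministically and efficiently via automata.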
DECOMPOSITIONS, APPROXIMATE STRUCTURE, TRANSFERENCE, AND THE HAHN-BANACH THEOREM
, 2008
Abstract

Cited by 5 (1 self)
We discuss three major classes of theorems in additive and extremal combinatorics: decomposition theorems, approximate structure theorems, and transference principles. We also show how the finite-dimensional Hahn-Banach theorem can be used to give short and transparent proofs of many results of these kinds. Amongst the applications of this method is a much shorter proof of one of the major steps in the proof of Green and Tao that the primes contain arbitrarily long arithmetic progressions. In order to explain the role of this step, we include a brief description of the rest of their argument. A similar proof has been discovered independently by Reingold, Trevisan, Tulsiani and Vadhan [RTTV].
A HARDY FIELD EXTENSION OF SZEMERÉDI’S THEOREM
, 2008
Abstract

Cited by 4 (1 self)
In 1975 Szemerédi proved that a set of integers of positive upper density contains arbitrarily long arithmetic progressions. Bergelson and Leibman showed in 1996 that the common difference of the arithmetic progression can be a square, a cube, or more generally of the form p(n) where p(n) is any integer polynomial with zero constant term. We produce a variety of new results of this type related to sequences that are not polynomial. We show that the common difference can be of the form [n^δ] where δ is any positive real number and [x] denotes the integer part of x. More generally, the common difference can be of the form [a(n)] where a(x) is any function from a Hardy field which is sandwiched between two consecutive powers of x, that is, a(x)/x^k → ∞ and a(x)/x^{k+1} → 0 for some nonnegative integer k. This allows us, for example, to deal with functions that can be constructed by a finite combination of the ordinary arithmetical symbols, the real constants, the real variable x, and the functional symbols exp and log, and that satisfy the previous growth assumptions. The proof combines a new structural result for Hardy sequences, techniques from ergodic theory, and some recent equidistribution results of sequences on nilmanifolds.
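For a concrete instance of the [n^δ] case (a hypothetical numeric check, using exact integer arithmetic to avoid floating-point rounding): with δ = 3/2, the admissible common differences [n^{3/2}] = isqrt(n³) begin as follows.

```python
import math

# First few admissible common differences [n**1.5] for delta = 3/2.
# floor(n**1.5) is computed exactly as isqrt(n**3) to avoid float error.
# Here a(x) = x**1.5 satisfies a(x)/x -> infinity and a(x)/x**2 -> 0 (k = 1),
# so it is sandwiched between the consecutive powers x and x**2.
diffs = [math.isqrt(n ** 3) for n in range(1, 11)]
print(diffs)  # [1, 2, 5, 8, 11, 14, 18, 22, 27, 31]
```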
Open problems in additive combinatorics
Abstract

Cited by 4 (0 self)
A brief historical introduction to the subject of additive combinatorics and a list of challenging open problems, most of which are contributed by the leading experts in the area, are presented.