Results 1–10 of 40
The Influence of Variables on Boolean Functions (Extended Abstract)
, 1988
Abstract

Cited by 229 (20 self)
Introduction This paper applies methods from harmonic analysis to prove some general theorems on boolean functions. The result that is easiest to describe says that "Boolean functions always have small dominant sets of variables." The exact definitions will be given shortly, but let us be more specific: Let f be an n-variable boolean function taking the value zero for half of the 2^n variable assignments. Then there is a set of o(n) variables such that almost surely the value of f is undetermined as long as these variables are not assigned values. This proves some of the conjectures made in [BL]. These new connections with harmonic analysis are very promising. Besides the results on boolean functions, they enable us to prove new theorems on the rapid mixing of the random walk on the cube, as well as new theorems in the extremal theory of finite sets. We begin by reviewing some definitions from [BL].
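The central quantity behind results of this kind, the influence of a variable on a boolean function, can be computed directly for small n. A minimal sketch (the function names are illustrative, not from the paper):

```python
from itertools import product

def influence(f, n, i):
    """Influence of variable i on boolean f: the probability, over a
    uniformly random assignment x, that flipping bit i changes f(x)."""
    count = 0
    for x in product([0, 1], repeat=n):
        y = list(x)
        y[i] ^= 1
        if f(x) != f(tuple(y)):
            count += 1
    return count / 2 ** n

majority = lambda x: int(sum(x) > len(x) / 2)
dictator = lambda x: x[0]

# For n = 3: each variable influences majority with probability 1/2,
# while the dictator function depends only on variable 0.
print([influence(majority, 3, i) for i in range(3)])  # [0.5, 0.5, 0.5]
print([influence(dictator, 3, i) for i in range(3)])  # [1.0, 0.0, 0.0]
```

A "small dominant set" in the sense above is a small set of variables whose assignment almost surely already determines the function's value.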
Unbiased Bits from Sources of Weak Randomness and Probabilistic Communication Complexity
, 1988
Abstract

Cited by 187 (5 self)
(Introduction and References only) Benny Chor, Oded Goldreich, MIT Laboratory for Computer Science, Cambridge, Massachusetts 02139. ABSTRACT: A new model for weak random physical sources is presented. The new model strictly generalizes previous models (e.g. the Santha and Vazirani model [24]). The sources considered output strings according to probability distributions in which no single string is too probable. The new model provides a fruitful viewpoint on problems studied previously, such as:
• Extracting almost perfect bits from sources of weak randomness: both the possibility and the efficiency of such extraction schemes are addressed.
• Probabilistic communication complexity: it is shown that most functions have linear communication complexity in a very strong probabilistic sense.
• Robustness of BPP with respect to sources of weak randomness (generalizing a result of Vazirani and Vazirani [27]).
The paper has appeared in SIAM Journal o...
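The flavour of extraction studied here can be illustrated with the inner-product function, which combines one sample from each of two independent weak sources into a single nearly unbiased bit. A simplified sketch, not the paper's exact construction:

```python
from itertools import product

def inner_product_bit(x, y):
    """Inner product mod 2 of two bit strings, one drawn from each of
    two independent sources."""
    return sum(a & b for a, b in zip(x, y)) % 2

# Over all pairs of 3-bit strings the output is close to balanced:
# 28 ones versus 36 zeros out of 64 pairs.
bits = [inner_product_bit(x, y)
        for x in product([0, 1], repeat=3)
        for y in product([0, 1], repeat=3)]
print(sum(bits), len(bits))  # 28 64
```

When each source has enough min-entropy, the bias of this bit shrinks; quantifying that is the kind of question the paper's model makes precise.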
On the complexity of the parity argument and other inefficient proofs of existence
 JCSS
, 1994
Abstract

Cited by 158 (8 self)
We define several new complexity classes of search problems, "between" the classes FP and FNP. These new classes are contained, along with factoring and the class PLS, in the class TFNP of search problems in FNP that always have a witness. A problem in each of these new classes is defined in terms of an implicitly given, exponentially large graph. The existence of the solution sought is established via a simple graph-theoretic argument with an inefficiently constructive proof; for example, PLS can be thought of as corresponding to the lemma "every dag has a sink." The new classes are based on lemmata such as "every graph has an even number of odd-degree nodes." They contain several important problems for which no polynomial-time algorithm is presently known, including the computational versions of Sperner's lemma, Brouwer's fixed point theorem, Chevalley's theorem, and the Borsuk-Ulam theorem, the linear complementarity problem for P-matrices, finding a mixed equilibrium in a non-zero-sum game, finding a second Hamilton circuit in a Hamiltonian cubic graph, a second Hamiltonian decomposition in a quartic graph, and others. Some of these problems are shown to be complete. © 1994 Academic Press, Inc.
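The parity lemma "every graph has an even number of odd-degree nodes" is trivial to verify on an explicit graph, while finding a second odd-degree node in an exponentially large implicit graph may require exhaustive search; that gap is what the new classes capture. A small sketch of the lemma itself:

```python
from collections import defaultdict

def odd_degree_count(edges):
    """Number of odd-degree nodes in a graph given as an edge list.
    Every edge contributes 2 to the total degree, so the degree sum is
    even and the number of odd-degree nodes must be even as well."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return sum(1 for d in degree.values() if d % 2 == 1)

# A path has exactly two odd-degree nodes: its endpoints.
print(odd_degree_count([(0, 1), (1, 2), (2, 3)]))  # 2
```

Given one endpoint of a path in an implicitly defined graph, the lemma guarantees a second odd-degree node exists, but offers no efficient way to find it.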
On The Power Of Two-Point Based Sampling
 Journal of Complexity
, 1989
Abstract

Cited by 93 (17 self)
The purpose of this note is to present a new sampling technique and to demonstrate some of its properties. The new technique consists of picking two elements at random, and deterministically generating (from them) a long sequence of pairwise independent elements. The sequence is guaranteed to intersect, with high probability, any set of non-negligible density. 1. Introduction In recent years the role of randomness in computation has become more and more dominant. Randomness was used to speed up sequential computations (e.g. primality testing, testing polynomial identities, etc.), but its effect on parallel and distributed computation is even more impressive. In both cases the solutions are typically presented such that they are guaranteed to produce the desired result with some non-negligible probability. It is implicitly suggested that if a higher degree of confidence is required the algorithm should be run several times, each time using different coin tosses. Since the coin tosses f...
Monotone Circuits for Matching Require Linear Depth
Abstract

Cited by 76 (8 self)
We prove that monotone circuits computing the perfect matching function on n-vertex graphs require Ω(n) depth. This implies an exponential gap between the depth of monotone and non-monotone circuits.
Pseudo-Random Graphs
 IN: MORE SETS, GRAPHS AND NUMBERS, BOLYAI SOCIETY MATHEMATICAL STUDIES 15
Approximate Inclusion-Exclusion
 Combinatorica
, 1993
Abstract

Cited by 41 (4 self)
The Inclusion-Exclusion formula expresses the size of a union of a family of sets in terms of the sizes of intersections of all subfamilies. This paper considers approximating the size of the union when intersection sizes are known for only some of the subfamilies, or when these quantities are given to within some error, or both.
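The exact formula being approximated can be written down directly: the alternating sum over all nonempty subfamilies, which is exponential in the number of sets and is what motivates approximation from partial data. A minimal sketch:

```python
from itertools import combinations

def union_size(sets):
    """Inclusion-Exclusion: |A1 ∪ ... ∪ Ak| as the alternating sum of
    the sizes of the intersections of all nonempty subfamilies."""
    total = 0
    for r in range(1, len(sets) + 1):
        sign = (-1) ** (r + 1)
        for sub in combinations(sets, r):
            total += sign * len(set.intersection(*sub))
    return total

family = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}]
# 3 + 3 + 3 - 2 - 1 - 2 + 1 = 5, matching |{1,...,5}|.
print(union_size(family))  # 5
```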
Constructing Small Sample Spaces Satisfying Given Constraints
 SIAM JOURNAL ON DISCRETE MATHEMATICS
, 1993
Abstract

Cited by 31 (3 self)
The subject of this paper is finding small sample spaces for joint distributions of n discrete random variables. Such distributions are often only required to obey a certain limited set of constraints of the form Pr(E)=. We show that the problem of deciding whether there exists any distribution satisfying a given set of constraints is NP-hard. However, if the constraints are consistent, then there exists a distribution satisfying them which is supported by a "small" sample space (one whose cardinality is equal to the number of constraints). For the important case of independence constraints, where the constraints have a certain form and are consistent with a joint distribution of n independent random variables, a small sample space can be constructed in polynomial time. This last result is also useful for derandomizing algorithms. We demonstrate this technique by an application to the problem of finding large independent sets in sparse hypergraphs.
Nearly optimal distributed edge colouring in O(log log n) rounds
 in Proceedings of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 97)
, 1996
Abstract

Cited by 28 (7 self)
An extremely simple distributed randomized algorithm is presented which with high probability properly edge colours a given graph using (1 + ε)Δ colours, where Δ is the maximum degree of the graph and ε is any given positive constant. The algorithm is very fast. In particular, for graphs with sufficiently large vertex degrees (larger than polylog n, but smaller than any positive power of n), the algorithm requires only O(log log n) communication rounds. The algorithm is inherently distributed, but can be implemented on the PRAM, where it requires O(mΔ) processors and O(log Δ log log n) time, or in a sequential setting, where it requires O(mΔ) time. 1 Introduction The edge colouring problem is a much studied problem in the theory of algorithms, graph theory, and combinatorics, whose relevance to computer science stems from its applications to scheduling and resource allocation problems [6, 11, 14, 17, 19, 12, 24, among others]. Given an input graph, the problem ...
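The random-proposal step driving algorithms of this kind can be sketched as a sequential simulation (an illustrative simplification, not the paper's exact algorithm): each uncoloured edge draws a tentative colour from its remaining palette and keeps it only when no edge sharing an endpoint proposed the same colour.

```python
import random

def proposal_round(edges, palette, colour):
    """One round of the naive random-proposal step: every uncoloured edge
    proposes a colour from its palette; an edge keeps its proposal only if
    no adjacent (endpoint-sharing) edge proposed the same colour, and the
    kept colour is then removed from the adjacent edges' palettes."""
    uncoloured = [e for e in edges if e not in colour]
    proposal = {e: random.choice(sorted(palette[e])) for e in uncoloured}
    for e, c in proposal.items():
        adjacent = [f for f in uncoloured if f != e and set(e) & set(f)]
        if all(proposal[f] != c for f in adjacent):
            colour[e] = c
            for f in adjacent:
                palette[f].discard(c)
    return colour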
On the size of Kakeya sets in finite fields
 J. AMS
, 2008
"... Abstract. A Kakeya set is a subset of � n, where � is a finite field of q elements, that contains a line in every direction. In this paper we show that the size of every Kakeya set is at least Cn · q n, where Cn depends only on n. This answers a question of Wolff [Wol99]. 1. ..."
Abstract

Cited by 25 (4 self)
 Add to MetaCart
Abstract. A Kakeya set is a subset of � n, where � is a finite field of q elements, that contains a line in every direction. In this paper we show that the size of every Kakeya set is at least Cn · q n, where Cn depends only on n. This answers a question of Wolff [Wol99]. 1.