A.: Chernoff-Hoeffding bounds for applications with limited independence
SIAM J. Discret. Math., 1995
Approximating Hyper-Rectangles: Learning and Pseudorandom Sets
Journal of Computer and System Sciences, 1997
Cited by 44 (3 self)
Abstract:
The PAC learning of rectangles has been studied because they have been found experimentally to yield excellent hypotheses for several applied learning problems. Also, pseudorandom sets for rectangles have been actively studied recently because (i) they are a subproblem common to the derandomization of depth-2 (DNF) circuits and derandomizing Randomized Logspace, and (ii) they approximate the distribution of n independent multivalued random variables. We present improved upper bounds for a class of such problems of "approximating" high-dimensional rectangles that arise in PAC learning and pseudorandomness. Key words and phrases: rectangles, machine learning, PAC learning, derandomization, pseudorandomness, multiple-instance learning, explicit constructions, Ramsey graphs, random graphs, sample complexity, approximations of distributions.
Efficient Approximation of Product Distributions
In Proceedings of the 24th Annual ACM Symposium on Theory of Computing, 1998
Cited by 23 (2 self)
Abstract:
We describe efficient constructions of small probability spaces that approximate the joint distribution of general random variables. Previous work on efficient constructions concentrates on approximations of the joint distribution for the special case of identical, uniformly distributed random variables. A preliminary version appeared in the Proceedings of the 24th ACM Symp. on Theory of Computing (STOC), pages 10-16, 1992.
Improved Pseudorandom Generators for Combinatorial Rectangles, 1998
Cited by 12 (0 self)
Abstract:
... this paper will have base 2. 2.2. k-wise Independent Hash Function Family. Let n1, n2 be integers. Recall the correspondence between functions and vectors. A family H of functions from [n1] to [n2] is called a (k, ε)-independent hash function family if for any I ⊆ [n1] with |I| ≤ k and for any v ∈ [n2]^I, |Pr_{h∈H}[h|_I = v] − n2^{−|I|}| ≤ ε. A (k, 0)-independent hash function family is called a k-wise independent hash function family.
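The standard way to realize such a family is to evaluate a uniformly random polynomial of degree below k over a prime field. The sketch below is my own illustration of that construction, not code from the paper (all names and parameters are assumptions); it also exhaustively verifies exact pairwise independence for the case n2 = p.

```python
import random
from itertools import product

def make_kwise_hash(k, p, n2, coeffs=None):
    """A classic k-wise independent family [p] -> [n2]: evaluate a
    uniformly random polynomial of degree < k over the prime field
    GF(p), then reduce mod n2.  For n2 = p the family is exactly
    k-wise independent; for n2 < p the reduction mod n2 is only
    approximately uniform, giving (k, eps)-independence."""
    if coeffs is None:
        coeffs = [random.randrange(p) for _ in range(k)]
    def h(x):
        acc = 0
        for c in reversed(coeffs):      # Horner's rule mod p
            acc = (acc * x + c) % p
        return acc % n2
    return h

def exactly_pairwise_independent(p):
    """Exhaustive check for k = 2, n2 = p: over all p^2 polynomials,
    every value pair (h(0), h(1)) occurs exactly once."""
    counts = {}
    for cs in product(range(p), repeat=2):
        h = make_kwise_hash(2, p, p, coeffs=list(cs))
        key = (h(0), h(1))
        counts[key] = counts.get(key, 0) + 1
    return len(counts) == p * p and all(c == 1 for c in counts.values())
```

The exhaustive check works because, for fixed distinct points, the map from coefficient vectors to value vectors is a bijection on GF(p)^2.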
Algorithms for SAT and Upper Bounds on Their Complexity, 2001
Cited by 9 (2 self)
Abstract:
We survey recent algorithms for the propositional satisfiability problem, in particular algorithms that have the best current worst-case upper bounds on their complexity. We also discuss some related issues: the derandomization of the algorithm of Paturi, Pudlák, Saks and Zane, the Valiant-Vazirani Lemma, and random walk algorithms with the "back button".
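One of the best-known random-walk algorithms in this line of work is Schöning's k-SAT procedure: restart from a random assignment and repeatedly flip a random variable of a falsified clause. A minimal sketch under my own encoding (function name, literal convention, and parameters are assumptions, not the survey's code):

```python
import random

def schoening_walk(clauses, n, tries=200, flips=None):
    """Schoening-style random walk for k-SAT (a sketch, not the
    surveyed papers' exact procedures): restart from a uniform random
    assignment; while some clause is falsified, flip a uniformly
    random variable of one falsified clause.  Returns a satisfying
    assignment or None.  Variables are 1..n; -v encodes "not v"."""
    if flips is None:
        flips = 3 * n                  # Schoening's choice of walk length
    def falsified(a):
        return [c for c in clauses
                if not any((l > 0) == a[abs(l)] for l in c)]
    for _ in range(tries):
        a = {v: random.random() < 0.5 for v in range(1, n + 1)}
        for _ in range(flips):
            bad = falsified(a)
            if not bad:
                return a
            # flip a uniformly random variable of a falsified clause
            v = abs(random.choice(random.choice(bad)))
            a[v] = not a[v]
        if not falsified(a):
            return a
    return None
```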
Minimizing Randomness in Minimum Spanning Tree, Parallel Connectivity, and Set Maxima Algorithms
In Proc. 13th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA'02), 2001
Cited by 8 (4 self)
Abstract:
There are several fundamental problems whose deterministic complexity remains unresolved, but for which there exist randomized algorithms whose complexity is equal to known lower bounds. Among such problems are the minimum spanning tree problem, the set maxima problem, the problem of computing connected components and (minimum) spanning trees in parallel, and the problem of performing sensitivity analysis on shortest path trees and minimum spanning trees. However, while each of these problems has a randomized algorithm whose performance meets a known lower bound, all of these randomized algorithms use a number of random bits which is linear in the number of operations they perform. We address the issue of reducing the number of random bits used in these randomized algorithms. For each of the problems listed above, we present randomized algorithms that have optimal performance but use only a polylogarithmic number of random bits; for some of the problems our optimal algorithms use only log n random bits. Our results represent an exponential savings in the amount of randomness used to achieve the same optimal performance as in the earlier algorithms. Our techniques are general and could likely be applied to other problems.
Bounds and constructions for the star-discrepancy via δ-covers
J. Complexity
Cited by 7 (6 self)
Abstract:
For numerical integration in higher dimensions, bounds for the star-discrepancy with polynomial dependence on the dimension d are desirable. Furthermore, it is still a great challenge to give construction methods for low-discrepancy point sets. In this paper we give upper bounds for the star-discrepancy and its inverse for subsets of the d-dimensional unit cube. They improve known results. In particular, we determine the usually only implicitly given constants. The bounds are based on the construction of nearly optimal δ-covers of anchored boxes in the d-dimensional unit cube. We give an explicit construction of low-discrepancy points with a derandomized algorithm. The running time of the algorithm, which is exponential in d, is discussed in detail and comparisons with other methods are given.
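To make the δ-cover idea concrete: restricting attention to anchored boxes whose corners lie on an m-grid yields a crude δ-cover with δ = O(d/m), so checking only those boxes approximates the star-discrepancy. A brute-force sketch (my own illustration with assumed names, not the paper's nearly optimal construction):

```python
from itertools import product

def star_discrepancy_on_grid(points, m):
    """Estimate the star-discrepancy of points in [0,1)^d by testing
    only anchored boxes [0, c1/m) x ... x [0, cd/m) whose corners lie
    on an m-grid -- a crude delta-cover with delta = O(d/m), sketching
    the idea rather than an optimized construction."""
    n, d = len(points), len(points[0])
    worst = 0.0
    for corner in product(range(1, m + 1), repeat=d):
        vol = 1.0
        for c in corner:                 # volume of the anchored box
            vol *= c / m
        inside = sum(all(p[j] < corner[j] / m for j in range(d))
                     for p in points)    # empirical measure of the box
        worst = max(worst, abs(inside / n - vol))
    return worst
```

The loop visits m^d boxes, which reflects the exponential dependence on d discussed in the abstract.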
Deterministic Algorithms for the Lovász Local Lemma, 2010
Cited by 6 (3 self)
Abstract:
The Lovász Local Lemma [5] (LLL) is a powerful result in probability theory that states that the probability that none of a set of bad events happens is nonzero if the probability of each event is small compared to the number of events that depend on it. It is often used in combination with the probabilistic method for nonconstructive existence proofs. A prominent application is to k-CNF formulas, where the LLL implies that if every clause in a formula shares variables with at most d ≤ 2^k/e other clauses then such a formula has a satisfying assignment. Recently, a randomized algorithm to efficiently construct a satisfying assignment was given by Moser [14]. Subsequently Moser and Tardos [15] gave a randomized algorithm to construct the structures guaranteed by the LLL in a general algorithmic framework. We address the main problem left open by Moser and Tardos of derandomizing these algorithms efficiently. Specifically, for a k-CNF formula with m clauses and d ≤ 2^{k/(1+ε)}/e for some ε ∈ (0, 1), we give an algorithm that finds a satisfying assignment in time Õ(m^{2(1+1/ε)}). This improves upon the deterministic algorithms of Moser and of Moser-Tardos with running times m^{Ω(k²)} and m^{Ω(k·1/ε)}, which are superpolynomial for k = ω(1), and upon other previous algorithms which work only for d ≤ 2^{k/16}/e. Our algorithm works efficiently for the asymmetric version of the LLL under the algorithmic framework of Moser and Tardos [15] and is also parallelizable, i.e., has polylogarithmic running time using polynomially many processors.
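The randomized Moser-Tardos algorithm that this paper derandomizes is remarkably simple when specialized to CNF formulas; a hedged sketch (my own encoding and parameter choices, not the authors' code):

```python
import random

def moser_tardos_cnf(clauses, n, max_resamples=100000):
    """Moser-Tardos resampling specialized to CNF (a sketch): start
    from a uniform random assignment; while some clause is falsified,
    resample all variables of one falsified clause uniformly at
    random.  Under the LLL condition (each clause sharing variables
    with at most 2^k/e others) the expected number of resamplings is
    linear in the number of clauses.  Variables are 1..n; a negative
    literal -v means "not v"."""
    a = {v: random.random() < 0.5 for v in range(1, n + 1)}
    def falsified():
        for c in clauses:
            if not any((l > 0) == a[abs(l)] for l in c):
                return c
        return None
    for _ in range(max_resamples):
        c = falsified()
        if c is None:
            return a
        for l in c:                    # resample the violated clause
            a[abs(l)] = random.random() < 0.5
    return None
```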
On k-wise independent distributions and Boolean functions
Cited by 6 (0 self)
Abstract:
We pursue a systematic study of the following problem. Let f: {0, 1}^n → {0, 1} be a (usually monotone) Boolean function whose behaviour is well understood when the input bits are identically independently distributed. What can be said about the behaviour of the function when the input bits are not completely independent, but only k-wise independent, i.e. every subset of k bits is independent? More precisely, how high should k be so that any k-wise independent distribution "fools" the function, i.e. causes it to behave nearly the same as when the bits are completely independent? In this paper, we are mainly interested in asymptotic results about monotone functions which exhibit sharp thresholds, i.e. there is a critical probability, p_c, such that P(f = 1) under the completely independent distribution with marginal p makes a sharp transition, from being close to 0 to being close to 1, in the vicinity of p_c. For such (sequences of) functions we define two notions of "fooling": K1 is the independence needed in order to force the existence of the sharp threshold (which must then be at p_c); K2 is the independence needed to "fool" the function at p_c. In order to answer these questions, we explore the extremal properties of k-wise independent distributions and provide ways of constructing such distributions. These constructions are connected to linear error-correcting codes. We also utilize duality theory and show that for the function f to behave (almost) the same under all k-wise independent inputs is equivalent to the function f being well approximated by a real polynomial in a certain fashion. This type of approximation is stronger than approximation in L1. We analyze several well-known Boolean functions (including AND, Majority, Tribes and Percolation, among others), some of which turn out to have surprising properties with respect to these questions. In some of our results we use tools from the theory of the classical moment problem, seemingly for the first time in this subject, to shed light on these questions.
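A standard example of the phenomenon studied here: two uniform bits together with their parity form a pairwise-independent triple, yet AND on three bits behaves completely differently under this distribution than under full independence. A small self-contained illustration (my own, not from the paper):

```python
from itertools import product

def xor_distribution():
    """Pairwise-independent distribution on 3 bits: uniform seed bits
    x, y together with their parity x XOR y.  Each bit is uniform and
    every pair of bits is independent, but the triple is not uniform."""
    return [(x, y, x ^ y) for x, y in product((0, 1), repeat=2)]

def expectation(f, support):
    """Mean of f over the uniform distribution on the given support."""
    return sum(f(s) for s in support) / len(support)

def and3(b):
    return b[0] & b[1] & b[2]

# Under full independence E[and3] = 1/8, but on the pairwise-
# independent support x & y & (x ^ y) is identically 0: 2-wise
# independence fails to "fool" AND on 3 bits.
```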
Solving some discrepancy problems in NC, 1997
Cited by 4 (0 self)
Abstract:
We show that several discrepancy-like problems can be solved in NC² nearly achieving the corresponding sequential bounds. For example, given a set system (X, S), where X is a ground set and S ⊆ 2^X, a set R ⊆ X can be computed in NC² so that, for each S ∈ S, the discrepancy ||R ∩ S| − |S \ R|| is O(√(|S| log |S|)). Previous NC algorithms could only achieve O(√(|S|^{1+ε} log |S|)), while ours matches the probabilistic bound achieved sequentially by the method of conditional probabilities within a multiplicative factor 1 + o(1). Other problems whose NC solution we improve are lattice approximation, ε-approximations of range spaces of bounded VC-exponent, sampling in geometric configuration spaces, and approximation of integer linear programs. Discrepancy is an important concept in combinatorics, see e.g. [1, 5], and theoretical computer science, see e.g. [27, 23, 9]. It attempts to capture the idea of a good sample from ...
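The sequential method of conditional probabilities mentioned above can be sketched in a few lines: color elements one at a time, always choosing the sign that minimizes a cosh-based pessimistic estimator. This is my own generic sketch (the names and the specific estimator are assumptions, not the paper's NC algorithm); it deterministically guarantees imbalance at most sqrt(2m ln(2K)) for every set, where m is the largest set size and K the number of sets.

```python
import math

def conditional_coloring(sets, universe):
    """Derandomized +-1 coloring via the method of conditional
    probabilities: greedily pick the sign minimizing the pessimistic
    estimator sum_S cosh(lam * imbalance_S).  Deterministically gives
    |imbalance_S| <= sqrt(2 * m * ln(2K)) for every S, where m is the
    maximum set size and K the number of sets."""
    m = max(len(s) for s in sets)
    lam = math.sqrt(2.0 * math.log(2.0 * len(sets)) / m)
    imbalance = [0] * len(sets)
    color = {}
    for x in universe:
        touching = [i for i, s in enumerate(sets) if x in s]
        def phi(sign):
            return sum(math.cosh(lam * (imbalance[i] + sign))
                       for i in touching)
        sign = 1 if phi(+1) <= phi(-1) else -1
        color[x] = sign
        for i in touching:
            imbalance[i] += sign
    return color, [abs(b) for b in imbalance]
```

The guarantee follows because the minimizing choice never increases the estimator by more than the factor cosh(lam) per touched set, so the final estimator is at most K·cosh(lam)^m.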