Results 1–6 of 6
Testing halfspaces
 In Proc. 20th Annual Symposium on Discrete Algorithms (SODA), 2009
Abstract

This paper addresses the problem of testing whether a Boolean-valued function f is a halfspace, i.e. a function of the form f(x) = sgn(w·x − θ). We consider halfspaces over the continuous domain R^n (endowed with the standard multivariate Gaussian distribution) as well as halfspaces over the Boolean cube {−1, 1}^n (endowed with the uniform distribution). In both cases we give an algorithm that distinguishes halfspaces from functions that are ε-far from any halfspace using only poly(1/ε) queries, independent of the dimension n. Two simple structural results about halfspaces are at the heart of our approach for the Gaussian distribution: the first gives an exact relationship between the expected value of a halfspace f and the sum of the squares of f's degree-1 Hermite coefficients, and the second shows that any function that approximately satisfies this relationship is close to a halfspace. We prove analogous results for the Boolean cube {−1, 1}^n (with Fourier coefficients in place of Hermite coefficients) for balanced halfspaces in which all degree-1 Fourier coefficients are small. Dealing with general halfspaces over {−1, 1}^n poses significant additional complications and requires other ingredients. These include "cross-consistency" versions of the results mentioned above for pairs of halfspaces with the same weights but different thresholds; new structural results relating the largest degree-1 Fourier coefficient and the largest weight in unbalanced halfspaces; and algorithmic techniques from recent work on testing juntas [FKR+02].
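The degree-1 Fourier coefficients that these structural results refer to can be estimated by sampling. Below is a minimal sketch for the uniform distribution on {−1, 1}^n; the helper names, the random weight vector, and the sample size are our illustrative choices, not the paper's actual testing algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def halfspace(w, theta):
    """f(x) = sgn(w·x - theta), as a ±1-valued function on rows of X."""
    return lambda X: np.where(X @ w - theta >= 0, 1.0, -1.0)

def degree1_fourier(f, n, samples=200_000):
    """Monte Carlo estimate of the degree-1 Fourier coefficients
    f^(i) = E[f(x) * x_i] under the uniform distribution on {-1,1}^n."""
    X = rng.choice([-1.0, 1.0], size=(samples, n))
    y = f(X)
    return (X * y[:, None]).mean(axis=0)

n = 20
w = rng.standard_normal(n)   # arbitrary weight vector for illustration
f = halfspace(w, 0.0)        # a balanced halfspace (theta = 0)

coeffs = degree1_fourier(f, n)
total = float((coeffs ** 2).sum())
print(total)                 # sum of squared degree-1 coefficients
```

By Parseval the printed sum is at most 1; for a balanced, reasonably regular halfspace it lands near 2/π ≈ 0.64, which is the kind of relationship between E[f] and the degree-1 weight that the tester exploits.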
Improved approximation of linear threshold functions
 In Proc. 24th Annual IEEE Conference on Computational Complexity (CCC), 2009
Abstract

We prove two main results on how arbitrary linear threshold functions f(x) = sign(w·x − θ) over the n-dimensional Boolean hypercube can be approximated by simple threshold functions. Our first result shows that every n-variable threshold function f is ε-close to a threshold function depending only on Inf(f)^2 · poly(1/ε) many variables, where Inf(f) denotes the total influence or average sensitivity of f. This is an exponential sharpening of Friedgut's well-known theorem [Fri98], which states that every Boolean function f is ε-close to a function depending only on 2^(O(Inf(f)/ε)) many variables, for the case of threshold functions. We complement this upper bound by showing that Ω(Inf(f)^2 + 1/ε^2) many variables are required for ε-approximating threshold functions. Our second result is a proof that every n-variable threshold function is ε-close to a threshold function with integer weights at most poly(n) · 2^(Õ(1/ε^(2/3))). This is an improvement, in the dependence on the error parameter ε, on an earlier result of [Ser07] which gave a poly(n) · 2^(Õ(1/ε^2)) bound. Our improvement is obtained via a new proof technique that uses strong anti-concentration bounds from probability theory. The new technique also gives a simple and modular proof of the original [Ser07] result, and extends to give low-weight approximators for threshold functions under a range of probability distributions other than the uniform distribution.
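The low-weight approximation question can be felt with a naive experiment: rescale the real weights, round to integers, and measure the empirical disagreement. This sketch is not the paper's construction (which needs anti-concentration bounds to reach the 2^(Õ(1/ε^(2/3))) weight bound); the seed, dimension, and rounding scale are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 15
w = rng.standard_normal(n)   # arbitrary real weights for illustration
theta = 0.0

# Crude low-weight approximator: rescale so the largest weight is 50,
# then round every weight to the nearest integer.  A larger scale gives
# a better approximation at the cost of heavier integer weights.
scale = 50.0
w_int = np.round(scale * w / np.abs(w).max())

X = rng.choice([-1.0, 1.0], size=(100_000, n))
f = np.where(X @ w - theta >= 0, 1, -1)
g = np.where(X @ w_int - scale * theta / np.abs(w).max() >= 0, 1, -1)

eps = float((f != g).mean())  # empirical distance between f and g
print(eps)
```

The disagreement comes entirely from points x where w·x falls within the rounding error of the threshold, which is why anti-concentration of w·x is the natural tool for bounding it.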
Approximation Algorithms for Stochastic Boolean Function Evaluation and Stochastic Submodular Set Cover
 In Proc. of SODA, 2014
THE CHOW PARAMETERS PROBLEM
Abstract
... function is uniquely determined by its degree-0 and degree-1 Fourier coefficients. These numbers became known as the Chow Parameters. Providing an algorithmic version of Chow's Theorem (i.e., efficiently constructing a representation of a threshold function given its Chow Parameters) has remained open ever since. This problem has received significant study in the fields of circuit complexity, game theory and the design of voting systems, and learning theory. In this paper we effectively solve the problem, giving a randomized PTAS with the following behavior: given the Chow Parameters of a Boolean threshold function f over n bits and any constant ε > 0, the algorithm runs in time O(n^2 log^2 n) and with high probability outputs a representation of a threshold function f′ which is ε-close to f. Along the way we prove several new results of independent interest about Boolean threshold functions. In addition to various structural results, these include Õ(n^2)-time learning algorithms for threshold functions under the uniform distribution in the following models: (i) the Restricted Focus of Attention model, answering an open question of Birkendorf et al.; (ii) an agnostic-type model, which contrasts with recent results of Guruswami and Raghavendra, who show NP-hardness for the problem under general distributions; (iii) the PAC model, with constant ε. Our Õ(n^2)-time algorithm substantially improves on the previous best known running time and nearly matches the Ω(n^2) bits of training data that any successful learning algorithm must use. Key words: Chow Parameters, threshold functions, approximation, learning theory.
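Chow's Theorem says that these degree-0 and degree-1 coefficients, E[f] and E[f·x_i], pin down a threshold function uniquely among all threshold functions. A minimal sketch of computing them exactly for a toy function (the function name and the example weights are ours, not from the paper; brute-force enumeration is only feasible for small n):

```python
import itertools
import numpy as np

def chow_parameters(w, theta):
    """Exact Chow parameters (E[f], E[f*x_1], ..., E[f*x_n]) of the
    threshold function f(x) = sign(w·x - theta), computed by brute-force
    enumeration of the 2^n points of {-1,1}^n."""
    n = len(w)
    acc = np.zeros(n + 1)
    for x in itertools.product([-1, 1], repeat=n):
        x = np.array(x, dtype=float)
        fx = 1.0 if np.dot(w, x) - theta >= 0 else -1.0
        acc[0] += fx          # degree-0 coefficient E[f]
        acc[1:] += fx * x     # degree-1 coefficients E[f * x_i]
    return acc / 2 ** n

# A small illustrative threshold function: f(x) = sign(3x1 + 2x2 + x3).
params = chow_parameters([3.0, 2.0, 1.0], 0.0)
print(params)  # [E[f], E[f*x1], E[f*x2], E[f*x3]]
```

The algorithmic Chow Parameters problem runs in the opposite direction: given (an approximation of) the vector this function prints, recover weights and a threshold for some ε-close threshold function.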
Predicting Application Memory Allocation Behavior
, 2005
Abstract
Computer architecture has always been rapidly changing. New architectures are invented, techniques are refined, and new problems need to be addressed. Memory latency has recently been one of the biggest hurdles to advancing the throughput of computer processors. Processor architecture technologies continue to advance, but the ...