## Every linear threshold function has a low-weight approximator (2006)

Venue: Proceedings of the 21st Conference on Computational Complexity (CCC 2006)

Citations: 20 (7 self)

### BibTeX

@INPROCEEDINGS{Servedio06everylinear,
  author    = {Rocco A. Servedio},
  title     = {Every linear threshold function has a low-weight approximator},
  booktitle = {Proceedings of the 21st Conference on Computational Complexity (CCC)},
  year      = {2006},
  pages     = {18--30}
}

### Abstract

Given any linear threshold function f on n Boolean variables, we construct a linear threshold function g which disagrees with f on at most an ε fraction of inputs and has integer weights each of magnitude at most √n · 2^{Õ(1/ε²)}. We show that the construction is optimal in terms of its dependence on n by proving a lower bound of Ω(√n) on the weights required to approximate a particular linear threshold function. We give two applications. The first is a deterministic algorithm for approximately counting the fraction of satisfying assignments to an instance of the zero-one knapsack problem to within an additive ±ε. The algorithm runs in time polynomial in n (but exponential in 1/ε²). In our second application, we show that any linear threshold function f is specified to within error ε by estimates of its Chow parameters (degree-0 and degree-1 Fourier coefficients) which are accurate to within an additive ±1/(n · 2^{Õ(1/ε²)}). This is the first such accuracy bound which is inverse polynomial in n (previous work of Goldberg [12] gave a 1/quasipoly(n) bound), and gives the first polynomial bound (in terms of n) on the number of examples required for learning linear threshold functions in the "restricted focus of attention" framework.
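The notion of approximation in the abstract — a surrogate threshold function g with small integer weights that disagrees with f on only a small fraction of inputs — can be illustrated with a naive weight-rounding experiment. This sketch is purely illustrative (the weights, thresholds, and rounding scale are invented for the example); it is not the paper's construction, which achieves its √n · 2^{Õ(1/ε²)} weight bound by a more careful argument rather than simple rounding.

```python
from itertools import product

def ltf(w, theta):
    """Return the linear threshold function x -> 1 if w·x >= theta, else -1."""
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else -1

def disagreement(f, g, n):
    """Fraction of the 2^n inputs in {-1,1}^n on which f and g differ."""
    pts = list(product((-1, 1), repeat=n))
    return sum(f(x) != g(x) for x in pts) / len(pts)

# An arbitrary real-weight LTF on 5 variables (hypothetical example values).
w = [3.7, 1.2, 0.9, 2.8, 0.4]
theta = 1.1
f = ltf(w, theta)

# Crude low-weight surrogate: round weights and threshold to small integers.
# (A hypothetical rounding scale; the paper's construction is more subtle.)
scale = 1.0
w_int = [round(wi / scale) for wi in w]
theta_int = round(theta / scale)
g = ltf(w_int, theta_int)

# Exhaustively measure the disagreement fraction (feasible for small n).
eps = disagreement(f, g, len(w))
print(eps)  # small disagreement fraction; g uses only integer weights <= 4
```

For this toy instance the rounded function g agrees with f on most of the 32 inputs, which is the sense in which g "ε-approximates" f; the paper's contribution is showing such a g always exists with integer weights of magnitude only √n · 2^{Õ(1/ε²)}, independent of how large the original weights were.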