Results 1 - 4 of 4
A bound on the precision required to estimate a boolean perceptron from its average satisfying assignment
 SIAM Journal on Discrete Mathematics
, 2006
Abstract

Cited by 9 (0 self)
A boolean perceptron is a linear threshold function over the discrete boolean domain {0, 1}^n. That is, it maps any binary vector to 0 or 1 depending on whether the vector's components satisfy some linear inequality. In 1961, Chow showed that any boolean perceptron is determined by the average or "center of gravity" of its "true" vectors (those that are mapped to 1), together with the total number of true vectors. Moreover, these quantities distinguish the function from any other boolean function, not just other boolean perceptrons. In this paper we go further, by identifying a lower bound on the Euclidean distance between the average satisfying assignment of a boolean perceptron and the average satisfying assignment of a boolean function that disagrees with that boolean perceptron on a fraction ε of the input vectors. The distance between the two means is shown to be at least (ε/n)^{O(log(n/ε) log(1/ε))}. This is motivated by the statistical question of whether an empirical estimate of this average allows us to recover a good approximation to the perceptron. Our result provides a mildly superpolynomial upper bound on the growth rate of the sample size required to learn boolean perceptrons in the "restricted focus of attention" setting. In the process we also find some interesting geometrical properties of the vertices of the unit hypercube.
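Chow's uniqueness result described in the abstract is easy to check by brute force for small n. The sketch below (weight vectors and thresholds are hypothetical, chosen purely for illustration) computes the "Chow parameters" -- the number of true vectors together with their center of gravity -- for two distinct perceptrons over {0, 1}^3:

```python
from itertools import product

def perceptron(w, theta):
    """The boolean perceptron x -> [w . x >= theta] over {0,1}^n."""
    return lambda x: int(sum(wi * xi for wi, xi in zip(w, x)) >= theta)

def chow_parameters(f, n):
    """Number of true vectors and their average ("center of gravity")."""
    true_vecs = [x for x in product((0, 1), repeat=n) if f(x) == 1]
    k = len(true_vecs)
    mean = tuple(sum(col) / k for col in zip(*true_vecs)) if k else None
    return k, mean

# Two distinct perceptrons yield distinct Chow parameters,
# illustrating Chow's 1961 uniqueness result.
f = perceptron((1, 1, 1), 2)   # majority of three bits
g = perceptron((2, 1, 1), 2)   # first bit weighted more heavily
print(chow_parameters(f, 3))   # (4, (0.75, 0.75, 0.75))
print(chow_parameters(g, 3))   # (5, (0.8, 0.6, 0.6))
```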
Learning from Aggregate Views
Abstract

Cited by 6 (1 self)
In this paper, we introduce a new class of data mining problems called learning from aggregate views. In contrast to the traditional problem of learning from a single table of training examples, the new goal is to learn from multiple aggregate views of the underlying data, without access to the unaggregated data. We motivate this new problem, present a general problem framework, develop learning methods for RFA (Restriction-Free Aggregate) views defined using COUNT, SUM, AVG and STDEV, and offer theoretical and experimental results that characterize the proposed methods.
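As a concrete illustration of the kind of view the learner receives, the sketch below (the toy data and single-column schema are assumptions, not from the paper) computes the four aggregates named in the abstract over one column; the learner would see only these summaries, never the rows:

```python
import statistics

# Toy table: rows of (feature, label); hypothetical data for illustration.
rows = [(1.0, 0), (2.0, 1), (3.0, 1), (4.0, 0), (5.0, 1)]

def aggregate_view(rows, col):
    """A restriction-free aggregate (RFA) view of a single column."""
    vals = [r[col] for r in rows]
    return {
        "COUNT": len(vals),
        "SUM": sum(vals),
        "AVG": statistics.mean(vals),
        "STDEV": statistics.pstdev(vals),  # population stdev; this choice is an assumption
    }

view = aggregate_view(rows, 0)
print(view["COUNT"], view["SUM"], view["AVG"])  # 5 15.0 3.0
```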
Estimating a Boolean Perceptron from its Average Satisfying Assignment: A Bound on the Precision Required
 In Proceedings of the Fourteenth Annual Conference on Computational Learning Theory
, 2001
Abstract

Cited by 3 (0 self)
A boolean perceptron is a linear threshold function over the discrete boolean domain {0, 1}^n. That is, it maps any binary vector to 0 or 1 depending on whether the vector's components satisfy some linear inequality. In 1961, Chow [9] showed that any boolean perceptron is determined by the average or "center of gravity" of its "true" vectors (those that are mapped to 1). Moreover, this average distinguishes the function from any other boolean function, not just other boolean perceptrons. We address an associated statistical question of whether an empirical estimate of this average is likely to provide a good approximation to the perceptron. In this paper we show that an estimate that is accurate to within additive error (ε/n)^{O(log(1/ε))} determines a boolean perceptron that is accurate to within error ε (the fraction of misclassified vectors). This provides a mildly superpolynomial bound on the sample complexity of learning boolean perceptrons in the "restricted focus of attention" setting. In the process we also find some interesting geometrical properties of the vertices of the unit hypercube.
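The statistical question addressed here can be simulated directly: draw uniform random vectors, keep the satisfying ones, and average them to get an empirical estimate of the center of gravity. A minimal sketch, with an arbitrary majority-style threshold standing in for the unknown perceptron (all parameters are illustrative assumptions):

```python
import random
from statistics import mean

random.seed(0)

def perceptron(w, theta):
    """The boolean perceptron x -> [w . x >= theta]."""
    return lambda x: int(sum(wi * xi for wi, xi in zip(w, x)) >= theta)

n = 10
f = perceptron([1] * n, n // 2)  # majority-style threshold; an illustrative choice

# Draw uniform samples from {0,1}^n and average the satisfying assignments;
# this is the empirical estimate of the "center of gravity" of the true vectors.
samples = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(20000)]
true_samples = [x for x in samples if f(x) == 1]
estimate = [mean(col) for col in zip(*true_samples)]
print([round(e, 2) for e in estimate])  # each coordinate biased above 1/2
```

Conditioning on "at least half the bits are 1" biases every coordinate's mean above 1/2, which is why the estimate carries information about the threshold.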
Learning Fixed-Dimension Linear Thresholds From Fragmented Data
 In Proceedings of the 1999 Conference on Computational Learning Theory
, 1999
Abstract

Cited by 2 (2 self)
We investigate PAC-learning in a situation in which examples (consisting of an input vector and 0/1 label) have some of the components of the input vector concealed from the learner. This is a special case of Restricted Focus of Attention (RFA) learning. Our interest here is in 1-RFA learning, where only a single component of an input vector is given, for each example. We argue that 1-RFA learning merits special consideration within the wider field of RFA learning. It is the most restrictive form of RFA learning (so that positive results apply in general), and it models a typical "data fusion" scenario, where we have sets of observations from a number of separate sensors, but these sensors are uncorrelated sources. Within this setting we study the well-known class of linear threshold functions, the characteristic functions of Euclidean halfspaces. The sample complexity (i.e. sample-size requirement as a function of the parameters) of this learning problem is affected by the input distri...
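The 1-RFA setting described above can be sketched as follows: each example reveals only one coordinate of the input, together with the label, and the learner works from per-coordinate statistics alone. The threshold function and sample sizes here are illustrative assumptions, not taken from the paper:

```python
import random

random.seed(1)

def rfa_example(f, n):
    """Draw a 1-RFA example: a random coordinate i is chosen and the learner
    sees only (i, x_i, f(x)) for a random x -- the rest of x stays hidden."""
    x = tuple(random.randint(0, 1) for _ in range(n))
    i = random.randrange(n)
    return i, x[i], f(x)

n = 5
f = lambda x: int(sum(x) >= 3)  # an illustrative linear threshold function

# Estimate P(label = 1 | x_i = 1) per coordinate from 1-RFA examples alone.
counts = [[0, 0] for _ in range(n)]  # per coordinate: [seen x_i = 1, of those labelled 1]
for _ in range(50000):
    i, xi, y = rfa_example(f, n)
    if xi == 1:
        counts[i][0] += 1
        counts[i][1] += y
print([round(c1 / c0, 2) for c0, c1 in counts])  # roughly equal across coordinates
```

For this symmetric threshold every coordinate carries the same conditional statistic; for an asymmetric weight vector these per-coordinate estimates would differ, which is what a 1-RFA learner can exploit.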