Efficient Distribution-free Learning of Probabilistic Concepts
Journal of Computer and System Sciences, 1993
Abstract
Cited by 199 (8 self)
In this paper we investigate a new formal model of machine learning in which the concept (boolean function) to be learned may exhibit uncertain or probabilistic behavior; thus, the same input may sometimes be classified as a positive example and sometimes as a negative example. Such probabilistic concepts (or p-concepts) may arise in situations such as weather prediction, where the measured variables and their accuracy are insufficient to determine the outcome with certainty. We adopt from the Valiant model of learning [27] the demands that learning algorithms be efficient and general in the sense that they perform well for a wide class of p-concepts and for any distribution over the domain. In addition to giving many efficient algorithms for learning natural classes of p-concepts, we study and develop in detail an underlying theory of learning p-concepts.
1 Introduction
Consider the following scenarios: A meteorologist is attempting to predict tomorrow's weather as accurately as pos...
Learning Decision Trees using the Fourier Spectrum
1991
Abstract
Cited by 189 (10 self)
This work gives a polynomial time algorithm for learning decision trees with respect to the uniform distribution. (This algorithm uses membership queries.) The decision tree model that is considered is an extension of the traditional boolean decision tree model that allows linear operations in each node (i.e., summation of a subset of the input variables over GF(2)). This paper shows how to learn in polynomial time any function that can be approximated (in the L_2 norm) by a polynomially sparse function (i.e., a function with only polynomially many nonzero Fourier coefficients). The authors demonstrate that any function f whose L_1 norm (i.e., the sum of the absolute values of the Fourier coefficients) is polynomial can be approximated by a polynomially sparse function, and prove that boolean decision trees with linear operations are a subset of this class of functions. Moreover, it is shown that the functions with polynomial L_1 norm can be learned deterministically. The algorithm can a...
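The estimation primitive behind such Fourier-based learners can be sketched in a few lines. This is only the sampling step under the uniform distribution, not the paper's membership-query search for all large coefficients; the example function, sample count, and names below are illustrative choices.

```python
import random

def chi(S, x):
    # Parity character chi_S(x) = (-1)^(sum of x[i] for i in S), for x in {0,1}^n.
    return -1 if sum(x[i] for i in S) % 2 else 1

def estimate_coefficient(f, S, n, samples=20000, seed=0):
    # Under the uniform distribution, hat f(S) = E_x[f(x) * chi_S(x)];
    # estimate it as an empirical average over uniform random inputs.
    rng = random.Random(seed)
    total = 0
    for _ in range(samples):
        x = tuple(rng.randint(0, 1) for _ in range(n))
        total += f(x) * chi(S, x)
    return total / samples

# Example: f is the parity of bits 0 and 2 (as a +/-1-valued function),
# so hat f({0, 2}) = 1 and every other coefficient is 0.
f = lambda x: chi((0, 2), x)
print(estimate_coefficient(f, (0, 2), n=4))  # exactly 1.0
print(estimate_coefficient(f, (0, 1), n=4))  # close to 0
```

By a standard Chernoff bound, O(log(1/delta) / eps^2) samples suffice to estimate any single coefficient to within eps with probability 1 - delta, which is what makes this primitive usable inside an efficient learner.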
On the Fourier Spectrum of Monotone Functions
1996
Abstract
Cited by 51 (2 self)
In this paper, monotone Boolean functions are studied using harmonic analysis on the cube.
An O(n^(log log n)) Learning Algorithm for DNF under the Uniform Distribution
Journal of Computer and System Sciences, 1998
Abstract
Cited by 48 (1 self)
We show that a DNF with terms of size at most d can be approximated by a function with at most d^(O(d log 1/ε)) nonzero Fourier coefficients such that the expected squared error, with respect to the uniform distribution, is at most ε. This property is used to derive a learning algorithm for DNF under the uniform distribution. The learning algorithm uses queries and learns, with respect to the uniform distribution, a DNF with terms of size at most d in time polynomial in n and d^(O(d log 1/ε)). The interesting implications are for the case when ε is constant. In this case our algorithm learns a DNF with a polynomial number of terms in time n^(O(log log n)), and a DNF with terms of size at most O(log n / log log n) in polynomial time.
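The sparse-approximation idea can be checked numerically on a toy example: by Parseval's identity, keeping only the largest Fourier coefficients of a small DNF leaves squared error exactly equal to the discarded squared mass. The two-term DNF below and the cut-off of six coefficients are illustrative choices, not taken from the paper.

```python
from itertools import combinations, product

def walsh_coeffs(f, n):
    # hat f(S) = 2^-n * sum_x f(x) * chi_S(x) over the +/-1 cube,
    # where chi_S(x) is the product of x[i] for i in S.
    coeffs = {}
    for r in range(n + 1):
        for S in combinations(range(n), r):
            total = 0
            for x in product((-1, 1), repeat=n):
                chi = 1
                for i in S:
                    chi *= x[i]
                total += f(x) * chi
            coeffs[S] = total / 2 ** n
    return coeffs

# Toy two-term DNF (x0 AND x1) OR (x2 AND x3), encoded as a +/-1-valued
# function: bit i is "true" when x[i] == 1, output True -> 1, False -> -1.
def dnf(x):
    return 1 if (x[0] == 1 and x[1] == 1) or (x[2] == 1 and x[3] == 1) else -1

c = walsh_coeffs(dnf, 4)
sq = sorted((v * v for v in c.values()), reverse=True)
# By Parseval, the total squared mass is 1; the tail after the 6 largest
# coefficients is the squared error of the best 6-sparse approximation.
print(sum(sq), sum(sq[6:]))  # prints: 1.0 0.15625
```

Here six of the sixteen coefficients carry about 84% of the squared mass, so truncating there gives expected squared error 0.15625 under the uniform distribution, matching the tail mass exactly.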
Faithful Representations and Moments of Satisfaction: Probabilistic Methods in Learning and Logic
1998
Learning Boolean Functions
Theoretical Advances in Neural Computation and Learning, 1994
Abstract
We survey learning algorithms that are based on the Fourier Transform representation. In many cases we simplify the original proofs and integrate the proofs of related results. We hope that this will give the reader a complete and comprehensive understanding of both the results and the techniques.
1 Introduction
The importance of using the "right" representation of a function in order to "approximate" it has been widely recognized. The Fourier Transform representation of a function is a classic representation which is widely used to approximate real functions (i.e., functions whose inputs are real numbers). However, the Fourier Transform representation for functions whose inputs are boolean has been far less studied. On the other hand, it seems that the Fourier Transform representation can be used to learn many classes of boolean functions. At this point it is worthwhile to say a few words about the Fourier Transform of functions whose inputs are boolean. The basis functions are ...
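As a concrete illustration of that basis (assuming the common +/-1 encoding of boolean values, which the excerpt does not spell out): the characters chi_S(x) = prod_{i in S} x[i] form an orthonormal basis under the uniform distribution, and the coefficients of a small function such as 3-bit majority can be computed by direct enumeration. The majority example is an illustrative choice.

```python
from itertools import combinations, product

def fourier_coefficients(f, n):
    # hat f(S) = 2^-n * sum_x f(x) * chi_S(x), inputs encoded in {-1, +1}^n,
    # where chi_S(x) is the product of x[i] for i in S (chi of the empty set is 1).
    coeffs = {}
    for r in range(n + 1):
        for S in combinations(range(n), r):
            total = 0
            for x in product((-1, 1), repeat=n):
                chi = 1
                for i in S:
                    chi *= x[i]
                total += f(x) * chi
            coeffs[S] = total / 2 ** n
    return coeffs

# 3-bit majority as a +/-1-valued function.
maj = lambda x: 1 if sum(x) > 0 else -1
c = fourier_coefficients(maj, 3)
print(c[(0,)], c[(0, 1, 2)])           # prints: 0.5 -0.5
print(sum(v * v for v in c.values()))  # Parseval for a +/-1 function: prints 1.0
```

Each singleton coefficient of majority is 1/2 and the top coefficient is -1/2; by Parseval, the squared coefficients of any +/-1-valued function sum to exactly 1, which is what makes approximation by a few large coefficients meaningful.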
A Faster Algorithm for Learning Decision Trees With Module Nodes
Abstract
One of the challenging problems regarding learning decision trees is whether decision trees over the domain Z_N^n, in which each internal node is labeled by a module function ax_i (mod N) for some i ∈ {1,...,n}, are efficiently learnable with equivalence and membership queries [5]. Given any decision tree T, let s be the number of its leaf nodes. Let N = p_1^(t_1)...p_r^(t_r) be the prime decomposition of N, and let τ(N) = (t_1 + 1)...(t_r + 1). We show that when all the module functions in T have the same coefficient, there is a polynomial time algorithm for learning T using at most s(s + 1)τ(N) equivalence queries and at most s^2·n·τ(N) membership queries. Our algorithm is substantially more efficient than the algorithm designed in [7] for learning such decision trees. When N is a prime, our algorithm implies the best-known algorithm in [4] for learning decision trees over a finite alphabet.