Results 1–10 of 59
How Many Queries are Needed to Learn?
, 1996
Abstract

Cited by 69 (9 self)
We investigate the query complexity of exact learning in the membership and (proper) equivalence query model. We give a complete characterization of the concept classes that are learnable with a polynomial number of polynomial-sized queries in this model. We give applications of this characterization, including results on learning a natural subclass of DNF formulas and on learning with membership queries alone. Query complexity has previously been used to prove lower bounds on the time complexity of exact learning. We show a new relationship between query complexity and time complexity in exact learning: if any "honest" class is exactly and properly learnable with polynomial query complexity, but not learnable in polynomial time, then P ≠ NP. In particular, we show that an honest class is exactly polynomial-query learnable if and only if it is learnable using an oracle for Σ^p_4.
On learning monotone DNF under product distributions
 In Proceedings of the Fourteenth Annual Conference on Computational Learning Theory
, 2001
Abstract

Cited by 32 (11 self)
We show that the class of monotone 2^{O(√log n)}-term DNF formulae can be PAC learned in polynomial time under the uniform distribution from random examples only. This is an exponential improvement over the best previous polynomial-time algorithms in this model, which could learn monotone o(log² n)-term DNF. We also show that various classes of small constant-depth circuits which compute monotone functions are PAC learnable in polynomial time under the uniform distribution. All of our results extend to learning under any constant-bounded product distribution.
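The uniform-distribution Fourier machinery behind results like these can be pictured with a minimal sketch: estimating a single Fourier coefficient f̂(S) = E[f(x)·χ_S(x)] from uniform random examples. This is a toy illustration of the basic primitive, not the paper's algorithm; the function `f` and all parameters below are invented.

```python
import random

def chi(S, x):
    """Parity character chi_S(x) = (-1)^(sum of x[i] for i in S)."""
    return -1 if sum(x[i] for i in S) % 2 else 1

def estimate_coefficient(f, S, n, samples=20000, seed=0):
    """Estimate f_hat(S) = E_x[f(x) * chi_S(x)] by averaging over
    uniform random examples of {0,1}^n."""
    rng = random.Random(seed)
    total = 0
    for _ in range(samples):
        x = [rng.randint(0, 1) for _ in range(n)]
        total += f(x) * chi(S, x)
    return total / samples

# Toy target: f is the parity chi_{0,1} itself, so its Fourier spectrum
# puts weight 1 on S = {0,1} and 0 everywhere else.
f = lambda x: chi({0, 1}, x)
exact = estimate_coefficient(f, {0, 1}, n=5)   # every sample contributes +1
noisy = estimate_coefficient(f, {0}, n=5)      # sample mean concentrates near 0
```

With 20000 samples the empirical mean of a ±1 variable is within a few hundredths of its expectation with overwhelming probability, which is why such estimates suffice for PAC learning under the uniform distribution.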
Learning From a Consistently Ignorant Teacher
, 1994
Abstract

Cited by 24 (8 self)
One view of computational learning theory is that of a learner acquiring the knowledge of a teacher. We introduce a formal model of learning capturing the idea that teachers may have gaps in their knowledge. In particular, we consider learning from a teacher who labels examples "+" (a positive instance of the concept being learned), "−" (a negative instance of the concept being learned), and "?" (an instance with unknown classification), in such a way that knowledge of the concept class and of all the positive and negative examples is not sufficient to determine the labelling of any of the examples labelled "?". The goal of the learner is not to compensate for the ignorance of the teacher by attempting to infer "+" or "−" labels for the examples labelled "?", but rather to learn (an approximation to) the ternary labelling presented by the teacher. Thus, the goal of the learner is still to acquire the knowledge of the teacher, but now the learner must also ...
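A minimal sketch of such a consistent ternary labelling (over a tiny invented concept class; all names and examples here are made up): an example receives "?" exactly when the concepts consistent with the known "+"/"−" examples disagree on it, so the class and the labelled examples genuinely cannot resolve it.

```python
from itertools import product

# Toy concept class over {0,1}^2: "first bit is 1" and "both bits are 1".
concepts = [lambda x: x[0], lambda x: x[0] and x[1]]

# Examples the teacher has labelled definitively.
known = {(1, 1): "+", (0, 0): "-"}

# Concepts consistent with every known label.
consistent = [c for c in concepts
              if all((l == "+") == bool(c(x)) for x, l in known.items())]

def teacher_label(x):
    """'+' / '-' when all consistent concepts agree on x, '?' otherwise."""
    vals = {bool(c(x)) for c in consistent}
    if vals == {True}:
        return "+"
    if vals == {False}:
        return "-"
    return "?"

labels = {x: teacher_label(x) for x in product((0, 1), repeat=2)}
```

Here both concepts fit the known labels, they disagree only on (1, 0), and that is the one example a consistently ignorant teacher labels "?".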
Simple Learning Algorithms for Decision Trees and Multivariate Polynomials
 In Proceedings of the 36th IEEE Symposium on the Foundations of Computer Science
, 1995
Abstract

Cited by 17 (4 self)
There were two techniques in the literature for learning multivariate polynomials and decision trees: learning decision trees under the uniform distribution via the Fourier spectrum [Kushilevitz and Mansour 93; Jackson 94], and learning decision trees and multivariate polynomials under any distribution via lattice theory [Bshouty 94; Schapire and Sellie 93]. These two approaches are used to prove the learnability of many other interesting classes, such as CDNF (poly-size DNF and CNF) under any distribution, and DNF and AC^0 under the uniform distribution.
Redescription mining: Structure theory and algorithms
 In AAAI
, 2005
Abstract

Cited by 16 (5 self)
We introduce a new data mining problem, redescription mining, that unifies considerations of conceptual clustering, constructive induction, and logical formula discovery. Redescription mining begins with a collection of sets, views it as a propositional vocabulary, and identifies clusters of data that can be defined in at least two ways using this vocabulary. The primary contributions of this paper are conceptual and theoretical: (i) we formally study the space of redescriptions underlying a dataset and characterize their intrinsic structure, (ii) we identify impossibility as well as strong possibility results about when mining redescriptions is feasible, (iii) we present several scenarios of how we can custom-build redescription mining solutions for various biases, and (iv) we outline how many problems studied in the larger machine learning community are really special cases of redescription mining. By highlighting its broad scope and relevance, we aim to establish the importance of redescription mining and make the case for a thrust in this new line of research.
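As a concrete miniature: a redescription is a pair of expressions over different descriptors that define the same subset of objects. The descriptors and extensions below are invented purely for illustration.

```python
# Objects 1..6 with three (invented) descriptors, each given by its extension.
mammal  = {1, 2, 3, 4}
aquatic = {3, 4, 5}
has_fur = {1, 2}

# "mammal AND NOT aquatic" and "has_fur" pick out the same objects,
# so the pair forms a redescription of the cluster {1, 2}.
lhs = mammal - aquatic
rhs = has_fur
is_redescription = (lhs == rhs)
```

Redescription mining searches for exactly such pairs of set-theoretic (propositional) expressions with a common extension, typically up to an approximate-match threshold.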
Learning a Circuit by Injecting Values
, 2005
Abstract

Cited by 13 (5 self)
We propose a new model for exact learning of acyclic circuits using experiments in which chosen values may be assigned to an arbitrary subset of wires internal to the circuit, but only the value of the circuit's single output wire may be observed. We give polynomial-time algorithms to learn (1) arbitrary circuits with logarithmic depth and constant fan-in and (2) Boolean circuits of constant depth and unbounded fan-in over AND, OR, and NOT gates. Thus, both AC^0 and NC^1 circuits are learnable in polynomial time in this model. Negative results show that some restrictions on depth, fan-in, and gate types are necessary: exponentially many experiments ...
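The experiment model can be sketched on a hypothetical three-gate circuit: the learner forces chosen internal wires to fixed values and observes only the single output wire. The circuit and wire names below are invented for illustration, not taken from the paper.

```python
# Toy value-injection experiment: injections override internal wire values,
# and only the output wire's value is returned to the learner.
def circuit(inputs, injections=None):
    inj = injections or {}
    w1 = inj.get("w1", inputs["a"] and inputs["b"])  # AND gate
    w2 = inj.get("w2", not inputs["c"])              # NOT gate
    return inj.get("out", w1 or w2)                  # OR gate -> output wire

# Without injection, c=False makes w2=True, which masks w1 at the OR output;
# forcing w2=False lets the experiment read w1 through the output wire.
masked   = circuit({"a": True, "b": False, "c": False})
isolated = circuit({"a": True, "b": False, "c": False}, {"w2": False})
```

The point of the model is exactly this kind of experiment design: choosing injections that make the behavior of an individual hidden gate visible at the lone observable wire.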
BLOSOM: A Framework for Mining Arbitrary Boolean Expressions over Attribute Sets
 In Proceedings of the 12th International Conference on Knowledge Discovery and Data Mining (KDD 2006)
, 2006
Abstract

Cited by 12 (5 self)
We introduce a novel framework (BLOSOM) for mining (frequent) boolean expressions over binary-valued datasets. We organize the space of boolean expressions into four categories: pure conjunctions, pure disjunctions, conjunctions of disjunctions, and disjunctions of conjunctions. For each category, we propose a closure operator that naturally leads to the concept of a closed boolean expression. The closed expressions and their minimal generators give the most specific and most general boolean expressions that are satisfied by their corresponding object set. Further, the closed/minimal-generator expressions form a lossless representation of all possible boolean expressions. BLOSOM efficiently ...
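For the pure-conjunction category, the closure idea can be sketched on a tiny invented binary dataset: the closure of an itemset is the intersection of all rows that contain it, i.e. the most specific conjunction satisfied by the same set of objects. This is the standard closed-itemset construction, shown here only as an illustration of the concept.

```python
# Rows of a tiny binary dataset, each given as its set of true attributes.
rows = [{"A", "B", "C"}, {"A", "B"}, {"B", "C"}]

def close(items):
    """Closure of a conjunction: intersect all rows supporting it."""
    support = [r for r in rows if items <= r]
    return set.intersection(*support) if support else set()

# {"A"} and {"A", "B"} have the same support (rows 0 and 1), so the
# closed expression for both is the conjunction A AND B.
closed = close({"A"})
```

Minimal generators go the other way: they are the smallest itemsets whose closure is a given closed expression, giving the most general description of the same object set.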
Generating all maximal independent sets of bounded-degree hypergraphs
 In Proceedings of the Conference on Computational Learning Theory (COLT)
, 1997
Abstract

Cited by 12 (1 self)
We show that any monotone function with a read-k CNF representation can be learned in terms of its DNF representation with membership queries alone, in time polynomial in the DNF size and n (the number of variables), assuming k is some fixed constant. The problem is motivated by the well-studied open problem of enumerating all maximal independent sets of a given hypergraph. Our algorithm gives a solution for the bounded-degree case and works even if the hypergraph is not given as input, but rather only queries are available as to which sets are independent.
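The enumeration problem can be pictured with a brute-force sketch on a tiny hypergraph. This exhaustive search is only for illustration (it is exponential in n, unlike the paper's bounded-degree algorithm), but it shows the query setting: a set is independent iff it contains no hyperedge, checked via an oracle.

```python
from itertools import combinations

def is_independent(S, edges):
    """Independence oracle: S is independent iff no hyperedge lies inside it."""
    return not any(e <= S for e in edges)

def maximal_independent_sets(n, edges):
    """Enumerate all maximal independent sets by exhaustive search (toy only)."""
    independent = [set(c)
                   for r in range(n + 1)
                   for c in combinations(range(n), r)
                   if is_independent(set(c), edges)]
    return [S for S in independent
            if not any(S < T for T in independent)]

# The path 0-1-2 viewed as a hypergraph whose edges are its 2-element edges.
mis = maximal_independent_sets(3, [{0, 1}, {1, 2}])
```

On this path the maximal independent sets are {1} and {0, 2}; the hard question the paper addresses is producing such a list in time polynomial in its length when only the oracle, not the edge list, is available.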
Learning random log-depth decision trees under the uniform distribution
, 2004
Abstract

Cited by 11 (0 self)