How Many Queries are Needed to Learn?
, 1996
Abstract

Cited by 65 (8 self)
We investigate the query complexity of exact learning in the membership and (proper) equivalence query model. We give a complete characterization of concept classes that are learnable with a polynomial number of polynomial-sized queries in this model. We give applications of this characterization, including results on learning a natural subclass of DNF formulas, and on learning with membership queries alone. Query complexity has previously been used to prove lower bounds on the time complexity of exact learning. We show a new relationship between query complexity and time complexity in exact learning: if any "honest" class is exactly and properly learnable with polynomial query complexity, but not learnable in polynomial time, then P ≠ NP. In particular, we show that an honest class is exactly polynomial-query learnable if and only if it is learnable using an oracle for Σ^p_4.
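As a toy illustration of the membership-query side of this model (not the paper's algorithm), a monotone conjunction over n Boolean variables can be exactly identified with n membership queries; the function name and sample target below are hypothetical:

```python
def learn_monotone_conjunction(n, membership):
    """Exactly identify a monotone conjunction over n Boolean variables
    using n membership queries: flip each variable off and watch the output."""
    relevant = []
    for i in range(n):
        x = [1] * n
        x[i] = 0                  # turn variable i off, all others on
        if membership(x) == 0:    # output drops, so variable i is in the target
            relevant.append(i)
    return relevant

# hypothetical target concept: x0 AND x3
target = lambda x: int(x[0] == 1 and x[3] == 1)
print(learn_monotone_conjunction(5, target))  # [0, 3]
```

Each query isolates one variable, so correctness follows from monotonicity: the output can only drop if the flipped variable appears in the conjunction.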
On learning monotone DNF under product distributions
 In Proceedings of the Fourteenth Annual Conference on Computational Learning Theory
, 2001
Abstract

Cited by 32 (15 self)
We show that the class of monotone 2^{O(√log n)}-term DNF formulae can be PAC learned in polynomial time under the uniform distribution from random examples only. This is an exponential improvement over the best previous polynomial-time algorithms in this model, which could learn monotone o(log² n)-term DNF. We also show that various classes of small constant-depth circuits which compute monotone functions are PAC learnable in polynomial time under the uniform distribution. All of our results extend to learning under any constant-bounded product distribution.
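Uniform-distribution learners in this line of work typically estimate Fourier coefficients of the target from random examples alone. A minimal sketch of that estimation step, with an invented target function (this is not the paper's algorithm):

```python
import random

def fourier_coefficient(f, S, n, samples=20000):
    """Estimate the Fourier coefficient f_hat(S) = E[f(x) * chi_S(x)]
    over the uniform distribution on {0,1}^n, from random examples only."""
    total = 0
    for _ in range(samples):
        x = [random.randint(0, 1) for _ in range(n)]
        chi_s = (-1) ** sum(x[i] for i in S)   # parity character on S
        f_pm = 1 - 2 * f(x)                    # map {0,1} output to {+1,-1}
        total += f_pm * chi_s
    return total / samples

random.seed(0)
# hypothetical target: f(x) = x0 AND x1; its true coefficient on S = {0,1} is -1/2
est = fourier_coefficient(lambda x: int(x[0] and x[1]), {0, 1}, 4)
print(round(est, 2))  # close to -0.5
```

By Chernoff bounds, the sampling error shrinks as 1/√samples, which is what makes this estimation feasible from examples alone.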
Learning From a Consistently Ignorant Teacher
, 1994
Abstract

Cited by 22 (8 self)
One view of computational learning theory is that of a learner acquiring the knowledge of a teacher. We introduce a formal model of learning capturing the idea that teachers may have gaps in their knowledge. In particular, we consider learning from a teacher who labels examples "+" (a positive instance of the concept being learned), "−" (a negative instance of the concept being learned), and "?" (an instance with unknown classification), in such a way that knowledge of the concept class and all the positive and negative examples is not sufficient to determine the labelling of any of the examples labelled with "?". The goal of the learner is not to compensate for the ignorance of the teacher by attempting to infer "+" or "−" labels for the examples labelled with "?", but rather to learn (an approximation to) the ternary labelling presented by the teacher. Thus, the goal of the learner is still to acquire the knowledge of the teacher, but now the learner must also ...
Simple Learning Algorithms for Decision Trees and Multivariate Polynomials
 In Proceedings of the 36th IEEE Symposium on the Foundations of Computer Science
, 1995
Abstract

Cited by 17 (5 self)
Two techniques appear in the literature for learning multivariate polynomials and decision trees: learning decision trees under the uniform distribution via the Fourier spectrum [Kushilevitz and Mansour 93, Jackson 94], and learning decision trees and multivariate polynomials under any distribution via lattice theory [Bshouty 94, Schapire and Sellie 93]. These two approaches have been used to prove the learnability of many other interesting classes, such as CDNF (polynomial-size DNF and CNF) under any distribution, and DNF and AC⁰ under the uniform distribution.
Redescription mining: Structure theory and algorithms
 In AAAI
, 2005
Abstract

Cited by 12 (4 self)
We introduce a new data mining problem—redescription mining—that unifies considerations of conceptual clustering, constructive induction, and logical formula discovery. Redescription mining begins with a collection of sets, views it as a propositional vocabulary, and identifies clusters of data that can be defined in at least two ways using this vocabulary. The primary contributions of this paper are conceptual and theoretical: (i) we formally study the space of redescriptions underlying a dataset and characterize their intrinsic structure, (ii) we identify impossibility as well as strong possibility results about when mining redescriptions is feasible, (iii) we present several scenarios of how we can custom-build redescription mining solutions for various biases, and (iv) we outline how many problems studied in the larger machine learning community are really special cases of redescription mining. By highlighting its broad scope and relevance, we aim to establish the importance of redescription mining and make the case for a thrust in this new line of research.
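To make the notion concrete: a redescription is a pair of distinct expressions over the set vocabulary that pick out exactly the same items. A brute-force toy search over intersections of named sets (the set names and data are invented, and this is not the paper's algorithm):

```python
from itertools import combinations

def redescriptions(sets):
    """Report pairs (name, 'A AND B') where the intersection of two descriptor
    sets equals a third set exactly -- a brute-force toy redescription search."""
    names = list(sets)
    found = []
    for target in names:
        for a, b in combinations(names, 2):
            if target not in (a, b) and sets[a] & sets[b] == sets[target]:
                found.append((target, f"{a} AND {b}"))
    return found

# hypothetical descriptor sets over items 0..4
data = {'A': {0, 1, 2, 3}, 'B': {2, 3, 4}, 'C': {2, 3}}
print(redescriptions(data))  # [('C', 'A AND B')]
```

Here the items labelled C are exactly those labelled both A and B, so "C" and "A AND B" are two definitions of the same cluster.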
Learning a circuit by injecting values
, 2008
Abstract

Cited by 11 (5 self)
We propose a new model for exact learning of acyclic circuits using experiments in which chosen values may be assigned to an arbitrary subset of wires internal to the circuit, but only the value of the circuit’s single output wire may be observed. We give polynomial-time algorithms to learn (1) arbitrary circuits with logarithmic depth and constant fan-in and (2) Boolean circuits of constant depth and unbounded fan-in over AND, OR, and NOT gates. Thus, both AC0 and NC1 circuits are learnable in polynomial time in this model. Negative results show that some restrictions on depth, fan-in and gate types are necessary: exponentially many experiments are required to learn AND/OR circuits of unbounded depth and fan-in; it is NP-hard to learn AND/OR circuits of unbounded depth and fan-in 2; and it is NP-hard to learn circuits of constant depth and unbounded fan-in over AND, OR, and threshold gates, even when the target circuit is known to contain at most one threshold gate and that threshold gate has threshold 2. We also consider the effect of adding an oracle for behavioral equivalence. In this case there are polynomial-time algorithms to learn arbitrary circuits of constant fan-in and unbounded depth and to learn Boolean circuits with arbitrary fan-in and unbounded depth over AND, OR, and NOT gates. A corollary is that these two classes are PAC-learnable if experiments are available. Finally, we consider an extension of the model called the synchronous model. We show that an even more general class of circuits is learnable in this model. In particular, we are able to learn circuits with cycles.
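A value-injection experiment is easy to simulate: fix some internal wires, evaluate the circuit, and observe only the output. A minimal sketch on an invented three-gate circuit (the wire names and topology are hypothetical, not from the paper):

```python
def run_experiment(inputs, injections=None):
    """Tiny acyclic circuit: w1 = x0 AND x1, w2 = NOT x2, out = w1 OR w2.
    `injections` forces chosen internal wires to fixed values; only the
    single output wire `out` is observable."""
    inj = injections or {}
    w1 = inj.get('w1', inputs[0] & inputs[1])
    w2 = inj.get('w2', 1 - inputs[2])
    return inj.get('out', w1 | w2)

# with no injection, x2 = 0 makes w2 = 1 and masks w1 entirely
print(run_experiment([0, 0, 0]))             # 1
# forcing w2 low exposes the behaviour of the hidden AND gate on w1
print(run_experiment([1, 1, 0], {'w2': 0}))  # 1
print(run_experiment([1, 0, 0], {'w2': 0}))  # 0
```

The second and third experiments show the point of the model: injecting a value on w2 lets the learner probe a gate whose effect is otherwise invisible at the output.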
Generating all maximal independent sets of bounded-degree hypergraphs
 In Proceedings of the Conference on Computational Learning Theory (COLT)
, 1997
Abstract

Cited by 10 (1 self)
We show that any monotone function with a read-k CNF representation can be learned in terms of its DNF representation, with membership queries alone, in time polynomial in the DNF size and n (the number of variables), assuming k is some fixed constant. The problem is motivated by the well-studied open problem of enumerating all maximal independent sets of a given hypergraph. Our algorithm gives a solution for the bounded-degree case and works even if the hypergraph is not given as input, but only queries are available as to which sets are independent.
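The underlying hypergraph problem is easy to state in code. A brute-force enumeration for tiny instances (exponential in n, unlike the paper's algorithm; the example hypergraph is invented):

```python
from itertools import combinations

def maximal_independent_sets(n, edges):
    """Enumerate maximal independent sets of a hypergraph on vertices 0..n-1.
    A vertex set is independent if it contains no hyperedge entirely."""
    def independent(s):
        return not any(e <= s for e in edges)
    indep = [frozenset(c) for k in range(n + 1)
             for c in combinations(range(n), k)
             if independent(frozenset(c))]
    # keep only sets not strictly contained in another independent set
    return [s for s in indep if not any(s < t for t in indep)]

# hypothetical hypergraph: 3 vertices, one hyperedge {0, 1}
print(maximal_independent_sets(3, [frozenset({0, 1})]))
```

Here the maximal independent sets are {0, 2} and {1, 2}: each avoids swallowing the hyperedge {0, 1} whole and cannot be extended. The enumeration problem the paper targets is to list exactly these sets without examining all 2^n subsets.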
Computing Intersections of Horn Theories for Reasoning with Models
 Artificial Intelligence
, 1998
Abstract

Cited by 9 (9 self)
We consider computational issues when combining logical knowledge bases represented by their characteristic models; in particular, we study taking their logical intersection.
Exact Learning when Irrelevant Variables Abound
, 1999
Abstract

Cited by 9 (2 self)
We prove the following results. Any Boolean function of O(log n) relevant variables can be exactly learned with a set of non-adaptive membership queries alone, and a minimum-sized decision tree representation of the function constructed, in polynomial time. In contrast, such a function cannot be exactly learned with equivalence queries alone using general decision trees and other representation classes as hypotheses. Our results imply others which may be of independent interest. We show that truth-table minimization of decision trees can be done in polynomial time, complementing the well-known result of Masek that truth-table minimization of DNF formulas is NP-hard. The proofs of our negative results show that general decision trees and related representations are not learnable in polynomial time using equivalence queries alone, confirming a folklore theorem.
Learning large-alphabet and analog circuits with value injection queries
 In the 20th Annual Conference on Learning Theory
, 2007
Abstract

Cited by 8 (6 self)
We consider the problem of learning an acyclic discrete circuit with n wires, fan-in bounded by k, and alphabet size s using value injection queries. For the class of transitively reduced circuits, we develop the Distinguishing Paths Algorithm, which learns such a circuit using (ns)^{O(k)} value injection queries and time polynomial in the number of queries. We describe a generalization of the algorithm to the class of circuits with shortcut width bounded by b that uses (ns)^{O(k+b)} value injection queries. Both algorithms use value injection queries that fix only O(kd) wires, where d is the depth of the target circuit. We give a reduction showing that without such restrictions on the topology of the circuit, the learning problem may be computationally intractable when s = n^{Θ(1)}, even for circuits of depth O(log n). We then apply our large-alphabet learning algorithms to the problem of approximate learning of analog circuits whose gate functions satisfy a Lipschitz condition. Finally, we consider models in which behavioral equivalence queries are also available, and extend and improve the learning algorithms of [5] to handle general classes of gate functions that are polynomial-time learnable from counterexamples.