Results 1–9 of 9
Incremental concept learning for bounded data mining
 Information and Computation
, 1999
Abstract

Cited by 39 (29 self)
Important refinements of concept learning in the limit from positive data that considerably restrict the accessibility of input data are studied. Let c be any concept; every infinite sequence of elements exhausting c is called a positive presentation of c. In all learning models considered, the learning machine computes a sequence of hypotheses about the target concept from a positive presentation of it. With iterative learning, the learning machine, in making a conjecture, has access to its previous conjecture and the latest data item coming in. In k-bounded example-memory inference (k is a priori fixed), the learner is allowed to access, in making a conjecture, its previous hypothesis, its memory of up to k data items it has already seen, and the next element coming in. In the case of k-feedback identification, the learning machine, in making a conjecture, has access to its previous conjecture and the latest data item coming in, and, on the basis of this information, it can compute k items and query the database of previous data to find out, for each of the k items, whether or not it is in the database (k is again a priori fixed). In all cases, the sequence of conjectures has to converge to a hypothesis correctly describing the target concept.
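The iterative-learning protocol described above can be illustrated with a minimal sketch (not from the paper): the learner sees one data item at a time and may use only its previous conjecture plus that item. As a toy concept class, take the intervals c_n = {0, 1, ..., n}, with the hypothesis n encoding c_n; the learner and concept class here are illustrative choices, not the paper's.

```python
# Toy iterative learner: its only inputs are the previous hypothesis
# and the latest data item, exactly as in the iterative-learning model.
def iterative_learner(prev_hypothesis, item):
    """Conjecture an index n for the target interval {0, ..., n}."""
    if prev_hypothesis is None:
        return item
    return max(prev_hypothesis, item)

# A positive presentation of c_3 = {0, 1, 2, 3} (an infinite sequence
# exhausting the concept, truncated here for the demo).
presentation = [1, 0, 3, 2, 3, 1, 3]
h = None
trace = []
for x in presentation:
    h = iterative_learner(h, x)
    trace.append(h)
print(trace)  # [1, 1, 3, 3, 3, 3, 3] -- the conjectures converge to 3
```

After the maximum element of the target concept has appeared, the hypothesis never changes again, which is the convergence requirement of the model.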
Learning One-Variable Pattern Languages Very Efficiently on Average, in Parallel, and by Asking Queries
, 1997
Abstract

Cited by 17 (8 self)
A pattern is a finite string of constant and variable symbols. The language generated by a pattern is the set of all strings of constant symbols which can be obtained from the pattern by substituting nonempty strings for variables. We study the learnability of one-variable pattern languages in the limit with respect to the update time needed for computing a new single hypothesis and the expected total learning time taken until convergence to a correct hypothesis. Our results are as follows. First, we design a consistent and set-driven learner that, using the concept of descriptive patterns, achieves update time O(n^2 log n), where n is the size of the input sample. The best previously known algorithm for computing descriptive one-variable patterns requires time O(n^4 log n) (cf. Angluin [2]). Second, we give a parallel version of this algorithm that requires time O(log n) and O(n^3 / log n) processors on an EREW PRAM. Third, using a modified version of the sequential algorithm a...
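The definition in the abstract can be made concrete with a naive membership test for a one-variable pattern language (a hypothetical helper, not the paper's algorithm): with constants '0'/'1' and the single variable 'x', a word w belongs to L(p) iff substituting one nonempty string for every occurrence of 'x' yields w.

```python
def in_pattern_language(p, w):
    """Naive test: is w in the language of one-variable pattern p?"""
    k = p.count('x')
    if k == 0:
        return p == w           # no variable: languages are singletons
    n_const = len(p) - k
    # A substituted string s of length m must satisfy
    # n_const + k * m == len(w), with m >= 1 (substitutions are nonempty).
    if len(w) <= n_const or (len(w) - n_const) % k:
        return False
    m = (len(w) - n_const) // k
    # Try every length-m substring of w as the candidate substitution.
    for start in range(len(w) - m + 1):
        s = w[start:start + m]
        if p.replace('x', s) == w:
            return True
    return False

print(in_pattern_language("0x1x", "0010"))  # True  (x -> "0")
print(in_pattern_language("0x1x", "0110"))  # False
```

This brute-force check runs in roughly quadratic time; the point of the paper is that computing a *descriptive* pattern for a whole sample can be done far faster than the previously known O(n^4 log n) bound.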
Lange and Wiehagen's Pattern Language Learning Algorithm: An Average-Case Analysis with respect to its Total Learning Time
 Annals of Mathematics and Artificial Intelligence
, 1998
Abstract

Cited by 6 (4 self)
The present paper deals with the best-case, worst-case and average-case behavior of Lange and Wiehagen's (1991) pattern language learning algorithm with respect to its total learning time. Pattern languages have been introduced by Angluin (1980) and are defined as follows: Let A = {0, 1, ...} be any nonempty finite alphabet containing at least two elements. Furthermore, let X = {x_i | i ∈ ℕ} be an infinite set of variables such that A ∩ X = ∅. Patterns are nonempty strings over A ∪ X. L(π), the language generated by pattern π, is the set of strings which can be obtained by substituting non-null strings from A^* for the variables of the pattern π. Lange and Wiehagen's (1991) algorithm learns the class of all pattern languages in the limit from text. We analyze this algorithm with respect to its total learning time behavior, i.e., the overall time taken by the algorithm until convergence. For every pattern π containing k different variables it is shown that the total learning time is O(|π|^2 log_{|A|}(|A| + k)) in the best case and unbounded in the worst case. Furthermore, we estimate the expectation of the total learning time. In particular, it is shown that Lange and Wiehagen's algorithm possesses an expected total learning time of O(2^k ...
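The definition of L(π) above can be illustrated by enumerating a finite slice of a pattern language: substitute every nonempty string up to a length bound for each variable. This is a sketch of the *definition*, not of Lange and Wiehagen's learning algorithm; the token-list pattern encoding is an assumption of this demo.

```python
from itertools import product

def words_up_to(alphabet, max_len):
    """All nonempty words over the alphabet up to the given length."""
    for n in range(1, max_len + 1):
        for t in product(alphabet, repeat=n):
            yield ''.join(t)

def language_sample(pattern, alphabet=('0', '1'), max_sub_len=2):
    """pattern is a token list: alphabet symbols or variable names like 'x1'.
    Returns the words of L(pattern) obtainable with substitutions of
    length <= max_sub_len (a finite approximation of L(pattern))."""
    variables = sorted({t for t in pattern if t not in alphabet})
    subs = list(words_up_to(alphabet, max_sub_len))
    result = set()
    for choice in product(subs, repeat=len(variables)):
        sigma = dict(zip(variables, choice))          # one substitution
        result.add(''.join(sigma.get(t, t) for t in pattern))
    return result

sample = language_sample(['x1', '0', 'x1'])
print(sorted(sample))
```

For the pattern x1 0 x1 this yields {'000', '101', '00000', '01001', '10010', '11011'}: every occurrence of the same variable receives the same non-null substitution, which is what makes the language class nontrivial to learn from text.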
A complete and tight average-case analysis of learning monomials
 In Proc. 16th Int'l Sympos. on Theoretical Aspects of Computer Science, STACS'99
, 1999
Abstract

Cited by 4 (0 self)
We advocate analyzing the average complexity of learning problems. An appropriate framework for this purpose is introduced. Based on it, we consider the problem of learning monomials and the special case of learning monotone monomials in the limit and for online predictions in two variants: from positive data only, and from positive and negative examples. The well-known Wholist algorithm is completely analyzed, in particular its average-case behavior with respect to the class of binomial distributions. We consider different complexity measures: the number of mind changes, the number of prediction errors, and the total learning time. Tight bounds are obtained, implying that worst-case bounds are too pessimistic. On average, learning can be achieved exponentially faster. Furthermore, we study a new learning model, stochastic finite learning, in which, in contrast to PAC learning, some information about the underlying distribution is given and the goal is to find a correct (not only approximately correct) hypothesis. We develop techniques to obtain good bounds for stochastic finite learning from a precise average-case analysis of strategies for learning in the limit and illustrate our approach for the case of learning monomials.
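The Wholist algorithm mentioned above is simple enough to sketch (this is the standard textbook version; the paper's exact variant and analysis may differ): the hypothesis is the conjunction of a set of literals, initially all 2n of them, and each positive example deletes the literals it falsifies.

```python
def wholist_update(hypothesis, positive_example):
    """Keep only the literals the positive example satisfies.
    A literal is a pair (index, value): variable i must equal value."""
    return {(i, v) for (i, v) in hypothesis if positive_example[i] == v}

def predict(hypothesis, x):
    """The hypothesis is the conjunction of its surviving literals."""
    return all(x[i] == v for (i, v) in hypothesis)

n = 3
h = {(i, v) for i in range(n) for v in (0, 1)}   # start with all 2n literals
# Target monomial: x0 AND NOT x2, so positive examples have x0=1, x2=0.
for ex in [(1, 0, 0), (1, 1, 0)]:
    h = wholist_update(h, ex)
print(sorted(h))  # [(0, 1), (2, 0)] -- exactly the target's literals survive
```

Each literal can be deleted at most once, which bounds the number of mind changes by 2n; the abstract's point is that the *average-case* behavior under binomial distributions is exponentially better than this worst-case view suggests.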
Efficient Learning of One-Variable Pattern Languages from Positive Data
, 1996
Abstract

Cited by 3 (3 self)
A pattern is a finite string of constant and variable symbols. The language generated by a pattern is the set of all strings of constant symbols which can be obtained from the pattern by substituting nonempty strings for variables. Descriptive patterns are a key concept for inductive inference of pattern languages. A pattern π is descriptive for a given sample if the sample is contained in the language L(π) generated by π and no other pattern having this property generates a proper subset of L(π). The best previously known algorithm for computing descriptive one-variable patterns requires time O(n^4 log n), where n is the size of the sample. We present a simpler and more efficient algorithm solving the same problem in time O(n^2 log n). In addition, we give a parallel version of this algorithm that requires time O(log n) and O(n^3 / log n) processors on an EREW PRAM. Previously, no parallel algorithm was known for this problem. Using a ...
Variants of Iterative Learning
, 1998
Abstract

Cited by 2 (2 self)
We investigate the principal learning capabilities of iterative learners in some more detail. Thereby, we confine ourselves to studying the learnability of indexable concept classes. The general scenario of iterative learning is as follows. An iterative learner successively takes as input one element of a text (an informant) for a target concept as well as its previously made hypothesis, and outputs a new hypothesis about the target concept. The sequence of hypotheses has to converge to a hypothesis correctly describing the target concept.
From Learning in the Limit to Stochastic Finite Learning
, 2005
Abstract

Cited by 1 (0 self)
Inductive inference can be considered one of the fundamental paradigms of algorithmic learning theory. We survey recently obtained results and show their impact on potential applications. Since the main focus is put on the efficiency of learning, we also deal with postulates of naturalness and their impact on the efficiency of limit learners. In particular, we look at the learnability of the class of all pattern languages and ask whether or not one can design a learner within the paradigm of learning in the limit that is nevertheless efficient. For achieving this goal, we deal with iterative learning and its interplay with the hypothesis spaces allowed. This interplay also has a severe impact on the postulates of naturalness satisfiable by any learner. Furthermore, since a limit learner is only supposed to converge, one never knows at any particular learning stage whether or not the learner has already succeeded. The resulting uncertainty may be prohibitive in many applications. We survey results that resolve this problem by outlining a new learning model, called stochastic finite learning. Though pattern languages can neither be finitely inferred from positive data nor PAC-learned, our approach can be extended to a stochastic finite learner that exactly infers all pattern languages from positive data with high confidence. Finally, we apply the techniques developed to the problem of learning conjunctive concepts.
Inductive Inference of Approximations for Recursive Concepts
, 2005
Abstract
This paper provides a systematic study of inductive inference of indexable concept classes in learning scenarios where the learner is successful if its final hypothesis describes a finite variant of the target concept, i.e., learning with anomalies. Learning from positive data only and from both positive and negative data is distinguished. The following learning models are studied: learning in the limit, finite identification, set-driven learning, conservative inference, and behaviorally correct learning. The attention is focused on the case that the number of allowed anomalies is finite but not a priori bounded. However, results for the special case of learning with an a priori bounded number of anomalies are presented, too. Characterizations of the learning models with anomalies in terms of finite telltale sets are provided. The observed varieties in the degree of recursiveness of the relevant telltale sets are already sufficient to quantify the differences in the corresponding learning models with anomalies. Finally, a complete picture concerning the relations of all models of learning with and without anomalies mentioned above is derived.
Learning Approximations of Recursive Concepts
, 2001
Abstract
This paper provides a systematic study of inductive inference of indexable concept classes in learning scenarios where the learner is successful if its final hypothesis describes a finite variant of the target concept, i.e., learning with anomalies. Learning from positive data only and from both positive and negative data is distinguished. The following learning models are studied: learning in the limit, finite identification, set-driven learning, conservative inference, and behaviorally correct learning. The attention is focused on the case that the number of allowed anomalies is finite but not a priori bounded. However, results for the special case of learning with an a priori bounded number of anomalies are presented, too. Characterizations of the learning models with anomalies in terms of finite telltale sets are provided. The observed varieties in the degree of recursiveness of the relevant telltale sets are already sufficient to quantify the differences in the corr...