Results 1 - 9 of 9
The Weighted Majority Algorithm
, 1994
Abstract

Cited by 678 (39 self)
We study the construction of prediction algorithms in a situation in which a learner faces a sequence of trials, with a prediction to be made in each, and the goal of the learner is to make few mistakes. We are interested in the case that the learner has reason to believe that one of some pool of known algorithms will perform well, but the learner does not know which one. A simple and effective method, based on weighted voting, is introduced for constructing a compound algorithm in such a circumstance. We call this method the Weighted Majority Algorithm. We show that this algorithm is robust in the presence of errors in the data. We discuss various versions of the Weighted Majority Algorithm and prove mistake bounds for them that are closely related to the mistake bounds of the best algorithms of the pool. For example, given a sequence of trials, if there is an algorithm in the pool A that makes at most m mistakes then the Weighted Majority Algorithm will make at most c(log |A| + m) mi...
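The voting-and-reweighting scheme the abstract describes can be sketched as follows. This is an illustrative implementation, not the paper's exact formulation; the penalty factor `beta` and the parameter names are ours (the paper analyzes a family of such penalty factors):

```python
def weighted_majority(expert_preds, outcomes, beta=0.5):
    """Run the Weighted Majority Algorithm over a sequence of trials.

    expert_preds[t][i] is expert i's 0/1 prediction on trial t;
    outcomes[t] is the true 0/1 label.  Each wrong expert's weight
    is multiplied by beta after the trial."""
    weights = [1.0] * len(expert_preds[0])
    mistakes = 0
    for preds, y in zip(expert_preds, outcomes):
        # Predict with the weighted majority of the experts' votes.
        vote_for_1 = sum(w for w, p in zip(weights, preds) if p == 1)
        vote_for_0 = sum(w for w, p in zip(weights, preds) if p == 0)
        guess = 1 if vote_for_1 >= vote_for_0 else 0
        if guess != y:
            mistakes += 1
        # Penalize every expert that predicted incorrectly.
        weights = [w * beta if p != y else w
                   for w, p in zip(weights, preds)]
    return mistakes
```

With `beta = 1/2` this reproduces the mistake bound quoted in the abstract: the compound learner's mistakes are within a constant factor of `log |A| + m`, where `m` is the mistake count of the best expert in the pool.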
Infinitary Self Reference in Learning Theory
, 1994
Abstract

Cited by 18 (6 self)
Kleene's Second Recursion Theorem provides a means for transforming any program p into a program e(p) which first creates a quiescent self copy and then runs p on that self copy together with any externally given input. e(p), in effect, has complete (low level) self knowledge, and p represents how e(p) uses its self knowledge (and its knowledge of the external world). Infinite regress is not required since e(p) creates its self copy outside of itself. One mechanism to achieve this creation is a self-replication trick isomorphic to that employed by single-celled organisms. Another is for e(p) to look in a mirror to see which program it is. In 1974 the author published an infinitary generalization of Kleene's theorem which he called the Operator Recursion Theorem. It provides a means for obtaining an (algorithmically) growing collection of programs which, in effect, share a common (also growing) mirror from which they can obtain complete low level models of themselves and the other prog...
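The self-replication trick mentioned in the abstract can be illustrated with a quine-style self copy in Python. This is an illustrative sketch, not the paper's formal construction; the names `template`, `self_copy`, and `e_of_p` are ours:

```python
# A template which, formatted with its own repr, reproduces the exact
# text of these two lines -- a complete low-level self copy created
# "outside" the program, with no infinite regress.
template = 'template = {0!r}\nself_copy = template.format(template)'
self_copy = template.format(template)

# In the spirit of e(p) from the abstract: run an arbitrary policy p
# on the self copy together with an external input x.
def e_of_p(p, x):
    return p(self_copy, x)

# Example policy (hypothetical): report the self model's length.
print(e_of_p(lambda src, x: (len(src), x), "input"))
```

Executing `self_copy` as a program reconstructs `self_copy` itself, which is the fixed-point property the recursion theorem guarantees.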
On identification by teams and probabilistic machines
 Lecture Notes in Artificial Intelligence
, 1995
Probabilistic Limit Identification up to "Small" Sets?
Abstract

Cited by 1 (0 self)
In this paper we study limit identification of total recursive functions in the case when "small" sets of errors are allowed. We formalize the notion of a "small" set in a very general way: we define a notion of measure for subsets of the natural numbers and consider as small those sets that are subsets of sets of measure zero. We study the relations between the classes of functions identifiable up to "small" sets for different choices of measure. In particular, we focus on the properties of probabilistic limit identification. We show that, regardless of the particular measure, one can always identify a strictly larger class of functions with probability 1/(n+1) than with probability 1/n. Moreover, for computable measures we show that if there are no sets of arbitrarily small nonzero measure, then identifiability of a set of functions with probability larger than 1/(n+1) implies identifiability of the same set with probability 1/n. Otherwise (when sets of arbitrarily small nonzero measure do exist), one can always identify a strictly larger class of functions with probability (n+1)/(2n+1) than with probability n/(2n−1), and identifiability with probability larger than (n+1)/(2n+1) implies identifiability with probability n/(2n−1).
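What "identification with probability 1/(n+1)" means can be illustrated by the simplest kind of probabilistic learner: a uniform random choice among deterministic strategies. This is a toy sketch with names of our choosing, not the paper's construction or its hierarchy results:

```python
import random

def random_strategy_learner(strategies, rng):
    # A probabilistic learner realized as a uniform random choice
    # among n + 1 deterministic strategies.
    return rng.choice(strategies)

# If exactly one of the n + 1 strategies identifies the target class,
# this learner succeeds with probability 1 / (n + 1).
n = 3
strategies = list(range(n + 1))   # strategy 0 is the only successful one
rng = random.Random(0)
trials = 100_000
wins = sum(random_strategy_learner(strategies, rng) == 0
           for _ in range(trials))
print(wins / trials)  # close to 1 / (n + 1) = 0.25
```

The paper's theorems concern which classes of functions such probabilistic learners can identify at each probability level, and how the achievable levels depend on the underlying measure.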
The Complexity of Learning SUBSEQ(A)
Abstract

Cited by 1 (1 self)
Higman showed that if A is any language then SUBSEQ(A) is regular, where SUBSEQ(A) is the language of all subsequences of strings in A. We consider the following inductive inference problem: given A(ε), A(0), A(1), A(00),... learn, in the limit, a DFA for SUBSEQ(A). We consider this model of learning and the variants of it that are usually studied in inductive inference: anomalies, mind changes, and teams.
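The objects involved are easy to make concrete. A sketch of the subsequence relation and of SUBSEQ(A) for a finite language A (function names are ours; Higman's theorem is what guarantees SUBSEQ(A) stays regular even when A is infinite):

```python
from itertools import combinations

def is_subseq(u, w):
    # u is a subsequence of w: the characters of u occur in w in order.
    # Membership in an iterator consumes it, enforcing the ordering.
    it = iter(w)
    return all(c in it for c in u)

def subseq_closure(A):
    # SUBSEQ(A) for a *finite* language A: every subsequence of
    # every string in A (choose any r positions of each string).
    return {"".join(s)
            for w in A
            for r in range(len(w) + 1)
            for s in combinations(w, r)}

print(sorted(subseq_closure({"ab", "ba"})))  # ['', 'a', 'ab', 'b', 'ba']
```

The learning problem in the abstract is harder than this closure computation: the learner only sees the characteristic function of A one string at a time, and must converge to a DFA for SUBSEQ(A) in the limit.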
Learning Recursive Functions: A Survey
, 2008
Abstract
Studying the learnability of classes of recursive functions has attracted considerable interest for at least four decades. Starting with Gold’s (1967) model of learning in the limit, many variations, modifications and extensions have been proposed. These models differ in some of the following: the mode of convergence, the requirements intermediate hypotheses have to fulfill, the set of allowed learning strategies, the source of information available to the learner during the learning process, the set of admissible hypothesis spaces, and the learning goals. A considerable amount of work done in this field has been devoted to the characterization of function classes that can be learned in a given model, the influence of natural, intuitive postulates on the resulting learning power, the incorporation of randomness into the learning process, the complexity of learning, among others. On the occasion of Rolf Wiehagen’s 60th birthday, the last four decades of research in that area are surveyed, with a special focus on Rolf Wiehagen’s work, which has made him one of the most influential scientists in the theory of learning recursive functions.
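Gold's (1967) model, the starting point of the survey, can be sketched as identification by enumeration: after each new example the learner outputs the first hypothesis in a fixed enumeration consistent with everything seen so far, and "learning in the limit" means the guess sequence eventually stabilizes on a correct hypothesis. The toy hypothesis class and target below are ours:

```python
def identify_by_enumeration(data, hypotheses):
    """Return the index of the first hypothesis consistent with the data."""
    for i, h in enumerate(hypotheses):
        if all(h(x) == y for x, y in data):
            return i
    return None

# Toy hypothesis class (ours): identity, doubling, squaring.
hypotheses = [lambda x: x, lambda x: 2 * x, lambda x: x * x]
target = hypotheses[2]
stream = [(x, target(x)) for x in range(5)]

# Re-guess after each example; the guesses stabilize on the correct
# index once the data has ruled out the earlier hypotheses.
guesses = [identify_by_enumeration(stream[:t + 1], hypotheses)
           for t in range(5)]
print(guesses)  # [0, 0, 2, 2, 2]
```

The variations surveyed in the paper change the ingredients of this picture: when the guess sequence must converge, what intermediate guesses must satisfy, what information the learner receives, and which hypothesis spaces are admissible.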