Results 1 - 6 of 6
Inductive Inference with Procrastination: Back to Definitions
 Fundamenta Informaticae
, 1999
"... In this paper, we reconsider the denition of procrastinating learning machines. In the original denition of Freivalds and Smith [FS93], constructive ordinals are used to bound mindchanges. We investigate possibility of using arbitrary linearly ordered sets to bound mindchanges in similar way. It ..."
Abstract

Cited by 8 (2 self)
In this paper, we reconsider the definition of procrastinating learning machines. In the original definition of Freivalds and Smith [FS93], constructive ordinals are used to bound mind changes. We investigate the possibility of using arbitrary linearly ordered sets to bound mind changes in a similar way. It turns out that using certain ordered sets it is possible to define inductive inference types different from the previously known ones. We investigate properties of the new inductive inference types and compare them to other types. This research was supported by Latvian Science Council Grant No. 93.599 and NSF Grant 9421640. Some of the results from this paper were presented earlier [AFS96]. The third author was supported in part by NSF Grant 9301339.

1 Introduction

We study inductive inference using the model developed by Gold [Gol67]. There is a well-known hierarchy of larger and larger classes of learnable sets of phenomena based on the number of times a learning machine is all...
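As a hedged sketch of the idea the abstract generalizes (the encoding of ω·2 as pairs and all names below are illustrative assumptions, not the paper's construction), a mind-change bound from a linearly ordered set can be pictured as a counter that the learner must strictly decrease at every mind change:

```python
# Toy sketch (my encoding, not the paper's construction): bounding mind
# changes by the linearly ordered set omega*2, encoded as pairs (k, n)
# compared lexicographically, so (1, n) > (0, m) for all n, m. The
# learner must strictly decrease its counter at every mind change, so a
# well-founded order permits only finitely many changes.

def run_learner(hypotheses, start=(1, 0)):
    """Replay a hypothesis stream, charging one strict counter decrease
    per mind change; raises once the bound is exhausted."""
    counter, current = start, None
    for h in hypotheses:
        if current is not None and h != current:
            if counter == (0, 0):
                raise RuntimeError("mind-change bound exhausted")
            # step down inside the current copy of omega, or drop below
            # the limit ordinal to an arbitrary finite value (here 5)
            counter = ((counter[0], counter[1] - 1) if counter[1] > 0
                       else (counter[0] - 1, 5))
        current = h
    return current, counter

print(run_learner(["p0", "p0", "p1", "p1", "p2"]))  # ('p2', (0, 4))
```

Note the asymmetry: crossing the limit ordinal lets the learner buy any finite number of further mind changes, which is exactly the extra power ordinal (and more general order-theoretic) bounds provide over plain counting.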
On the Classification of Computable Languages
, 1997
"... A onesided classifier for a given class of languages converges to 1 on every language from the class and outputs 0 innitely often on languages outside the class. A twosided classifier, on the other hand, converges to 1 on languages from the class and converges to 0 on languages outside the clas ..."
Abstract

Cited by 6 (4 self)
A one-sided classifier for a given class of languages converges to 1 on every language from the class and outputs 0 infinitely often on languages outside the class. A two-sided classifier, on the other hand, converges to 1 on languages from the class and converges to 0 on languages outside the class. The present paper investigates one-sided and two-sided classification for classes of computable languages. Theorems are presented that help assess the classifiability of natural classes. The relationships of classification to inductive learning theory and to structural complexity theory, in terms of Turing degrees, are studied. Furthermore, the special case of classification from only positive data is also investigated.
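The convergence conditions can be illustrated with a deliberately simple toy (my example, not from the paper; the chosen class is in fact even two-sidedly classifiable, so it only demonstrates the definitions):

```python
# Toy illustration (assumption, not from the paper): a classifier reads
# an enumeration of a language one element at a time and emits a 0/1
# guess after each element. For the toy class C = {languages containing
# 0}, the guess is 1 once 0 has appeared and 0 before that: the output
# converges to 1 on members of C and stays 0 (hence outputs 0 infinitely
# often) on non-members.

def classify(stream):
    seen_zero = False
    guesses = []
    for x in stream:
        seen_zero = seen_zero or (x == 0)
        guesses.append(1 if seen_zero else 0)
    return guesses

print(classify([3, 1, 0, 7]))   # [0, 0, 1, 1]: converges to 1
print(classify([3, 1, 2, 7]))   # [0, 0, 0, 0]: 0 on every prefix
```

The interesting classes in the paper are precisely those where no classifier can also converge to 0 outside the class, i.e. where one-sided is the best achievable.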
Trees and Learning
 Proceedings of the Ninth Conference on Computational Learning Theory (COLT), ACM Press
, 1996
"... We characterize FIN, EX and BClearning, as well as the corresponding notions of team learning, in terms of isolated branches on uniformly strongly recursive sequences of trees. Further, the more restrictive models of FINlearning and strongmonotonic BClearning can be characterized in terms of i ..."
Abstract

Cited by 5 (5 self)
We characterize FIN-, EX- and BC-learning, as well as the corresponding notions of team learning, in terms of isolated branches on uniformly strongly recursive sequences of trees. Further, the more restrictive models of FIN-learning and strong-monotonic BC-learning can be characterized in terms of isolated branches on a single tree. We discuss learning with additional information, where the learner receives an index for a strongly recursive tree such that the function to be learned is isolated on this tree. We show that EX-learning with this type of additional information is strictly more powerful than EX-learning.

1 Introduction

Inductive inference [1, 2, 4, 6, 10] deals with learning classes of recursive functions in the limit under certain convergence constraints. The most general setting is that of behaviorally correct learning (BC): for each prefix f(0)f(1)...f(n) of the recursive function f, the learner guesses a program for f; the learner succeeds if Universität Heidel...
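The prefix-by-prefix guessing described above can be sketched with the classic identification-by-enumeration learner over a finite toy class (my stand-in example; the paper works with arbitrary recursive functions, not this restricted class):

```python
# Toy sketch (assumption, not the paper's construction): learning in the
# limit by enumeration. The class is {constant functions c_k}, and each
# hypothesis is named by its constant k. On every prefix the learner
# outputs the least hypothesis consistent with the data seen so far, so
# on any constant function the guesses converge (EX-style) to the right
# constant.

def guess(prefix):
    """Least k such that c_k agrees with the prefix, or None if no
    constant function fits."""
    for k in range(max(prefix, default=0) + 1):
        if all(v == k for v in prefix):
            return k
    return None

f = [5, 5, 5, 5]                       # graph of the constant function c_5
guesses = [guess(f[:n + 1]) for n in range(len(f))]
print(guesses)   # [5, 5, 5, 5]: converges to hypothesis 5
```

EX-learning requires the sequence of guesses itself to converge to one correct program; BC-learning only requires that almost all guesses compute the right function, which is what makes it the most general setting mentioned above.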
Transformations That Preserve Learnability
 Algorithmic Learning Theory: Seventh International Workshop (ALT ’96), volume 1160 of Lecture Notes in Artificial Intelligence
, 1996
"... . We consider transformations (performed by general recursive operators) mapping recursive functions into recursive functions. These transformations can be considered as mapping sets of recursive functions into sets of recursive functions. A transformation is said to be preserving the identicati ..."
Abstract

Cited by 5 (0 self)
We consider transformations (performed by general recursive operators) mapping recursive functions into recursive functions. These transformations can be considered as mapping sets of recursive functions into sets of recursive functions. A transformation is said to preserve the identification type I if it always maps I-identifiable sets into I-identifiable sets. There are transformations preserving FIN but not EX, and there are transformations preserving EX but not FIN. However, transformations preserving EX_i always preserve EX_j for j < i.

1 Introduction

In his academic lecture (1872), before obtaining a professorship at Erlangen University, Felix Klein (1849-1925) presented an astonishing program for remaking geometry. The listeners were confused and even shocked. In this program (nowadays known as the Erlangen program), geometry was considered as "what remains invariant under motion transformations". It seemed unbelievable that a geometry textbook could have no ...
Learning State Machines in the Robot Moving Context
, 2001
"... In this paper we introduce a method for use state machines as topological maps for remembering the landmarks used to navigate the robot in the static environment. Also we explain how to generalize the state machine model to simplify the search and localization. This generalization process also allow ..."
Abstract

Cited by 1 (1 self)
In this paper we introduce a method for using state machines as topological maps that remember the landmarks used to navigate a robot in a static environment. We also explain how to generalize the state machine model to simplify search and localization. This generalization process also allows us to remove redundancy from our topological map. We find the shortest path between landmarks, represented as states in a state machine, using breadth-first search. For localizing the robot, we use a simple statistical method. For learning and generalizing the topological map, we use the RPNI (Regular Positive and Negative Inference) algorithm, which learns regular finite automata. We present the experimental results and explain how the environment was modelled.
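The shortest-path step over the landmark state machine can be sketched as a plain breadth-first search (a minimal sketch under my assumptions; the landmark names and adjacency encoding are hypothetical, not from the paper):

```python
# Sketch (assumption, not the paper's implementation): shortest landmark
# path in a topological map. States are landmarks, transitions are
# traversable edges; breadth-first search returns a fewest-edges route.
from collections import deque

def shortest_path(transitions, start, goal):
    """BFS over an adjacency dict {state: [neighbor states]}."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in transitions.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable from start

# hypothetical landmark map
rooms = {"dock": ["hall"], "hall": ["dock", "lab", "storage"],
         "lab": ["hall"], "storage": ["hall", "lab"]}
print(shortest_path(rooms, "dock", "lab"))  # ['dock', 'hall', 'lab']
```

Since BFS explores states in order of edge distance, the first path that reaches the goal is guaranteed to use the fewest transitions, which is the natural cost measure on an unweighted topological map.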
Counting Extensional Differences in BCLearning
"... Let BC be the model of behaviourally correct function learning as introduced by Barzdins [4] and Case and Smith [8]. We introduce a mind change hierarchy for BC, counting the number of extensional differences in the hypotheses of a learner. We compare the resulting models BC n to models from the lit ..."
Abstract
Let BC be the model of behaviourally correct function learning as introduced by Barzdins [4] and Case and Smith [8]. We introduce a mind change hierarchy for BC, counting the number of extensional differences in the hypotheses of a learner. We compare the resulting models BC_n to models from the literature and discuss confidence, team learning, and finitely defective hypotheses. Among other things, we prove that there is a trade-off between the number of semantic mind changes and the number of anomalies in the hypotheses. We also discuss consequences for language learning. In particular, we show that, in contrast to the case of function learning, the family of classes that are confidently BC-learnable from text is not closed under finite unions.
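The distinction between syntactic and extensional (semantic) mind changes can be illustrated with a small stand-in (my toy, not the paper's definitions; program equivalence is undecidable in general, so finite value tables stand in for the functions the programs compute):

```python
# Toy sketch (assumption, not the paper's definitions): a hypothesis is a
# pair (program source, finite graph of the function it computes). A
# syntactic mind change swaps the program text; an extensional mind
# change, the kind BC_n counts, swaps to a different function.

def mind_changes(hyps, key):
    """Count adjacent positions where key(hypothesis) changes."""
    return sum(1 for a, b in zip(hyps, hyps[1:]) if key(a) != key(b))

# hypothetical hypothesis stream: the second step rewrites the program
# without changing the function it computes
hyps = [("lambda x: x + x", (0, 2, 4)),
        ("lambda x: 2 * x", (0, 2, 4)),   # new program, same function
        ("lambda x: 3 * x", (0, 3, 6))]

syntactic = mind_changes(hyps, key=lambda h: h[0])
semantic = mind_changes(hyps, key=lambda h: h[1])
print(syntactic, semantic)   # 2 1
```

Counting only the semantic changes is what lets the BC_n hierarchy sit between plain BC and the classical syntactic mind-change hierarchy for EX.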