Results 1 - 3 of 3
Hierarchies of Probabilistic and Team FIN-Learning
Theoretical Computer Science, 1998
Abstract

Cited by 4 (2 self)
A FIN-learning machine M receives successive values of the function f it is learning and at some moment outputs a conjecture which should be a correct index of f. FIN learning has two extensions: (1) If M flips fair coins and learns a function with a certain probability p, we have FIN⟨p⟩-learning. (2) When n machines simultaneously try to learn the same function f and at least k of these machines output correct indices of f, we have learning by a [k, n]FIN team. Sometimes a team or a probabilistic learner can simulate another one, if their probabilities p1, p2 (or team success ratios k1/n1, k2/n2) are close enough [DKV92a, DK96]. On the other hand, there are cutpoints r which make simulation of FIN⟨p2⟩ by FIN⟨p1⟩ impossible whenever p2 ≤ r < p1. Cutpoints above 10/21 are known [DK96]. We show that the problem, for given ki, ni, of determining whether [k1, n1]FIN ⊆ [k2, n2]FIN is algorithmically solvable. The set of all FIN cutpoints is shown to b...
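The FIN model can be illustrated with a toy sketch. The real model ranges over programs for arbitrary computable functions; the fragment below is a hypothetical simplification that restricts the learner to a finite list of Python callables, and commits to a single conjecture (as FIN requires) only once the incoming values of f have ruled out every other candidate:

```python
def fin_learn(candidates, f, max_points=100):
    """Toy FIN learner over a finite candidate class.

    Reads f(0), f(1), ... and, once the data is consistent with exactly
    one candidate, outputs that candidate's index as its single, final
    conjecture. Returns None if max_points values do not suffice.
    """
    alive = set(range(len(candidates)))
    for x in range(max_points):
        y = f(x)
        # Discard every candidate the new data point contradicts.
        alive = {i for i in alive if candidates[i](x) == y}
        if len(alive) == 1:
            return alive.pop()  # the one conjecture FIN permits
    return None
```

For example, with candidates [x, 2x, x^2] and target f(x) = 2x, the value f(1) = 2 already separates the target from the other two candidates, so the learner commits after two data points. A probabilistic FIN⟨p⟩ learner would additionally consult fair coin flips before committing; a [k, n]FIN team would run n such learners and require at least k correct conjectures.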
On the Role of Search for Learning from Examples
 Journal of Experimental and Theoretical Artificial Intelligence
Abstract

Cited by 4 (0 self)
Gold [Gol67] discovered a fundamental enumeration technique, the so-called identification-by-enumeration, a simple but powerful class of algorithms for learning from examples (inductive inference). We introduce a variety of more sophisticated (and more powerful) enumeration techniques and characterize their power. We conclude with the thesis that enumeration techniques are even universal, in that each solvable learning problem in inductive inference can be solved by an adequate enumeration technique. This thesis is technically motivated and discussed. Keywords: learning from examples, learning by search, identification by enumeration, enumeration techniques.

1 Introduction

The role of search, for learning from examples, is examined in a theoretical setting. Gold's seminal paper [Gol67] on inductive inference introduced a simple but powerful learning technique which became known as identification-by-enumeration. Identification-by-enumeration begins with an infi...
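Gold's identification-by-enumeration can be sketched in a few lines. In this hypothetical fragment the enumeration is a finite Python list (Gold's setting uses an effective enumeration of all programs in a class): after each example, the learner conjectures the first hypothesis in the enumeration consistent with everything seen so far, and in the limit its conjectures stabilize on a correct one:

```python
def identify_by_enumeration(hypotheses, data_points):
    """Gold-style identification-by-enumeration (limit-learning sketch).

    After each example (x, y), conjecture the index of the FIRST
    hypothesis in the enumeration that agrees with every example seen
    so far (None if no hypothesis fits). Returns the conjecture sequence.
    """
    seen = []
    conjectures = []
    for x, y in data_points:
        seen.append((x, y))
        guess = next((i for i, h in enumerate(hypotheses)
                      if all(h(a) == b for a, b in seen)), None)
        conjectures.append(guess)
    return conjectures
```

With the enumeration [0, x, x + 1] and examples drawn from f(x) = x, the learner first guesses the constant-0 hypothesis (consistent with (0, 0)), then switches permanently to x once (1, 1) arrives; this mind-change-then-converge behavior is exactly what distinguishes limit learning from the one-shot FIN model above.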
Probabilistic and Team PFIN-type Learning: General Properties
Abstract

Cited by 3 (3 self)
We consider the probability hierarchy for Popperian FINite learning and study the general properties of this hierarchy. We prove that the probability hierarchy is decidable, i.e. there exists an algorithm that receives p1 and p2 and answers whether PFIN-type learning with the probability of success p1 is equivalent to PFIN-type learning with the probability of success p2. To prove our result, we analyze the topological structure of the probability hierarchy. We prove that it is well-ordered in descending ordering and order-equivalent to the ordinal ε0. This shows that the structure of the hierarchy is very complicated. Using similar methods, we also prove that, for PFIN-type learning, team learning and probabilistic learning are of the same power.
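One basic intuition behind the team/probability correspondence is worth making concrete: a learner that consults finitely many fair coin flips can be replaced by a deterministic team with one member per outcome string, so its success probability reappears as a team success ratio k/n. The sketch below is only this counting step under a hypothetical `succeeds` predicate; the actual equivalence proof for PFIN is far subtler, as the ε0 structure of the hierarchy indicates:

```python
from fractions import Fraction
from itertools import product

def success_probability(succeeds, num_flips):
    """Success probability of a learner using num_flips fair coin flips.

    Enumerates all 2^num_flips outcome strings and counts those on which
    the learner succeeds. Fixing each outcome string as one deterministic
    team member turns this probability into a team ratio k/n,
    with n = 2^num_flips and k the number of succeeding members.
    """
    outcomes = list(product([0, 1], repeat=num_flips))
    wins = sum(1 for o in outcomes if succeeds(o))
    return Fraction(wins, len(outcomes))
```

For instance, a learner that succeeds whenever either of its two coin flips comes up heads succeeds on 3 of the 4 outcome strings, i.e. with probability 3/4, matching a [3, 4] team of the four fixed-outcome members. Fraction is used so that ratios such as 3/4 and 10/21 are compared exactly rather than in floating point.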