Probabilistic and Team PFIN-type Learning: General Properties
Abstract

Cited by 3 (3 self)
We consider the probability hierarchy for Popperian FINite learning and study the general properties of this hierarchy. We prove that the probability hierarchy is decidable, i.e. there exists an algorithm that receives p1 and p2 and answers whether PFIN-type learning with the probability of success p1 is equivalent to PFIN-type learning with the probability of success p2. To prove our result, we analyze the topological structure of the probability hierarchy. We prove that it is well-ordered in descending ordering and order-equivalent to the ordinal ε₀. This shows that the structure of the hierarchy is very complicated. Using similar methods, we also prove that, for PFIN-type learning, team learning and probabilistic learning are of the same power.
Learning from Streams
Abstract
Learning from streams is a process in which a group of learners separately obtain information about the target to be learned, but they can communicate with each other in order to learn the target. We are interested in machine models for learning from streams and study their learning power (as measured by the collection of learnable classes). We study how the power of learning from streams depends on the two parameters m and n, where n is the number of learners which each track a single stream of input and m is the number of learners (among the n learners) which have to find, in the limit, the right description of the target. We study for which combinations m, n and m′, n′ the following inclusion holds: every class learnable from streams with parameters m, n is also learnable from streams with parameters m′, n′. For the learning of uniformly recursive classes, we get a full characterization which depends only on the ratio m/n; but for general classes the picture is more complicated. Most of the non-inclusions in team learning carry over to non-inclusions with the same parameters in the case of learning from streams; but only few inclusions are preserved and some additional non-inclusions hold. Besides this, we also relate learning from streams to various other closely related and well-studied forms of learning: iterative learning from text, learning from incomplete text and learning from noisy text.