Results 1–10 of 33
Thinking May Be More Than Computing
 Cognition, 1986
Abstract

Cited by 18 (3 self)
The uncomputable parts of thinking (if there are any) can be studied in much the same spirit that Turing (1950) suggested for the study of its computable parts. We can develop precise accounts of cognitive processes that, although they involve more than computing, can still be modelled on the machines we call ‘computers’. In this paper, I want to suggest some ways that this might be done, using ideas from the mathematical theory of uncomputability (or Recursion Theory). And I want to suggest some uses to which the resulting models might be put. (The reader more interested in the models and their uses than the mathematics and its theorems, might want to skim or skip the mathematical parts.)
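One concrete way to model processes that go "beyond computing" on an ordinary computer, in the spirit this abstract describes, is limiting recursion from Recursion Theory: a trial-and-error procedure that may revise its guess finitely often and is only required to be eventually correct. A minimal sketch (the toy "program" and stage counts are illustrative assumptions, not from the paper):

```python
def halts_within(steps):
    """Toy 'program': a Collatz walk from 27; halts when it reaches 1."""
    n = 27
    for _ in range(steps):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return False

def limit_guess(stages):
    """Trial-and-error guesser: at stage t, guess that the program
    halts iff it halts within t steps.  The guess can flip from
    False to True at most once, so it stabilizes, and the limit
    is the true answer -- even though no single stage proves it."""
    g = False
    for t in range(stages):
        g = halts_within(t)
    return g

print(limit_guess(200))  # True (27's Collatz trajectory reaches 1 in 111 steps)
print(limit_guess(50))   # False: too few stages, the guess has not yet flipped
```

The point of the sketch is that the *limit* of the guesses is well defined and correct even when no computable procedure can certify, at any finite stage, that the guess is final.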
Computational Limits on Team Identification of Languages
1993
Abstract

Cited by 17 (7 self)
A team of learning machines is essentially a multiset of learning machines.
Learning Recursive Functions from Approximations
1995
Abstract

Cited by 17 (7 self)
Investigated is algorithmic learning, in the limit, of correct programs for recursive functions f from both input/output examples of f and several interesting varieties of approximate additional (algorithmic) information about f. Specifically considered, as such approximate additional information about f, are Rose's frequency computations for f and several natural generalizations from the literature, each generalization involving programs for restricted trees of recursive functions which have f as a branch. Considered as the types of trees are those with bounded variation, bounded width, and bounded rank. For the case of learning final correct programs for recursive functions, EX learning, where the additional information involves frequency computations, an insightful and interestingly complex combinatorial characterization of learning power is presented as a function of the frequency parameters. For EX learning (as well as for BC-learning, where a final sequence of cor...
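As a concrete (and much simplified) illustration of EX learning in the limit, the classical strategy of identification by enumeration conjectures, after each example, the first program in a fixed enumeration consistent with the data seen so far; it converges to a correct conjecture after finitely many mind changes. The hypothesis list and target below are hypothetical, not from the paper:

```python
# Illustrative enumeration of candidate functions (a stand-in for an
# enumeration of programs).
HYPOTHESES = [
    lambda x: x,          # h0: identity
    lambda x: 2 * x,      # h1: doubling
    lambda x: x * x,      # h2: squaring
    lambda x: x * x + 1,  # h3: squaring plus one
]

def ex_learner(examples):
    """Identification by enumeration: conjecture the index of the
    first hypothesis consistent with every example seen so far."""
    for i, h in enumerate(HYPOTHESES):
        if all(h(x) == y for x, y in examples):
            return i
    return None

# Target is h2 (squaring).  Feed its graph point by point and watch
# the conjectures converge: EX learning allows finitely many mind
# changes before settling on a correct program.
target = HYPOTHESES[2]
seen = []
conjectures = []
for x in range(4):
    seen.append((x, target(x)))
    conjectures.append(ex_learner(seen))
print(conjectures)  # [0, 0, 2, 2] -- one mind change, then convergence
```

On the first two examples the identity function happens to fit, so the learner conjectures h0; the example (2, 4) refutes it and the learner converges to the correct index 2.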
On the Impact of Forgetting on Learning Machines
 Journal of the ACM, 1993
Abstract

Cited by 10 (3 self)
This paper contributes toward the goal of understanding how a computer can be programmed to learn by isolating features of incremental learning algorithms that theoretically enhance their learning potential. In particular, we examine the effects of imposing a limit on the amount of information that a learning algorithm can hold in its memory as it attempts to ... (This work was facilitated by an international agreement under NSF Grant 9119540.)
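The effect of a memory limit of the kind this abstract studies can be sketched with a learner that retains only its k most recent examples; with too little memory it forgets evidence and revises conjectures it had already, correctly, settled on. The hypothesis list, example stream, and bound k are illustrative assumptions, not the paper's model:

```python
from collections import deque

# Illustrative hypothesis list (not from the paper).
HYPOTHESES = [lambda x: x, lambda x: 2 * x, lambda x: x * x]

def bounded_memory_learner(k):
    """A learner that may hold at most k examples: older examples
    are silently forgotten as new ones arrive."""
    memory = deque(maxlen=k)
    def step(example):
        memory.append(example)
        # Conjecture the first hypothesis consistent with what is remembered.
        for i, h in enumerate(HYPOTHESES):
            if all(h(x) == y for x, y in memory):
                return i
        return None
    return step

# Target is squaring (index 2).  With ample memory the learner
# converges; with k = 1 it keeps being fooled by single examples
# that an earlier hypothesis also explains.
target = lambda x: x * x
stream = [(x, target(x)) for x in (2, 3, 1, 0, 4)]
big = bounded_memory_learner(k=100)
small = bounded_memory_learner(k=1)
print([big(e) for e in stream])    # [1, 2, 2, 2, 2] -- converges to 2
print([small(e) for e in stream])  # [1, 2, 0, 0, 2] -- keeps oscillating
```

The forgetful learner abandons the correct conjecture whenever a remembered fragment of the data underdetermines the target, which is exactly the kind of loss of learning potential a memory bound can cause.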
On identification by teams and probabilistic machines
 Lecture Notes in Artificial Intelligence, 1995
Probabilistic and Team PFIN-type Learning: General Properties
Abstract

Cited by 3 (3 self)
We consider the probability hierarchy for Popperian FINite (PFIN) learning and study the general properties of this hierarchy. We prove that the probability hierarchy is decidable, i.e. there exists an algorithm that receives p1 and p2 and answers whether PFIN-type learning with the probability of success p1 is equivalent to PFIN-type learning with the probability of success p2. To prove our result, we analyze the topological structure of the probability hierarchy. We prove that it is well-ordered in descending ordering and order-equivalent to the ordinal ε₀. This shows that the structure of the hierarchy is very complicated. Using similar methods, we also prove that, for PFIN-type learning, team learning and probabilistic learning are of the same power.
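The equivalence the abstract mentions between team learning and probabilistic learning can be pictured with a toy model: a team of one-shot (FIN-style) learners succeeds when at least one member's single conjecture is correct, while a probabilistic learner that picks one of n conjectures uniformly at random succeeds with probability 1/n. All names and the success notion below are illustrative assumptions, not the paper's definitions:

```python
import random

def make_fin_learner(guess_index):
    """FIN-style learner: commits to one conjecture and never revises it."""
    def learner(examples):
        return guess_index  # one-shot conjecture, no mind changes
    return learner

def team_succeeds(team, examples, correct_index):
    """Toy team success: at least one member's conjecture is correct."""
    return any(member(examples) == correct_index for member in team)

def probabilistic_success_rate(correct_index, n, trials, rng):
    """Empirical success rate of a learner that picks one of n
    conjectures uniformly at random -- mirroring a team of n
    learners of which exactly one is right."""
    hits = sum(rng.randrange(n) == correct_index for _ in range(trials))
    return hits / trials

team = [make_fin_learner(i) for i in range(4)]
print(team_succeeds(team, [], correct_index=2))  # True: member 2 is right
rng = random.Random(0)
print(probabilistic_success_rate(2, 4, 100_000, rng))  # close to 0.25
```

The team of 4 always contains a correct member, while the randomized learner succeeds about a quarter of the time; translating between these two resource notions, with exact probability bounds, is the nontrivial content the abstract refers to.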
Choosing a Learning Team: a Topological Approach
 In the Proceedings of the Conference "Operator Algebras and Mathematical Physics", Constantza 2001
1994
Abstract

Cited by 3 (1 self)
In this paper we address the issue of how to compose teams. While this endeavor may sound like it belongs in the realm of psychology, it turns out that there are some interesting things that can be formally proved us... (This work was facilitated by an international agreement under NSF Grant 9119540.)
On Duality in Learning and the Selection of Learning Teams
 Information and Computation, 1996
Abstract

Cited by 3 (2 self)
Previous work in inductive inference dealt mostly with finding one or several machines (IIMs) that successfully learn a collection of functions. Herein we start with a class of functions and consider the learner set of all IIMs that are successful at learning the given class. Applying this perspective to the case of team inference leads to the notion of diversification for a class of functions. This enables us to distinguish between several flavors of IIMs all of which must be represented in a team learning the given class.

1 Introduction

All current theoretical approaches to machine learning tend to focus on a particular machine or a collection of machines and then find the class of concepts which can be learned by these machines under certain constraints defining a criterion of successful learning [AS83, OSW86]. In this paper we investigate the dual problem: Given some set of concepts, which algorithms can learn all those concepts? From [AGS89] we know that in the theory ...
Measure, Category and Learning Theory
Abstract

Cited by 2 (1 self)
Measure and category (or rather, their recursion theoretical counterparts) have been used in Theoretical Computer Science to make precise the intuitive notion "for most of the recursive sets." We use the notions of effective measure and category to discuss the relative sizes of inferrible sets, and their complements. We find that inferrible sets become large rather quickly in the standard hierarchies of learnability. On the other hand, the complements of the learnable sets are all large. (Support footnotes: supported in part by NSF Grant CCR 9253582; supported in part by Latvian Council of Science Grant 93.599 and NSF Grant 9119540; supported in part by NSF Grant 9301339; supported in part by NSF Grants 9119540 and 9301339; supported by the Deutsche Forschungsgemeinschaft (DFG) Grant Me 672/41.)

1 Introduction

Determining the relative size of denumerable sets, and those with cardinality ℵ₁, led mathematicians to develop the notions of measure and category [Oxt71]. Described in t...