Results 1-10 of 18
Elements of Scientific Inquiry
A Companion to the Philosophy of Mind, 1996
Abstract

Cited by 28 (8 self)
Algebra. Addison-Wesley, Reading, Massachusetts, 1982. [Freivalds et al., 1995] R. Freivalds, E. Kinber, & C. H. Smith. On the Intrinsic Complexity of Learning. Information and Computation, 123(1):64-71, 1995. [Fuhrmann, 1991] A. Fuhrmann. Theory contraction through base contraction. Journal of Philosophical Logic, 20:175-203, 1991. [Fulk & Jain, 1994] M. Fulk & S. Jain. Approximate inference and scientific method. Information and Computation, 114(2):179-191, 1994. [Fulk et al., 1994] M. Fulk, S. Jain, & D. Osherson. Open Problems in systems that learn. Journal of Computer and System Sciences, 49(3):589-604, 1994. [Fulk, 1988] M. Fulk. Saving the phenomenon: Requirements that inductive machines not contradict known data. Information and Computation, 79:193-209, 1988. [Fulk, 1990] M. Fulk. Prudence and other conditions on formal language learning. Information and Computation, 85(1):1-11, 1990. [Gaifman & Snir, 1982] H. Gaifman & M. Snir. Probabilities over rich langu...
Ignoring Data May Be the Only Way to Learn Efficiently, 1994
Abstract

Cited by 23 (13 self)
In designing learning algorithms it seems quite reasonable to construct them in a way such that all the data the algorithm has already obtained are correctly and completely reflected in the hypothesis the algorithm outputs on these data. However, this approach may totally fail, i.e., it may lead to the unsolvability of the learning problem, or it may exclude any efficient solution of it. In particular, we present a natural learning problem and prove that it can be solved in polynomial time if and only if the algorithm is allowed to ignore data.
Inductive Program Synthesis for Therapy Plan Generation
New Generation Computing, 1996
Abstract

Cited by 10 (9 self)
Planning is investigated in an area where classical STRIPS-like approaches usually fail. The application domain is therapy (i.e. repair) for complex dynamic processes. The peculiarities of this domain are discussed in some detail to motivate the characteristics of the inductive planning approach presented. Plans are intended to be run for process therapy. Thus, plans are programs. Because of the unavoidable vagueness and uncertainty of information about complex dynamic processes in the case of disturbance, therapy plan generation turns out to be inductive program synthesis. A graph-theoretically based approach to inductive therapy plan generation is developed and investigated from the inductive inference perspective. Particular emphasis is put on consistent and incremental learning of therapy plans. Basic application scenarios are developed and compared to each other. The inductive inference approach is invoked to develop and investigate a couple...
Training Sequences
Abstract

Cited by 9 (1 self)
This paper initiates a study in which it is demonstrated that certain concepts (represented by functions) can be learned, but only in the event that certain relevant subconcepts (also represented by functions) have been previously learned. In other words, the Soar project presents empirical evidence that learning how to learn is viable for computers, and this paper proves that doing so is the only way possible for computers to make certain inferences.
Learning in Friedberg Numberings
 Algorithmic Learning Theory: 18th International Conference, ALT 2007, Sendai, Japan, 2007, Proceedings. Springer, Lecture Notes in Artificial Intelligence
Abstract

Cited by 7 (1 self)
In this paper we consider learnability in some special numberings, such as Friedberg numberings, which contain all the recursively enumerable languages but have a simpler grammar equivalence problem compared to acceptable numberings. We show that every explanatorily learnable class can be learnt in some Friedberg numbering. However, such a result does not hold for behaviourally correct learning or finite learning. One can also show that some Friedberg numberings are so restrictive that all classes which can be explanatorily learnt in such Friedberg numberings have only finitely many infinite languages. We also study similar questions for several properties of learners such as consistency, conservativeness, prudence, iterativeness and non-U-shaped learning. Besides Friedberg numberings, we also consider the above problems for programming systems with a K-recursive grammar equivalence problem.
U-shaped learning may be necessary
Abstract

Cited by 7 (6 self)
U-shaped learning behaviour in cognitive development involves learning, unlearning and relearning. It occurs, for example, in learning irregular verbs. The prior cognitive science literature is occupied with how humans do it, for example, general rules versus tables of exceptions. This paper is mostly concerned with whether U-shaped learning behaviour may be necessary in the abstract mathematical setting of inductive inference, that is, in the computational learning theory following the framework of Gold. All notions considered are learning from text, that is, from positive data. Previous work showed that U-shaped learning behaviour is necessary for behaviourally correct learning but not for syntactically convergent learning in the limit (= explanatory learning). The present paper establishes the necessity for the whole hierarchy of classes of vacillatory learning, where a behaviourally correct learner has to satisfy the additional constraint that it vacillates in the limit between at most k grammars, where k ≥ 1. Non-U-shaped vacillatory learning is shown to be restrictive: every non-U-shaped vacillatorily learnable class is already learnable in the limit. Furthermore, if vacillatory learning with the parameter k = 2
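The learn-unlearn-relearn pattern from the abstract can be illustrated with a toy past-tense learner. This is an assumed simulation of the cognitive example (irregular verbs), not the paper's formal construction; the threshold of three regular examples and the data stream are invented for illustration.

```python
def conjectures(stream, verb="go"):
    """Yield the learner's past-tense guess for `verb` after each datum.

    Toy model: irregular forms are memorized as exceptions; a burst of
    regular (+ed) evidence makes the learner overgeneralize the rule and
    drop stored exceptions, until the exception is observed again.
    """
    exceptions = {}
    regular_count = 0
    for v, past in stream:
        if past == v + "ed":
            regular_count += 1
            if regular_count == 3:  # hypothetical threshold: rule takes over
                exceptions.clear()
        else:
            exceptions[v] = past
        yield exceptions.get(verb, verb + "ed")

data = [("go", "went"), ("walk", "walked"), ("jump", "jumped"),
        ("play", "played"), ("go", "went")]
print(list(conjectures(data)))  # ['went', 'went', 'went', 'goed', 'went']
```

The output shows the U-shape: correct ("went"), then incorrect after overgeneralization ("goed"), then correct again.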
Non-U-Shaped Vacillatory and Team Learning, 2008
Abstract

Cited by 6 (2 self)
U-shaped learning behaviour in cognitive development involves learning, unlearning and relearning. It occurs, for example, in learning irregular verbs. The prior cognitive science literature is occupied with how humans do it, for example, general rules versus tables of exceptions. This paper is mostly concerned with whether U-shaped learning behaviour may be necessary in the abstract mathematical setting of inductive inference, that is, in the computational learning theory following the framework of Gold. All notions considered are learning from text, that is, from positive data. Previous work showed that U-shaped learning behaviour is necessary for behaviourally correct learning but not for syntactically convergent learning in the limit (= explanatory learning). The present paper establishes the necessity for the hierarchy of classes of vacillatory learning, where a behaviourally correct learner has to satisfy the additional constraint that it vacillates in the limit between at most b grammars, where b ∈ {2, 3, ..., ∗}. Non-U-shaped vacillatory learning is shown to be restrictive: every non-U-shaped vacillatorily learnable class is already learnable in the limit. Furthermore, if vacillatory learning with the parameter b = 2 is possible then non-U-shaped behaviourally correct learning is also possible. But for b = 3, surprisingly, there is a class witnessing that this implication fails.
On the Role of Search for Learning from Examples
 Journal of Experimental and Theoretical Artificial Intelligence
Abstract

Cited by 4 (0 self)
Gold [Gol67] discovered a fundamental enumeration technique, the so-called identification-by-enumeration, a simple but powerful class of algorithms for learning from examples (inductive inference). We introduce a variety of more sophisticated (and more powerful) enumeration techniques and characterize their power. We conclude with the thesis that enumeration techniques are even universal in that each solvable learning problem in inductive inference can be solved by an adequate enumeration technique. This thesis is technically motivated and discussed. Keywords: Learning from examples, learning by search, identification by enumeration, enumeration techniques. The role of search, for learning from examples, is examined in a theoretical setting. Gold's seminal paper [Gol67] on inductive inference introduced a simple but powerful learning technique which became known as identification-by-enumeration. Identification-by-enumeration begins with an infi...
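Gold's identification-by-enumeration, as described in this abstract, can be sketched in a few lines: fix an enumeration of hypotheses, and after each new example conjecture the first hypothesis consistent with all data seen so far. The function name, the candidate list and the data stream below are illustrative, not from the paper.

```python
def identify_by_enumeration(hypotheses, examples):
    """hypotheses: ordered list of callables h(x) -> y (the enumeration).
    examples: iterable of (x, y) pairs drawn from the target function.
    Yields the index of the current conjecture after each example."""
    data = []
    for x, y in examples:
        data.append((x, y))
        # conjecture the least-indexed hypothesis consistent with the data
        for i, h in enumerate(hypotheses):
            if all(h(a) == b for a, b in data):
                yield i
                break

# toy target: the successor function, hidden among a few candidates
candidates = [lambda x: 0, lambda x: x, lambda x: x + 1, lambda x: 2 * x]
stream = [(0, 1), (1, 2), (5, 6)]
print(list(identify_by_enumeration(candidates, stream)))  # [2, 2, 2]
```

On this stream the learner converges to index 2 immediately and never changes its mind, illustrating identification in the limit for an enumerable class.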
Control Structures in Hypothesis Spaces: The Influence on Learning
Abstract

Cited by 4 (1 self)
In any learnability setting, hypotheses are conjectured from some hypothesis space. Studied herein are the effects on learnability of the presence or absence of certain control structures in the hypothesis space. First presented are control structure characterizations of some rather specific but illustrative learnability results. Then presented are the main theorems. Each of these characterizes the invariance of a learning class over hypothesis space V (and a little more about V) as: V has suitable instances of all denotational control structures. In any learnability setting, hypotheses are conjectured from some hypothesis space, for example, in [OSW86] from general purpose programming systems, in [ZL95, Wie78] from subrecursive systems, and in [Qui92] from very simple classes of classificatory decision trees. Much is known theoretically about the restrictions on learning power resulting from restricted hypothesis spaces [ZL95]. In the present paper we begin to...
Learning by Erasing
In Proc. 7th Int. Workshop on Algorithmic Learning Theory, Lecture Notes in Artificial Intelligence, 1996
Abstract

Cited by 4 (1 self)
Learning by erasing means the process of eliminating potential hypotheses from further consideration, thereby converging to the least hypothesis never eliminated; this hypothesis must be a solution to the actual learning problem. The present paper deals with learnability by erasing of indexed families L of languages from both positive data as well as positive and negative data. This refers to the following scenario. A family L of target languages and a hypothesis space for it are specified. The learner is fed eventually all positive examples (all labeled examples) of an unknown target language L chosen from L. The target language L is learned by erasing if the learner erases some set of possible hypotheses and the least hypothesis never erased correctly describes L. The capabilities of learning by erasing are investigated in dependence on the requirement of what sets of hypotheses have to be, or may be, erased, and in dependence on the choice of the hypothesis space. Class prese...
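The erasing scenario described above can be sketched for a tiny indexed family of finite languages: the learner erases every index whose language cannot contain the positive data seen so far, and conjectures the least non-erased index. The function name and the toy family are illustrative assumptions, not taken from the paper.

```python
def learn_by_erasing(family, text):
    """family: list of finite sets, an indexed family of languages.
    text: iterable of positive examples of the target language.
    Yields the least non-erased index after each example."""
    erased = set()
    seen = set()
    for w in text:
        seen.add(w)
        # erase every index whose language fails to contain the data so far
        for i, lang in enumerate(family):
            if not seen <= lang:
                erased.add(i)
        yield min(i for i in range(len(family)) if i not in erased)

family = [{1}, {1, 2}, {1, 2, 3}]
print(list(learn_by_erasing(family, [1, 2, 3])))  # [0, 1, 2]
```

Here the target {1, 2, 3} is learned once the smaller languages have been erased; the least index never erased (2) correctly describes the target, matching the convergence criterion in the abstract.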