Results 11–20 of 113
Synthesizing Enumeration Techniques For Language Learning
 In Proceedings of the Ninth Annual Conference on Computational Learning Theory
, 1996
Abstract

Cited by 16 (7 self)
this paper we assume, without loss of generality, that for all σ ⊆ τ, [M(σ) ≠ ?] ⇒ [M(τ) ≠ ?].
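The assumption quoted above says that once a learner M conjectures on a segment σ, it conjectures on every extension τ; any M can be modified to satisfy this by repeating its most recent conjecture whenever it would otherwise output '?'. A minimal enumeration-learner sketch of that transformation (the toy indexed family and all names are illustrative, not taken from the paper):

```python
# Sketch of the without-loss-of-generality assumption for an enumeration
# learner. The indexed family L_i = { x : x mod 10 == i } and all names
# here are illustrative, not taken from the paper.

def member(i, x):
    return x % 10 == i

def M(sigma):
    """Least index consistent with the data seen so far; None plays
    the role of the '?' (no conjecture) output."""
    for i in range(10):
        if all(member(i, x) for x in sigma):
            return i
    return None

def M_wlog(sigma):
    """Patched learner: when M would output '?', repeat the conjecture M
    made on the longest prefix where it did conjecture, so that
    [M_wlog(sigma) != ?] implies [M_wlog(tau) != ?] for sigma <= tau."""
    for n in range(len(sigma), 0, -1):
        g = M(sigma[:n])
        if g is not None:
            return g
    return None

print(M([3, 4]))       # None: no single residue class fits both points
print(M_wlog([3, 4]))  # 3: the earlier conjecture on the prefix [3] is retained
```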
Set-Driven and Rearrangement-Independent Learning of Recursive Languages
 MATHEMATICAL SYSTEMS THEORY
, 1996
Abstract

Cited by 16 (13 self)
The present paper deals with the learnability of indexed families of uniformly recursive languages from positive data under various postulates of naturalness. In particular, we consider set-driven and rearrangement-independent learners, i.e., learning devices whose output exclusively depends on the range, and on the range and length, of their input, respectively. The impact of set-drivenness and rearrangement-independence on the learning power of such devices is studied in dependence on the hypothesis space the learners may use. Furthermore, we consider the influence of set-drivenness and rearrangement-independence for learning devices that realize the subset principle to different extents. Thereby we distinguish between strong-monotonic, monotonic, and weak-monotonic or conservative learning. The results obtained are twofold. First, rearrangement-independent learning does not constitute a restriction except in the case of monotonic learning. Second, we prove that for all but on...
COMPUTATION WITHOUT REPRESENTATION
, 2006
Abstract

Cited by 15 (5 self)
The received view is that computational states are individuated at least in part by their semantic properties. I offer an alternative, according to which computational states are individuated by their functional properties. Functional properties are specified by a mechanistic explanation without appealing to any semantic properties. The primary purpose of this paper is to formulate the alternative view of computational individuation, point out that it supports a robust notion of computational explanation, and defend it on the grounds of how computational states are individuated within computability theory and computer science. A secondary purpose is to show that existing arguments for the semantic view are defective.
On the Impact of Forgetting on Learning Machines
 Journal of the ACM
, 1993
Abstract

Cited by 15 (5 self)
this paper contributes toward the goal of understanding how a computer can be programmed to learn by isolating features of incremental learning algorithms that theoretically enhance their learning potential. In particular, we examine the effects of imposing a limit on the amount of information that a learning algorithm can hold in its memory as it attempts to ... (This work was facilitated by an international agreement under NSF Grant 9119540.)
Elementary formal systems, intrinsic complexity, and procrastination
 Information and Computation
, 1997
Abstract

Cited by 14 (6 self)
Recently, rich subclasses of elementary formal systems (EFS) have been shown to be identifiable in the limit from only positive data. Examples of these classes are Angluin’s pattern languages, unions of pattern languages by Wright and Shinohara, and classes of languages definable by length-bounded elementary formal systems studied by Shinohara. The present paper employs two distinct bodies of abstract studies in the inductive inference literature to analyze the learnability of these concrete classes. The first approach, introduced by Freivalds and Smith, uses constructive ordinals to bound the number of mind changes. ω denotes the first limit ordinal. An ordinal mind change bound of ω means that identification can be carried out by a learner that, after examining some element(s) of the language, announces an upper bound on the number of mind changes it will make before converging; a bound of ω · 2 means that the learner reserves the right to revise this upper bound once; a bound of ω · 3 means the learner reserves the right to revise this upper bound twice, and so on. A bound of ω^2 means that identification can be carried out by a learner that announces an upper bound on the number of times it may revise its conjectured upper bound on the number of mind changes. It is shown in the present paper that the ordinal mind change complexity for identification of languages formed by unions of up to n pattern languages is ω^n. It is ...
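Angluin's pattern languages, mentioned in the abstract, have a direct operational reading: a string belongs to L(p) exactly when every variable of p can be consistently replaced by some nonempty string. A brute-force membership sketch (the token encoding, with ints as variables and strings as constants, is mine, not from the paper):

```python
def matches(pattern, s):
    """Brute-force test of whether string s is in the pattern language of
    `pattern`, given as a tuple of tokens: str tokens are constants, int
    tokens are variables. Each variable must be replaced by a NONEMPTY
    string, and all occurrences of a variable get the same value."""
    def go(pi, si, env):
        if pi == len(pattern):
            return si == len(s)            # pattern and string used up together
        tok = pattern[pi]
        if isinstance(tok, str):           # constant: must match literally
            return s.startswith(tok, si) and go(pi + 1, si + len(tok), env)
        if tok in env:                     # bound variable: reuse its value
            v = env[tok]
            return s.startswith(v, si) and go(pi + 1, si + len(v), env)
        for end in range(si + 1, len(s) + 1):  # free variable: try each value
            env[tok] = s[si:end]
            if go(pi + 1, end, env):
                return True
            del env[tok]
        return False
    return go(0, 0, {})

p = (0, "ab", 0)            # the pattern "x ab x"
print(matches(p, "cabc"))   # True:  x = "c"
print(matches(p, "cabd"))   # False: the two occurrences of x would differ
```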
Complexity issues for vacillatory function identification
 Information and Computation
, 1995
Abstract

Cited by 12 (10 self)
It was previously shown by Barzdin and Podnieks that one does not increase the power of learning programs for functions by allowing learning algorithms to converge to a finite set of correct programs instead of requiring them to converge to a single correct program. In this paper we define some new, subtle, but natural concepts of mind change complexity for function learning and show that, if one bounds this complexity for learning algorithms, then, by contrast with Barzdin and Podnieks' result, there are interesting and sometimes complicated tradeoffs between these complexity bounds, bounds on the number of final correct programs, and learning power. CR Classification Number: I.2.6 (Learning – Induction).
On learning limiting programs
 International Journal of Foundations of Computer Science
, 1992
Abstract

Cited by 11 (5 self)
Machine learning of limit programs (i.e., programs allowed finitely many mind changes about their legitimate outputs) for computable functions is studied. Learning of iterated limit programs is also studied. To partially motivate these studies, it is shown that, in some cases, interesting global properties of computable functions can be proved from suitable (n + 1)-iterated limit programs for them which cannot be proved from any n-iterated limit programs for them. It is shown that learning power is increased when (n + 1)-iterated limit programs rather than n-iterated limit programs are to be learned. Many tradeoff results are obtained regarding learning power, number (possibly zero) of limits taken, program size constraints and information, and number of errors tolerated in final programs learned.
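A limit program in the sense above can be pictured as a total guessing function g(x, t) whose value changes only finitely often as t grows and stabilizes on the target f(x). A toy sketch (the Collatz-based predicate is purely illustrative and not from the paper):

```python
# Toy limit program: g(x, t) is total and computable, and for each x the
# sequence g(x, 0), g(x, 1), ... changes value only finitely often before
# stabilizing. The predicate used here is illustrative, not from the paper.

def reaches_one_within(x, t):
    """Does the Collatz trajectory of x hit 1 within t steps?"""
    for _ in range(t):
        if x == 1:
            return True
        x = 3 * x + 1 if x % 2 else x // 2
    return x == 1

def g(x, t):
    return 1 if reaches_one_within(x, t) else 0

# Successive guesses for x = 27: all 0 at first, then permanently 1 once
# the trajectory is simulated long enough to witness convergence.
approximations = [g(27, t) for t in range(0, 130, 10)]
print(approximations)
```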
The 0-1 Knapsack Problem: An Introductory Survey
, 1996
Abstract

Cited by 10 (0 self)
The 0-1 Knapsack problem has been studied extensively during the past four decades. The reason is that it appears in many real domains with practical importance. Despite its NP-completeness, many algorithms have been proposed that exhibit impressive behavior in the average case. This paper introduces the problem and presents a proof that it belongs to the NP-complete class, as well as a list of directly related problems. An overview of previous research and the best-known algorithms for solving it is presented. The paper concludes with a reference to the practical situations where the problem arises.
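Among the exact methods such a survey typically covers is the pseudo-polynomial dynamic program, which runs in O(n · capacity) time despite the problem's NP-completeness. A minimal sketch (this particular implementation is mine, not taken from the survey):

```python
def knapsack_01(values, weights, capacity):
    """Classic O(n * capacity) dynamic program for 0-1 knapsack:
    dp[c] = best total value achievable with capacity c using the
    items processed so far."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

print(knapsack_01([60, 100, 120], [10, 20, 30], 50))  # 220
```

The downward capacity loop is what distinguishes the 0-1 variant from the unbounded one; iterating upward would allow an item to be packed repeatedly.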
Formal Specification and Verification of Asynchronously Communicating Web Services
, 2004
Learning in the presence of inaccurate information
 In Proceedings of the 2nd Annual ACM Conference on Computational Learning Theory
, 1989
Abstract

Cited by 9 (3 self)
The present paper considers the effects of introducing inaccuracies in a learner’s environment in Gold’s learning model of identification in the limit. Three kinds of inaccuracies are considered: the presence of spurious data is modeled as learning from a noisy environment, missing data is modeled as learning from an incomplete environment, and the presence of a mixture of both spurious and missing data is modeled as learning from an imperfect environment. Two learning domains are considered, namely, identification of programs from graphs of computable functions and identification of grammars from positive data about recursively enumerable languages. Many hierarchies and tradeoffs resulting from the interplay between the number of errors allowed in the final hypotheses, the number of inaccuracies in the data, the types of inaccuracies, and the type of success criteria are derived. An interesting result is that in the context of function learning, incomplete data is strictly worse for learning than noisy data.