Results 1–10 of 20
The Power of Vacillation in Language Learning
, 1992
Abstract

Cited by 46 (13 self)
Some extensions are considered of Gold's influential model of language learning by machine from positive data. Studied are criteria of successful learning featuring convergence in the limit to vacillation between several alternative correct grammars. The main theorem of this paper is that there are classes of languages that can be learned if convergence in the limit to up to (n+1) exactly correct grammars is allowed but which cannot be learned if convergence in the limit is to no more than n grammars, where the no more than n grammars can each make finitely many mistakes. This contrasts sharply with results of Barzdin and Podnieks and, later, Case and Smith, for learnability from both positive and negative data. A subset principle from a 1980 paper of Angluin is extended to the vacillatory and other criteria of this paper. This principle provides a necessary condition for circumventing overgeneralization in learning from positive data. It is applied to prove another theorem to the eff...
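Gold's model of identification in the limit from positive data, which this abstract extends, can be illustrated with a minimal toy sketch (our own illustration, not from the paper). The class of languages, the learner, and the grammar encoding below are all assumptions chosen for simplicity: languages L_n = {0, 1, ..., n}, with grammar "n" conjectured for the largest element seen so far.

```python
# Toy sketch (illustrative, not from the paper): Gold-style learning in
# the limit for the class {L_n : n >= 0}, where L_n = {0, 1, ..., n}.
# On any text (enumeration of the language, repetitions allowed) the
# learner changes its mind finitely often, then converges to the single
# correct grammar n.

def learner(text):
    """Yield a conjecture after each datum of the positive-data text."""
    best = None
    for x in text:
        if best is None or x > best:
            best = x          # mind change: adopt a larger language
        yield best            # current conjecture: grammar for L_best

# A text for L_3 = {0, 1, 2, 3}:
text_for_L3 = [1, 0, 3, 2, 3, 1, 0, 3]
conjectures = list(learner(text_for_L3))
print(conjectures)  # converges to grammar 3 once 3 has appeared
```

Vacillatory criteria, as studied in the paper, relax the final stage: instead of settling on one grammar, the learner may oscillate forever among a bounded finite set of correct grammars.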
Infinitary Self Reference in Learning Theory
, 1994
Abstract

Cited by 19 (6 self)
Kleene's Second Recursion Theorem provides a means for transforming any program p into a program e(p) which first creates a quiescent self copy and then runs p on that self copy together with any externally given input. e(p), in effect, has complete (low level) self knowledge, and p represents how e(p) uses its self knowledge (and its knowledge of the external world). Infinite regress is not required since e(p) creates its self copy outside of itself. One mechanism to achieve this creation is a self replication trick isomorphic to that employed by single-celled organisms. Another is for e(p) to look in a mirror to see which program it is. In 1974 the author published an infinitary generalization of Kleene's theorem which he called the Operator Recursion Theorem. It provides a means for obtaining an (algorithmically) growing collection of programs which, in effect, share a common (also growing) mirror from which they can obtain complete low level models of themselves and the other prog...
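The self-replication trick the abstract mentions can be sketched concretely with the classic quine construction: a template, formatted with a quoted copy of itself, yields the complete source text, so the "self copy" is built outside the program with no infinite regress. The sketch below is our own illustration under that reading; the toy p (which merely reports the length of the self copy) is an assumption chosen for simplicity.

```python
# Illustrative sketch of e(p): build a quiescent self copy via the quine
# trick, then run p on that self copy together with external input.

# p: how the program uses its self knowledge (toy choice: report size).
def p(self_source, x):
    return f"my source is {len(self_source)} chars; input was {x!r}"

def quine_source():
    # Template formatted with a quoted copy of itself reproduces the
    # whole source text -- the self copy, created with no regress.
    t = 't = %r\nsource = t %% t'
    return t % t

src = quine_source()
env = {}
exec(src, env)               # running the constructed text...
assert env['source'] == src  # ...rebuilds exactly itself

print(p(src, 42))            # e(p) in miniature: p applied to the self copy
```

The Operator Recursion Theorem generalizes this to an algorithmically growing family of programs sharing a common, growing "mirror" of one another's code.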
Synthesizing Enumeration Techniques For Language Learning
 In Proceedings of the Ninth Annual Conference on Computational Learning Theory
, 1996
Abstract

Cited by 16 (7 self)
In this paper we assume, without loss of generality, that for all σ ⊆ τ, [M(σ) ≠ ?] ⇒ [M(τ) ≠ ?].
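The quoted normalization says that once the learning machine M has output a real conjecture (not "?") on a data sequence σ, it also outputs a real conjecture on every extension τ. The wrapper below is a hedged sketch of why this is without loss of generality, under our assumption that σ ⊆ τ means σ is a prefix of τ and that "?" marks "no conjecture yet": if M would relapse to "?", repeat its last real conjecture instead.

```python
# Illustrative sketch (not from the paper): normalize a learner M so
# that sigma <= tau implies (M(sigma) != '?' implies M(tau) != '?').

def monotone(M):
    def M_prime(sequence):
        last = '?'
        # evaluate M on every prefix, carrying the last real conjecture
        for i in range(len(sequence) + 1):
            out = M(tuple(sequence[:i]))
            if out != '?':
                last = out
        return last
    return M_prime

# A toy M that flip-flops back to '?' on even-length sequences:
def M(seq):
    return '?' if len(seq) % 2 == 0 else f'g{len(seq)}'

M2 = monotone(M)
print(M2([1]), M2([1, 2]))  # no relapse to '?' after the first conjecture
```

Since M' only ever repeats conjectures M itself made, it converges (in the limit) exactly when M does, which is why the assumption costs no learning power.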
Complexity issues for vacillatory function identification
 Information and Computation
, 1995
Abstract

Cited by 12 (10 self)
It was previously shown by Barzdin and Podnieks that one does not increase the power of learning programs for functions by allowing learning algorithms to converge to a finite set of correct programs instead of requiring them to converge to a single correct program. In this paper we define some new, subtle, but natural concepts of mind change complexity for function learning and show that, if one bounds this complexity for learning algorithms, then, by contrast with Barzdin and Podnieks' result, there are interesting and sometimes complicated tradeoffs between these complexity bounds, bounds on the number of final correct programs, and learning power. CR Classification Number: I.2.6 (Learning – Induction).
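The simplest measure in this family, counting how often a learner's conjecture stream changes value, can be sketched as follows (our own illustration; the paper's actual complexity notions are subtler):

```python
# Illustrative sketch: count the mind changes in a conjecture stream,
# treating '?' as "no conjecture yet".

def mind_changes(conjectures):
    changes = 0
    prev = None
    for c in conjectures:
        if c == '?':
            continue
        if prev is not None and c != prev:
            changes += 1   # the learner abandoned its previous program
        prev = c
    return changes

print(mind_changes(['?', 'p1', 'p1', 'p2', 'p2', 'p2']))  # 1
```

Bounding this count for a learning algorithm is the kind of restriction under which, per the abstract, tradeoffs with the number of final correct programs reappear.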
On learning limiting programs
 International Journal of Foundations of Computer Science
, 1992
Abstract

Cited by 11 (5 self)
Machine learning of limit programs (i.e., programs allowed finitely many mind changes about their legitimate outputs) for computable functions is studied. Learning of iterated limit programs is also studied. To partially motivate these studies, it is shown that, in some cases, interesting global properties of computable functions can be proved from suitable (n + 1)-iterated limit programs for them which cannot be proved from any n-iterated limit programs for them. It is shown that learning power is increased when (n + 1)-iterated limit programs rather than n-iterated limit programs are to be learned. Many tradeoff results are obtained regarding learning power, number (possibly zero) of limits taken, program size constraints and information, and number of errors tolerated in final programs learned.
Robust Learning Aided by Context
 In Proceedings of the Eleventh Annual Conference on Computational Learning Theory
, 1998
Abstract

Cited by 10 (6 self)
Empirical studies of multitask learning provide some evidence that the performance of a learning system on its intended targets improves by presenting to the learning system related tasks, also called contexts, as additional input. Angluin, Gasarch, and Smith, as well as Kinber, Smith, Velauthapillai, and Wiehagen have provided mathematical justification for this phenomenon in the inductive inference framework. However, their proofs rely heavily on self-referential coding tricks, that is, they directly code the solution of the learning problem into the context. Fulk has shown that for the Ex and Bc-anomaly hierarchies, such results, which rely on self-referential coding tricks, may not hold robustly. In this work we analyze robust versions of learning aided by context and show that, in contrast to Fulk's result above, the robust versions of
This work was carried out while J. Case, S. Jain, M. Ott, and F. Stephan were visiting the School of Computer Science and Engineering at ...
Looking for an analogue of Rice's Theorem in circuit complexity theory
 Mathematical Logic Quarterly
, 1989
Abstract

Cited by 7 (1 self)
Abstract. Rice’s Theorem says that every nontrivial semantic property of programs is undecidable. In this spirit we show the following: Every nontrivial absolute (gap, relative) counting property of circuits is UP-hard with respect to polynomial-time Turing reductions. For generators [31] we show a perfect analogue of Rice’s Theorem. Mathematics Subject Classification: 03D15, 68Q15. Keywords: Rice’s Theorem, Counting problems, Promise classes, UP-hard, NP-hard, generators.
Spatial/Kinematic Domain and Lattice Computers
 JOURNAL OF EXPERIMENTAL AND THEORETICAL ARTIFICIAL INTELLIGENCE
, 1994
Abstract

Cited by 4 (4 self)
An approach to analogical representation for objects and their motions in space is proposed. This approach involves lattice computer architectures and associated algorithms and is shown to be abstracted from the behavior of human beings mentally solving spatial/kinematic puzzles. There is also discussion of where in this approach the modeling of human cognition leaves off and the engineering begins. The possible relevance of the approach to a number of issues in Artificial Intelligence is discussed. These issues include efficiency of sentential versus analogical representations, common sense reasoning, update propagation, learning performance tasks, diagrammatic representations, spatial reasoning, metaphor, human categorization, and pattern recognition. Lastly there is a discussion of the somewhat related approach involving cellular automata applied to computational physics.
Control Structures in Hypothesis Spaces: The Influence on Learning
Abstract

Cited by 4 (1 self)
In any learnability setting, hypotheses are conjectured from some hypothesis space. Studied herein are the effects on learnability of the presence or absence of certain control structures in the hypothesis space. First presented are control structure characterizations of some rather specific but illustrative learnability results. Then presented are the main theorems. Each of these characterizes the invariance of a learning class over hypothesis space V (and a little more about V) as: V has suitable instances of all denotational control structures.
1 Introduction
In any learnability setting, hypotheses are conjectured from some hypothesis space, for example, in [OSW86] from general purpose programming systems, in [ZL95, Wie78] from subrecursive systems, and in [Qui92] from very simple classes of classificatory decision trees. Much is known theoretically about the restrictions on learning power resulting from restricted hypothesis spaces [ZL95]. In the present paper we begin to...
Adventures in time and space
 33rd ACM Symposium on Principles of Programming Languages
, 2006
Abstract

Cited by 4 (4 self)
Abstract. This paper investigates what is essentially a call-by-value version of PCF under a complexity-theoretically motivated type system. The programming formalism, ATR, has its first-order programs characterize the polynomial-time computable functions, and its second-order programs characterize the type-2 basic feasible functionals of Mehlhorn and of Cook and Urquhart. (The ATR types are confined to levels 0, 1, and 2.) The type system comes in two parts, one that primarily restricts the sizes of values of expressions and a second that primarily restricts the time required to evaluate expressions. The size-restricted part is motivated by Bellantoni and Cook’s and Leivant’s implicit characterizations of polynomial time. The time-restricting part is an affine version of Barber and Plotkin’s DILL. Two semantics are constructed for ATR. The first is a pruning of the naïve denotational semantics for ATR. This pruning removes certain functions that cause otherwise feasible forms of recursion to go wrong. The second semantics is a model for ATR’s time complexity relative to a certain abstract machine. This model provides a setting for complexity recurrences arising from ATR recursions, the solutions of which yield second-order polynomial time bounds. The time-complexity semantics is also shown to be sound relative to the costs of interpretation on the abstract machine.