Results 1–10 of 72
The Power of Vacillation in Language Learning
, 1992
Cited by 44 (11 self)
Some extensions are considered of Gold's influential model of language learning by machine from positive data. Studied are criteria of successful learning featuring convergence in the limit to vacillation between several alternative correct grammars. The main theorem of this paper is that there are classes of languages that can be learned if convergence in the limit to up to (n+1) exactly correct grammars is allowed, but which cannot be learned if convergence in the limit is to no more than n grammars, where each of the no more than n grammars may make finitely many mistakes. This contrasts sharply with results of Barzdin and Podnieks and, later, Case and Smith, for learnability from both positive and negative data. A subset principle from a 1980 paper of Angluin is extended to the vacillatory and other criteria of this paper. This principle provides a necessary condition for circumventing overgeneralization in learning from positive data. It is applied to prove another theorem to the eff...
The Structure of Complete Degrees
, 1990
Cited by 30 (3 self)
This paper surveys investigations into how strong these commonalities are. More concretely, we are concerned with: What do NP-complete sets look like? To what extent are the properties of particular NP-complete sets, e.g., SAT, shared by all NP-complete sets? If there are structural differences between NP-complete sets, what are they and what explains the differences? We make these questions, and the analogous questions for other complexity classes, more precise below. We need first to formalize NP-completeness. There are a number of competing definitions of NP-completeness. (See [Har78a, p. 7] for a discussion.) The most common, and the one we use, is based on the notion of m-reduction, also known as polynomial-time many-one reduction and Karp reduction. A set A is m-reducible to B if and only if there is a (total) polynomial-time computable function f such that for all x, x ∈ A ⟺ f(x) ∈ B. (1)
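The m-reduction just defined can be made concrete with a toy sketch. The sets EVEN and DIV4 and the reduction f below are hypothetical examples chosen only to illustrate the definition; they are not sets discussed in the paper.

```python
# A minimal sketch of an m-reduction (polynomial-time many-one reduction).
# Hypothetical example: EVEN m-reduces to DIV4 via f(x) = 2x.

def in_EVEN(x: int) -> bool:
    """A = {x : x is even}."""
    return x % 2 == 0

def in_DIV4(x: int) -> bool:
    """B = {x : x is divisible by 4}."""
    return x % 4 == 0

def f(x: int) -> int:
    """A (total) polynomial-time computable function witnessing the reduction."""
    return 2 * x

# The defining property x in A <=> f(x) in B holds for every input:
assert all(in_EVEN(x) == in_DIV4(f(x)) for x in range(-100, 100))
```

The point of the definition is that any decision procedure for B immediately yields one for A: to decide membership in A, compute f and ask B.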
On the Intrinsic Complexity of Learning
 Information and Computation
, 1995
Cited by 25 (6 self)
A new view of learning is presented. The basis of this view is a natural notion of reduction. We prove completeness and relative difficulty results. An infinite hierarchy of intrinsically more and more difficult to learn concepts is presented. Our results indicate that the complexity notion captured by our new notion of reduction differs dramatically from the traditional studies of the complexity of the algorithms performing learning tasks. Traditional studies of inductive inference have focused on illuminating various strata of learnability based on varying the definition of learnability. The research following Valiant's PAC model [Val84] and Angluin's teacher/learner model [Ang88] paid very careful attention to calculating the complexity of the learning algorithm. We present a new view of learning, based on the notion of reduction, that captures a different perspective on learning complexity than all prior studies. Based on our preliminary reports, Jain...
Reasoning about equations and functional dependencies on complex objects
 IEEE Transactions on Data and Knowledge Engineering
, 1994
Cited by 20 (4 self)
Virtually all semantic or object-oriented data models assume objects have an identity separate from any of their parts, and allow users to define complex object types in which part values may be any other objects. This often results in a choice of query language in which a user can express navigating from one object to another by following a property value path. In this paper, we consider a constraint language in which one may express equations and functional dependencies over complex object types. The language is novel in the sense that component attributes of individual constraints may correspond to property paths. The kind of equations we consider are also important, since they are a natural abstraction of the class of conjunctive queries for query languages which support property value navigation. In our introductory comments, we give an example of such a query, and outline two applications of the constraint theory to problems relating to a choice of access plan for the query. We present a sound and complete axiomatization of the constraint language for the case in which interpretations are permitted to be infinite, where interpretations themselves correspond to a form of directed labeled graph. Although the implication problem for our form of equational constraint alone over arbitrary schema is undecidable, we present decision procedures for the implication problem for both kinds
Hypercomputation and the Physical ChurchTuring Thesis
, 2003
Cited by 20 (0 self)
A version of the Church-Turing Thesis states that every effectively realizable physical system can be defined by Turing Machines ('Thesis P'); in this formulation the Thesis appears to be an empirical, more than a logico-mathematical, proposition. We review the main approaches to computation beyond Turing definability ('hypercomputation'): supertask, non-well-founded, analog, quantum, and retrocausal computation. These models depend on infinite computation, explicitly or implicitly, and appear physically implausible; moreover, even if infinite computation were realizable, the Halting Problem would not be affected. Therefore, Thesis P is not essentially different from the standard Church-Turing Thesis.
Infinitary Self Reference in Learning Theory
, 1994
Cited by 18 (6 self)
Kleene's Second Recursion Theorem provides a means for transforming any program p into a program e(p) which first creates a quiescent self-copy and then runs p on that self-copy together with any externally given input. e(p), in effect, has complete (low-level) self-knowledge, and p represents how e(p) uses its self-knowledge (and its knowledge of the external world). Infinite regress is not required, since e(p) creates its self-copy outside of itself. One mechanism to achieve this creation is a self-replication trick isomorphic to that employed by single-celled organisms. Another is for e(p) to look in a mirror to see which program it is. In 1974 the author published an infinitary generalization of Kleene's theorem which he called the Operator Recursion Theorem. It provides a means for obtaining an (algorithmically) growing collection of programs which, in effect, share a common (also growing) mirror from which they can obtain complete low-level models of themselves and the other prog...
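The self-replication trick behind e(p) can be sketched as a quine-style construction. The following Python rendition is hypothetical (the names make_e and run are ours, and real proofs work over an acceptable programming system rather than program texts); it only illustrates how a program can obtain a copy of its own source without infinite regress.

```python
def make_e(p):
    """Return the source text of a program e(p): when run, it first
    rebuilds a copy of its own source (the quine / self-replication
    trick), then hands that self-copy plus the external input x to p."""
    # The template duplicates its own data part, mirroring how a
    # single-celled organism copies itself.
    t = 't = %r\nsrc = t %% t\nresult = p(src, x)'
    return t % t

def run(e_src, p, x):
    """Execute a program text with behavior p and external input x."""
    env = {'p': p, 'x': x}
    exec(e_src, env)
    return env['result']

# A behavior that uses its complete low-level self-knowledge: report
# the length of its own source together with the input it was given.
def p(self_source, x):
    return (len(self_source), x)

e_src = make_e(p)
size, echoed = run(e_src, p, 42)
assert echoed == 42
# The self-copy handed to the behavior really is the program's own source:
assert run(e_src, lambda s, x: s == e_src, None)
```

Note that e_src never inspects itself while running; it reconstructs its source from a duplicated template, which is exactly why no infinite regress arises.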
The intrinsic complexity of language identification
 Journal of Computer and System Sciences
, 1996
Cited by 17 (7 self)
A new investigation of the complexity of language identification is undertaken using the notion of reduction from recursion theory and complexity theory. The approach, referred to as the intrinsic complexity of language identification, employs notions of ‘weak’ and ‘strong’ reduction between learnable classes of languages. The intrinsic complexity of several classes is considered, and the results agree with the intuitive difficulty of learning these classes. Several complete classes are shown for both reductions, and it is also established that the weak and strong reductions are distinct. An interesting result is that the self-referential class of Wiehagen, in which the minimal element of every language is a grammar for the language, and the class of pattern languages introduced by Angluin are equivalent in the strong sense. This study has been influenced by a similar treatment of function identification by Freivalds, Kinber, and Smith.
Synthesizing Enumeration Techniques For Language Learning
 In Proceedings of the Ninth Annual Conference on Computational Learning Theory
, 1996
Cited by 16 (7 self)
this paper we assume, without loss of generality, that for all σ ⊆ τ, [M(σ) ≠ ?] ⇒ [M(τ) ≠ ?].
The structure of intrinsic complexity of learning
 Journal of Symbolic Logic
, 1997
Cited by 15 (7 self)
Limiting identification of r.e. indexes for r.e. languages (from a presentation of elements of the language) and limiting identification of programs for computable functions (from a graph of the function) have served as models for investigating the boundaries of learnability. Recently, a new approach to the study of “intrinsic” complexity of identification in the limit has been proposed. This approach, instead of dealing with the resource requirements of the learning algorithm, uses the notion of reducibility from recursion theory to compare and to capture the intuitive difficulty of learning various classes of concepts. Freivalds, Kinber, and Smith have studied this approach for function identification, and Jain and Sharma have studied it for language identification. The present paper explores the structure of these reducibilities in the context of language identification. It is shown that there is an infinite hierarchy of language classes that represent learning problems of increasing difficulty. It is also shown that the language classes in this hierarchy are incomparable, under the reductions introduced, to the collection of pattern languages. Richness of the structure of intrinsic complexity is demonstrated by proving that any finite, acyclic, directed graph can be embedded in the reducibility structure. However, it is also established that this structure is not dense. The question of embedding any infinite, acyclic, directed graph is open.
SetDriven and RearrangementIndependent Learning of Recursive Languages
 MATHEMATICAL SYSTEMS THEORY
, 1996
Cited by 14 (13 self)
The present paper deals with the learnability of indexed families of uniformly recursive languages from positive data under various postulates of naturalness. In particular, we consider set-driven and rearrangement-independent learners, i.e., learning devices whose output exclusively depends on the range, or on the range and length, of their input, respectively. The impact of set-drivenness and rearrangement-independence on the learning power of such devices is studied in dependence on the hypothesis space the learners may use. Furthermore, we consider the influence of set-drivenness and rearrangement-independence for learning devices that realize the subset principle to different extents. Thereby we distinguish between strong-monotonic, monotonic, and weak-monotonic or conservative learning. The results obtained are twofold. First, rearrangement-independent learning does not constitute a restriction except in the case of monotonic learning. Second, we prove that for all but on...
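Set-drivenness can be illustrated with a toy sketch. The indexed family L_k = {0, ..., k} and the learner below are hypothetical choices of ours, not examples from the paper; they show a hypothesis that depends only on the range of the data, so reordering or repeating the input text cannot change the conjecture.

```python
# A minimal sketch of a set-driven learner for the hypothetical
# indexed family L_k = {0, 1, ..., k}, learned from positive data.

def set_driven_learner(data):
    """Conjecture an index k for L_k = {0, ..., k} from positive data.
    Because only set(data) is inspected, the output depends exclusively
    on the range of the input -- the defining property of set-drivenness."""
    content = set(data)          # the range of the input; order and
                                 # multiplicity are discarded
    if not content:
        return None              # no data yet: no conjecture
    return max(content)          # index of the smallest consistent L_k

# Two texts for L_3 with different orderings and repetitions yield the
# same hypothesis:
assert set_driven_learner([0, 1, 2, 3]) == 3
assert set_driven_learner([3, 3, 1, 0, 2, 1]) == 3
```

On any text for L_k, once k has appeared the learner's conjecture never changes, so it identifies every L_k in the limit while remaining set-driven.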