Results 1–10 of 35
Two Experiments on Learning Probabilistic Dependency Grammars from Corpora
 Working Notes of the Workshop Statistically-Based NLP Techniques
, 1992
Abstract

Cited by 96 (5 self)
Introduction We present a scheme for learning probabilistic dependency grammars from positive training examples plus constraints on rules. In particular we present the results of two experiments. The first, in which the constraints were minimal, was unsuccessful. The second, with significant constraints, was successful within the bounds of the task we had set. We will explicate dependency grammars in Section 2. For the moment we simply note that they are a very restricted class of grammars which do not fit exactly into the Chomsky hierarchy, but whose appearance is most like the context-free grammars. We assume that the goal of learning a context-free grammar needs no justification. The problem has attracted a fair amount of attention ([1,4] are good surveys) but no good solutions have been found. Our choice of learning from only positive training examples needs only a little more justification. Obviously, if it is possible, a scheme which only uses positive training examples …
Inductive Inference, DFAs and Computational Complexity
 2nd Int. Workshop on Analogical and Inductive Inference (AII)
, 1989
Abstract

Cited by 78 (1 self)
This paper surveys recent results concerning the inference of deterministic finite automata (DFAs). The results discussed determine the extent to which DFAs can be feasibly inferred, and highlight a number of interesting approaches in computational learning theory.
Diversity-based Inference of Finite Automata
 Journal of the ACM
, 1994
Abstract

Cited by 73 (1 self)
Abstract. We present new procedures for inferring the structure of a finite-state automaton (FSA) from its input/output behavior, using access to the automaton to perform experiments. Our procedures use a new representation for finite automata, based on the notion of equivalence between tests. We call the number of such equivalence classes the diversity of the automaton; the diversity may be as small as the logarithm of the number of states of the automaton. For the special class of permutation automata, we describe an inference procedure that runs in time polynomial in the diversity and log(1/δ), where δ is a given upper bound on the probability that our procedure returns an incorrect result. (Since our procedure uses randomization to perform experiments, there is a certain controllable chance that it will return an erroneous result.) We also discuss techniques for handling more general automata. We present evidence for the practical efficiency of our approach. For example, our procedure is able to infer the structure of an automaton based on Rubik's Cube (which has approximately 10^19 states) in about 2 minutes on a DEC MicroVax. This automaton is many orders of magnitude larger than possible with previous techniques, which would require time proportional at least to the number of global states. (Note that in this example, only a small fraction (10^-14) of the global …
Inductive Synthesis of Recursive Logic Programs
, 1997
Abstract

Cited by 34 (8 self)
The inductive synthesis of recursive logic programs from incomplete information, such as input/output examples, is a challenging subfield both of ILP (Inductive Logic Programming) and of the synthesis (in general) of logic programs from formal specifications. We first overview past and present achievements, focusing on the techniques that were designed specifically for the inductive synthesis of recursive logic programs, but also discussing a few general ILP techniques that can induce non-recursive hypotheses. Then we analyse the prospects of these techniques for this task, investigating their applicability to software engineering as well as to knowledge acquisition and discovery.
New Error Bounds for Solomonoff Prediction
 Journal of Computer and System Sciences
, 1999
Abstract

Cited by 23 (16 self)
Several new relations between universal Solomonoff sequence prediction, informed prediction, and general probabilistic prediction schemes will be proved. Among others, they show that the number of errors in Solomonoff prediction is finite for computable prior probability if it is finite in the informed case, where the prior is known. Deterministic variants will also be studied. The most interesting result is that the deterministic variant of Solomonoff prediction is optimal compared to any other probabilistic or deterministic prediction scheme, apart from additive square-root corrections only. This makes it well suited even for difficult prediction problems, where it does not suffice for the number of errors to be minimal to within some factor greater than one. Solomonoff's original bound and the ones presented here complement each other in a useful way.
Characterizations of Monotonic and Dual Monotonic Language Learning
 Information and Computation
, 1995
Abstract

Cited by 20 (7 self)
The present paper deals with monotonic and dual monotonic language learning from positive as well as from positive and negative examples. The three notions of monotonicity reflect different formalizations of the requirement that the learner has to produce better and better generalizations when fed more and more data on the concept to be learned.
On the Use of Inductive Reasoning in Program Synthesis: Prejudice and Prospects
 In L. Fribourg and F. Turini (eds), Joint Proc. of META'94 and LOPSTR'94
, 1994
Abstract

Cited by 13 (6 self)
In this position paper, we give a critical analysis of the deductive and inductive approaches to program synthesis, and of the current research in these fields. From the shortcomings of these approaches and works, we identify future research directions for these fields, as well as a need for cooperation and cross-fertilization between them.
Unsupervised Lexical Learning as Inductive Inference
, 2000
Abstract

Cited by 10 (5 self)
To learn a language, the learners must first learn its words, the essential building blocks for utterances. The difficulty in learning words lies in the unavailability of explicit word boundaries in speech input. The learners have to infer lexical items with some innately endowed learning mechanism(s) for regularity detection: regularities in the speech normally indicate word patterns. With respect to Zipf's least-effort principle and Chomsky's thoughts on the minimality of grammar for human language, we hypothesise a cognitive mechanism underlying language learning that seeks the least-effort representation for input data. Accordingly, lexical learning is to infer the minimal-cost representation for the input under the constraint of permissible representation for lexical items. The main theme of this thesis is to examine how far this learning mechanism can go in unsupervised lexical learning from real language data without any predefined (e.g., prosodic and phonotactic) cues, but entirely resting on statistical induction of structural patterns for the most economic representation for the data. We first review …
Inductive Characterisation of Database Relations
, 1990
Abstract

Cited by 10 (8 self)
The general claims of this paper are twofold: there are challenging problems for Machine Learning in the field of Databases, and the study of these problems leads to a deeper understanding of Machine Learning. To support the first claim, we consider the problem of characterising a database relation in terms of high-level properties, i.e. attribute dependencies. The problem is reformulated to reveal its inductive nature. To support the second claim, we show that the problems presented here do not fit well into the current framework for inductive learning, and we discuss the outline of a more general theory of inductive learning. KEYWORDS: Relational model, attribute dependencies, inductive learning, theoretical analysis.
Training Sequences
Abstract

Cited by 8 (1 self)
This paper initiates a study in which it is demonstrated that certain concepts (represented by functions) can be learned, but only in the event that certain relevant subconcepts (also represented by functions) have been previously learned. In other words, the Soar project presents empirical evidence that learning how to learn is viable for computers, and this paper proves that doing so is the only way possible for computers to make certain inferences.