Results 1–8 of 8
Learning Stochastic Regular Grammars by Means of a State Merging Method
, 1994
Abstract

Cited by 137 (13 self)
We propose a new algorithm which allows for the identification of any stochastic deterministic regular language as well as the determination of the probabilities of the strings in the language. The algorithm builds the prefix tree acceptor from the sample set and systematically merges equivalent states. Experimentally, it proves very fast, and the time needed grows only linearly with the size of the sample set.
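The first step of this family of algorithms — building a prefix tree acceptor with frequency counts from the sample — can be sketched in a few lines. The names `PPTANode` and `build_ppta` below are illustrative, not the paper's own code, and the subsequent state-merging step is omitted:

```python
class PPTANode:
    """One state of a probabilistic prefix tree acceptor (PPTA)."""
    def __init__(self):
        self.children = {}      # symbol -> PPTANode
        self.arrivals = 0       # how many sample strings pass through this state
        self.terminations = 0   # how many sample strings end at this state

def build_ppta(sample):
    """Build the prefix tree acceptor from a sample of strings.

    Each state stores how often it was reached and how often a string
    terminated there; these counts yield the empirical string
    probabilities the abstract refers to.
    """
    root = PPTANode()
    for string in sample:
        node = root
        node.arrivals += 1
        for symbol in string:
            node = node.children.setdefault(symbol, PPTANode())
            node.arrivals += 1
        node.terminations += 1
    return root

# Empirical probability of a string stopping after the prefix "ab":
sample = ["ab", "ab", "abb", "a", "b"]
root = build_ppta(sample)
node = root.children["a"].children["b"]
print(node.terminations / node.arrivals)  # 2 of the 3 strings reaching "ab" end there
```

Merging then proceeds bottom-up over this tree, folding together states whose outgoing frequency distributions are statistically indistinguishable.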
Stochastic grammatical inference with Multinomial Tests
, 2002
Abstract

Cited by 12 (1 self)
We present a new statistical framework for stochastic grammatical inference algorithms based on a state merging strategy. We propose to use multinomial statistical tests to decide which states should be merged. This approach has three main advantages. First, since it is not based on asymptotic results, the small-sample case can be dealt with specifically. Second, all the probabilities associated with a state are included in a single test, so statistical evidence is accumulated. Third, a statistical score is associated with each possible merging operation and can be used for a best-first strategy. Improvement over a classical stochastic grammatical inference algorithm is shown on artificial data.
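The second and third advantages — pooling all of a state's outgoing probabilities into one test and scoring each candidate merge — can be illustrated with a Pearson chi-square homogeneity statistic over two states' symbol counts. This is a stand-in for the paper's multinomial test (the exact statistic may differ), and `merge_score` is a hypothetical name:

```python
def merge_score(counts_a, counts_b):
    """Chi-square homogeneity statistic comparing two states' outgoing
    symbol counts (dicts mapping symbol -> count).

    A small score means the two states look statistically equivalent;
    because every symbol contributes to one statistic, evidence is
    accumulated in a single test, and the score can rank candidate
    merges for a best-first strategy.
    """
    symbols = set(counts_a) | set(counts_b)
    n_a, n_b = sum(counts_a.values()), sum(counts_b.values())
    total = n_a + n_b
    stat = 0.0
    for s in symbols:
        o_a, o_b = counts_a.get(s, 0), counts_b.get(s, 0)
        pooled = (o_a + o_b) / total          # pooled symbol frequency
        e_a, e_b = n_a * pooled, n_b * pooled  # expected counts under equality
        if e_a:
            stat += (o_a - e_a) ** 2 / e_a
        if e_b:
            stat += (o_b - e_b) ** 2 / e_b
    return stat

# Identical proportions score 0; divergent ones score high.
print(merge_score({"a": 5, "b": 5}, {"a": 50, "b": 50}))  # 0.0
print(merge_score({"a": 9, "b": 1}, {"a": 1, "b": 9}))    # large
```

A best-first strategy would repeatedly merge the candidate pair with the lowest score, stopping once every remaining score exceeds a critical value.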
Unifying Consciousness with Explicit Knowledge
Abstract

Cited by 8 (1 self)
In this chapter we establish what it is for something to be implicit or explicit. The approach to implicit knowledge is taken from Dienes and Perner (1999), which relates the implicit–explicit distinction to knowledge representations. What it is for a representation to represent something implicitly or explicitly is defined, and those concepts are applied to knowledge. Next we show how maximally explicit knowledge is naturally associated with consciousness. We argue that each step in a hierarchy of explicitness is related to the unity of consciousness and that fully explicit knowledge should be associated with a sense of being part of a unified consciousness. New evidence indicating the extent of people's implicit or explicit knowledge in an implicit learning paradigm is then presented. This evidence indicates that people can be consistently correct in dealing with a context-free grammar while lacking any knowledge that they have knowledge.
Improvement of the State Merging Rule on Noisy Data in Probabilistic Grammatical Inference
 10th European Conference on Machine Learning. Number 2837 in LNAI, Springer-Verlag (2003) 169–180
, 2003
Abstract

Cited by 4 (1 self)
In this paper we study the influence of noise in probabilistic grammatical inference. We paradoxically bring out the idea that specialized automata deal better with noisy data than more general ones. We then propose to replace the statistical test of the Alergia algorithm with a more restrictive merging rule based on a test of proportion comparison.
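The contrast can be made concrete. Alergia's classical compatibility check uses a Hoeffding-style confidence bound; a two-sample z-test of proportions is one natural "proportion comparison" test in the spirit of the abstract (the paper's exact formulation may differ), and it rejects some merges the Hoeffding bound accepts, yielding the more specialized automata the authors advocate:

```python
import math

def alergia_compatible(f1, n1, f2, n2, alpha=0.05):
    """Alergia's Hoeffding-bound test: accept a merge when the two
    empirical proportions f1/n1 and f2/n2 fall within the bound."""
    bound = math.sqrt(0.5 * math.log(2.0 / alpha)) * (
        1 / math.sqrt(n1) + 1 / math.sqrt(n2))
    return abs(f1 / n1 - f2 / n2) < bound

def proportions_compatible(f1, n1, f2, n2, z_crit=1.96):
    """Two-sample z-test of proportion comparison: a more restrictive
    merging rule (illustrative stand-in for the paper's test)."""
    p = (f1 + f2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # standard error
    if se == 0:
        return True
    return abs(f1 / n1 - f2 / n2) / se < z_crit

# 30/100 vs. 45/100: Alergia would merge, the z-test would not.
print(alergia_compatible(30, 100, 45, 100))      # True
print(proportions_compatible(30, 100, 45, 100))  # False
```

Because the z-test's rejection region is tighter at these sample sizes, fewer states merge and the learned automaton stays closer to the prefix tree, which is exactly the specialization claimed to help on noisy data.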
Learning from implicit learning literature: Comment on
 Quarterly Journal of Experimental Psychology
, 2003
Abstract

Cited by 3 (1 self)
overlooked one of the most robust conclusions of the experimental studies on implicit learning conducted during the last decade — namely that participants usually learn things that are different from those that the experimenter expected them to learn. We show that the available literature on implicit learning strongly suggests that the improved performance in Shea et al.'s Experiments 1 and 2 (and similar earlier experiments, e.g., Wulf & Schmidt, 1997) was due to the exploitation of regularities in the target pattern different from those on which the post-experimental interview focused. This rules out the conclusions drawn from the failure of this interview to reveal any explicit knowledge about the task structure on the part of the participants. Similarly, because the information about the task structure provided to an instructed group of participants in Shea et al.'s Experiment 2 did not concern the regularities presumably exploited by the standard, so-called implicit, group, Shea et al.'s claim that explicit knowledge may be less effective than implicit knowledge is misleading. Most studies on implicit learning, the process whereby people learn without intent and without being able to clearly articulate what they learn, have focused on a few prototypical laboratory paradigms (for a brief overview, see Cleeremans, Destrebecqz, & Boyer, 1998). Needless
The rules versus similarity distinction
Abstract
The distinction between rules and similarity is central to our understanding of much of cognitive psychology. Two aspects of existing research have motivated the present work. First, in different areas of cognitive psychology we typically see different conceptions of rules and similarity; for example, rules in language appear to be of a different kind compared to rules in categorization. Second, rules processes are typically modeled as separate from similarity ones; for example, in a learning experiment, rules and similarity influences would be described on the basis of separate models. In the present article, I assume that the rules versus similarity distinction can be understood in the same way in learning, reasoning, categorization, and language, and that a unified model for rules and similarity is appropriate. A rules process is considered to be a similarity one in which only a single property, or a small subset of an object's properties, is involved. Hence, rules and overall similarity operations are extremes on a single continuum of similarity operations. It is argued that this viewpoint allows adequate coverage of theory and empirical findings in learning, reasoning, categorization, and language, and also a reassessment of the objectives in research on rules versus similarity.
Probabilistic Deterministic Infinite Automata
Abstract
We propose a novel Bayesian nonparametric approach to learning with probabilistic deterministic finite automata (PDFA). We define and develop a sampler for a PDFA with an infinite number of states, which we call the probabilistic deterministic infinite automata (PDIA). Posterior predictive inference in this model, given a finite training sequence, can be interpreted as averaging over multiple PDFAs of varying structure, where each PDFA is biased towards having few states. We suggest that our method for averaging over PDFAs is a novel approach to predictive distribution smoothing. We test PDIA inference both on PDFA structure learning and on natural language and DNA prediction tasks. The results suggest that the PDIA presents an attractive compromise between the computational cost of hidden Markov models and the storage requirements of hierarchically smoothed Markov models.
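The object being averaged over is easy to evaluate in isolation: a single PDFA deterministically tracks one state per prefix, so a sequence's likelihood is a simple product of per-state emission probabilities. The sketch below shows that evaluation only; the PDIA sampler itself (which would draw many such machines from the posterior and average their predictions) is not implemented, and the dictionaries and function name are illustrative:

```python
import math

def pdfa_log_likelihood(transitions, emissions, sequence, start=0):
    """Log-likelihood of a symbol sequence under one PDFA.

    transitions: dict (state, symbol) -> next state (deterministic)
    emissions:   dict state -> {symbol: probability}

    Determinism means there is exactly one state path per sequence,
    which is the source of the PDFA's low computational cost relative
    to hidden Markov models.
    """
    state, ll = start, 0.0
    for symbol in sequence:
        ll += math.log(emissions[state][symbol])
        state = transitions[(state, symbol)]
    return ll

# A two-state PDFA over the alphabet {a, b}:
trans = {(0, "a"): 0, (0, "b"): 1, (1, "a"): 0, (1, "b"): 1}
emit = {0: {"a": 0.9, "b": 0.1}, 1: {"a": 0.5, "b": 0.5}}
print(pdfa_log_likelihood(trans, emit, "ab"))  # log(0.9 * 0.1)
```

Posterior predictive inference in the PDIA would average `exp(ll)` over machines sampled from the posterior, trading a little computation for the smoothing the abstract describes.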
Correction of Uniformly Noisy Distributions to Improve Probabilistic Grammatical Inference Algorithms ∗
, 2009
Abstract
In this paper, we aim at correcting distributions of noisy samples in order to improve the inference of probabilistic automata. Rather than definitively removing corrupted examples before the learning process, we propose a technique, based on statistical estimates and linear regression, for correcting the probabilistic prefix tree automaton (PPTA). It requires human expertise to correct only a small sample of data, selected in order to estimate the noise level. This statistical information allows us to automatically correct the whole PPTA and then to infer models that generalize better. After a theoretical analysis of the noise impact, we present a large experimental study on several datasets.
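The core arithmetic behind a uniform-noise correction can be sketched as a mixture inversion: if an estimated fraction `eta` of the observed counts comes from a uniform source, subtracting that component and rescaling recovers the clean distribution. This is only the inversion step under that assumption — the paper estimates the noise level from an expert-corrected subsample and uses linear regression, which this sketch does not implement — and `denoise_counts` is a hypothetical name:

```python
def denoise_counts(observed, noise_level, alphabet_size):
    """Correct observed symbol counts at one PPTA state, assuming
    uniformly distributed noise:

        observed = (1 - eta) * clean + eta * uniform

    Inverting this mixture gives an estimate of the clean counts
    (clipped at zero, since counts cannot be negative).
    """
    total = sum(observed.values())
    uniform = total / alphabet_size           # expected count per symbol if pure noise
    eta = noise_level
    return {s: max(0.0, (c - eta * uniform) / (1 - eta))
            for s, c in observed.items()}

# With a 10% estimated noise level over a 2-symbol alphabet,
# the skew of the observed counts is amplified back toward the clean signal:
print(denoise_counts({"a": 55, "b": 45}, noise_level=0.1, alphabet_size=2))
```

Applied at every state of the PPTA before merging, such a correction keeps all the data while discounting the uniform contamination, instead of discarding suspected-noisy strings outright.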