Results 1-10 of 42
Logical Hidden Markov Models
Journal of Artificial Intelligence Research, 2006
Cited by 46 (13 self)
Abstract: Logical hidden Markov models (LOHMMs) upgrade traditional hidden Markov models to deal with sequences of structured symbols in the form of logical atoms, rather than flat characters. This note formally introduces LOHMMs and presents solutions to the three central inference problems for LOHMMs: evaluation, most likely hidden state sequence, and parameter estimation. The resulting representation and algorithms are experimentally evaluated on problems from the domain of bioinformatics.
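The three inference problems named in this abstract are the classical HMM triad; for flat symbols, the first of them (evaluation) is solved by the forward algorithm. A minimal sketch follows, with purely illustrative parameters that are not taken from the paper:

```python
# Minimal sketch of the "evaluation" inference problem for a classical
# (flat-symbol) HMM: the forward algorithm computes P(observations | model).
# The two-state weather model below is hypothetical.

def forward(obs, states, start_p, trans_p, emit_p):
    """Return P(obs) under the HMM via dynamic programming."""
    # alpha[s] = probability of emitting the prefix seen so far and ending in s
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {
            s: sum(alpha[r] * trans_p[r][s] for r in states) * emit_p[s][o]
            for s in states
        }
    return sum(alpha.values())

states = ["rain", "sun"]
start_p = {"rain": 0.5, "sun": 0.5}
trans_p = {"rain": {"rain": 0.7, "sun": 0.3},
           "sun": {"rain": 0.4, "sun": 0.6}}
emit_p = {"rain": {"walk": 0.1, "shop": 0.9},
          "sun": {"walk": 0.8, "shop": 0.2}}

p = forward(["walk", "shop"], states, start_p, trans_p, emit_p)
```

The other two problems (most likely state sequence, parameter estimation) replace the sum with a max (Viterbi) or wrap this computation in expectation-maximization (Baum-Welch).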
MDL-Based Context-Free Graph Grammar Induction
International Journal of Artificial Intelligence Tools, 2003
Cited by 28 (5 self)
Abstract: We present an algorithm for the inference of context-free graph grammars from examples. The algorithm builds on an earlier system for frequent substructure discovery, and is biased toward grammars that minimize description length. Grammar features include recursion, variables, and relationships.
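The MDL bias mentioned here can be illustrated independently of graph grammars: a hypothesis is preferred when the bits needed for the model plus the bits needed for the data given the model shrink. A crude numeric sketch, with entirely made-up encoding costs:

```python
# Sketch of the minimum-description-length (MDL) criterion: choose the
# hypothesis minimizing len(model) + len(data encoded with the model).
# The bit counts and compression ratio below are illustrative stand-ins,
# not the encoding used by the paper's system.

def description_length(model_size_bits, data, compress_ratio):
    """Total description length in bits: model plus residual data."""
    data_bits = len(data) * 8 * compress_ratio
    return model_size_bits + data_bits

data = "abab" * 10                                   # highly repetitive data
dl_no_model = description_length(0, data, 1.0)       # encode data raw
dl_with_rule = description_length(32, data, 0.25)    # a reused rule shrinks it
best = min(dl_no_model, dl_with_rule)                # MDL picks the rule
```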
Automated Assume-Guarantee Reasoning for Simulation Conformance
In Proc. of CAV’05, volume 3576 of LNCS, 2005
Cited by 20 (5 self)
Abstract: The applicability of assume-guarantee reasoning in practice has been limited, since it requires the right assumptions to be constructed manually. In this article, we address the issue of efficiently automating assume-guarantee reasoning for simulation conformance between finite state systems and specifications. We focus on a non-circular assume-guarantee proof rule, and show that there is a weakest assumption that can be represented canonically by a deterministic tree automaton (DTA). We then present an algorithm LT that learns this DTA automatically in an incremental fashion, in time polynomial in the number of states in the equivalent minimal DTA. The algorithm assumes a teacher that can answer membership queries pertaining to the language of the unknown DTA, and can also test a conjecture and provide a counterexample if the conjecture is false. We show how the teacher and its interaction with LT are implemented in a model checker. We have implemented this framework in the ComFoRT toolkit and report encouraging results (up to 41-fold and 14-fold improvement in memory and time consumption, respectively) on non-trivial benchmarks.
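The teacher interface assumed by such query-based learners can be sketched abstractly. The version below is simplified from tree automata to plain strings, with equivalence approximated by a finite test sample; the target language is illustrative only:

```python
# Hypothetical teacher for a membership/equivalence-query learner in
# Angluin's style, simplified from tree automata to strings. The target
# language (even number of 'a's) is purely illustrative.

class Teacher:
    def __init__(self, target):
        self.target = target  # membership predicate of the unknown language

    def member(self, word):
        """Membership query: is `word` in the target language?"""
        return self.target(word)

    def equivalent(self, hypothesis, test_words):
        """Equivalence query, approximated over a finite sample:
        returns (True, None) or (False, counterexample)."""
        for w in test_words:
            if hypothesis(w) != self.target(w):
                return False, w
        return True, None

even_a = lambda w: w.count("a") % 2 == 0
teacher = Teacher(even_a)

# A wrong first conjecture ("everything is in the language") is refuted:
ok, cex = teacher.equivalent(lambda w: True, ["", "a", "aa", "ab"])
```

The learner refines its conjecture with each counterexample until the equivalence query succeeds.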
Concept Formation Using Graph Grammars
Proceedings of the KDD Workshop on Multi-Relational Data Mining, 2002
Cited by 17 (6 self)
Abstract: Recognizing the expressive power of graph representations and the ability of certain graph grammars to generalize, we attempt to use graph grammar learning for concept formation. In this paper we describe our initial progress toward that goal, and focus on how certain graph grammars can be learned from examples. We also establish grounds for using graph grammars in machine learning tasks. Several examples are presented to highlight the validity of the approach.
Probabilistic Finite-State Machines - Part I
Cited by 16 (1 self)
Abstract: Probabilistic finite-state machines are used today in a variety of areas in pattern recognition, or in fields to which pattern recognition is linked: computational linguistics, machine learning, time series analysis, circuit testing, computational biology, speech recognition, and machine translation are some of them. In Part I of this paper we survey these generative objects and study their definitions and properties. In Part II, we will study the relation of probabilistic finite-state automata with other well-known devices that generate strings, such as hidden Markov models and n-grams, and provide theorems, algorithms, and properties that represent a current state of the art of these objects.
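A minimal scoring routine for such a machine can be sketched under the standard definition of a probabilistic finite-state automaton: the probability of a string is the sum, over paths, of the product of transition probabilities times the final-state probability. All parameters below are invented for illustration:

```python
# Sketch of a probabilistic finite-state automaton (PFA) assigning a
# probability to a string. The two-state automaton over {a, b} is a toy.

def pfa_prob(word, init, trans, final):
    """P(word): propagate path mass symbol by symbol, then stop."""
    weight = dict(init)  # state -> probability mass of reaching it
    for symbol in word:
        nxt = {}
        for q, w in weight.items():
            for q2, p in trans.get((q, symbol), []):
                nxt[q2] = nxt.get(q2, 0.0) + w * p
        weight = nxt
    return sum(w * final.get(q, 0.0) for q, w in weight.items())

init = {0: 1.0}
trans = {(0, "a"): [(0, 0.3), (1, 0.2)],
         (0, "b"): [(1, 0.3)],
         (1, "a"): [(1, 0.5)]}
final = {0: 0.2, 1: 0.5}

p_ab = pfa_prob("ab", init, trans, final)
```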
Probabilistic k-Testable Tree Languages
Proceedings of the 5th International Colloquium, ICGI 2000, Lisbon (Portugal), volume 1891 of Lecture Notes in Computer Science, 2000
Cited by 13 (2 self)
Abstract: In this paper, we present a natural generalization of k-gram models for stochastic tree languages based on the k-testable class. In this class of models, frequencies are estimated for a probabilistic regular tree grammar which is bottom-up deterministic. One of the advantages of this approach is that the model can be updated in an incremental fashion. This method is an alternative to costly learning algorithms (such as inside-outside-based methods) or algorithms that require larger samples (such as many state merging/splitting methods).
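The string-level k-gram models being generalized here can be sketched for k = 2 (bigrams) by plain frequency counting; the corpus below is illustrative and no smoothing is applied:

```python
# Sketch of a bigram (k = 2) model estimated by relative frequency,
# the string analogue of the tree models described above. Toy corpus.
from collections import Counter

def bigram_model(corpus):
    """Estimate P(next | prev), with '^' and '$' as start/end markers."""
    pairs, context = Counter(), Counter()
    for word in corpus:
        symbols = ["^"] + list(word) + ["$"]
        for prev, nxt in zip(symbols, symbols[1:]):
            pairs[(prev, nxt)] += 1
            context[prev] += 1
    return lambda prev, nxt: (
        pairs[(prev, nxt)] / context[prev] if context[prev] else 0.0
    )

p = bigram_model(["ab", "ab", "aa"])
p_b_after_a = p("a", "b")   # 2 of the 4 symbols following 'a' are 'b'
```

The tree generalization estimates analogous frequencies over depth-bounded tree contexts rather than symbol pairs, and inherits the same incremental-update property.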
Probabilistic Finite-State Machines - Part II
Cited by 10 (2 self)
Abstract: Probabilistic finite-state machines are used today in a variety of areas in pattern recognition, or in fields to which pattern recognition is linked. In Part I of this paper, we surveyed these objects and studied their properties. In this Part II, we study the relations between probabilistic finite-state automata and other well-known devices that generate strings, such as hidden Markov models and n-grams, and provide theorems, algorithms, and properties that represent a current state of the art of these objects.
Smoothing and Compression with Stochastic k-Testable Tree Languages
Cited by 6 (4 self)
Abstract: In this paper, we describe some techniques to learn probabilistic k-testable tree models, a generalization of the well-known k-gram models, that can be used to compress or classify structured data. These models are easy to infer from samples and allow for incremental updates. Moreover, as shown here, backing-off schemes can be defined to solve data sparseness, a problem that often arises when using trees to represent the data. These features make them suitable for compressing structured data files at a better rate than string-based methods.
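The backing-off idea can be sketched on strings: when a bigram count is zero, fall back to a scaled unigram estimate. The back-off weight and corpus below are illustrative, not the scheme defined in the paper:

```python
# Sketch of a backing-off scheme for data sparseness: unseen bigrams
# fall back to a unigram estimate scaled by a back-off weight alpha.
# Corpus and alpha are toy values; no start/end markers for brevity.
from collections import Counter

def backoff_estimator(corpus, alpha=0.4):
    bigrams, unigrams = Counter(), Counter()
    total = 0
    for word in corpus:
        syms = list(word)
        unigrams.update(syms)
        total += len(syms)
        bigrams.update(zip(syms, syms[1:]))
    def prob(prev, nxt):
        if bigrams[(prev, nxt)]:
            return bigrams[(prev, nxt)] / unigrams[prev]
        return alpha * unigrams[nxt] / total   # back off to the unigram
    return prob

p = backoff_estimator(["abab", "abc"])
seen = p("a", "b")      # observed bigram: relative frequency
unseen = p("c", "a")    # unseen bigram: backed-off unigram estimate
```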
Learning Rational Stochastic Tree Languages
Cited by 5 (2 self)
Abstract: We consider the problem of learning stochastic tree languages, i.e. probability distributions over a set of trees T(F), from a sample of trees independently drawn according to an unknown target P. We consider the case where the target is a rational stochastic tree language, i.e. it can be computed by a rational tree series or, equivalently, by a multiplicity tree automaton. In this paper, we provide two contributions. First, we show that rational tree series admit a canonical representation with parameters that can be efficiently estimated from samples. Then, we give an inference algorithm that identifies the class of rational stochastic tree languages in the limit with probability one.
Learning Multiplicity Tree Automata
In: Proceedings of the 8th International Colloquium on Grammatical Inference (ICGI’06), volume 4201 of LNCS, 2006
Cited by 5 (1 self)
Abstract: In this paper, we present a theoretical approach to the problem of learning multiplicity tree automata. These automata allow one to define functions which compute a number for each tree. They can be seen as a strict generalization of stochastic tree automata, since they allow one to define functions over any field K. A multiplicity automaton admits a support, which is a non-deterministic automaton. From a grammatical inference point of view, this paper presents a contribution which is original due to the combination of two important aspects. This is the first time, as far as we know, that a learning method focuses on non-deterministic tree automata which compute functions over a field. The algorithm proposed in this paper stands in Angluin’s exact model, where a learner is allowed to use membership and equivalence queries. We show that this algorithm is polynomial in time as a function of the size of the representation.
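The bottom-up semantics of a multiplicity tree automaton can be sketched directly: each symbol maps the state-weight vectors of its subtrees to a new vector, and the value of a tree is the dot product of the root vector with a final-weight vector. The example automaton below, which computes the number of 'a' leaves (a function into the rationals), is illustrative and not taken from the paper:

```python
# Sketch of evaluating a multiplicity tree automaton over the rationals.
# Trees are leaves (strings) or (symbol, left, right) tuples; every
# subtree is mapped to a vector of state weights.

def evaluate(tree, leaf_weights, ops):
    """Bottom-up evaluation of a tree to its state-weight vector."""
    if isinstance(tree, str):                     # leaf symbol
        return leaf_weights[tree]
    sym, left, right = tree
    return ops[sym](evaluate(left, leaf_weights, ops),
                    evaluate(right, leaf_weights, ops))

# Two states: (count of 'a' leaves so far, constant 1).
leaf_weights = {"a": (1, 1), "b": (0, 1)}
ops = {"f": lambda l, r: (l[0] + r[0], 1)}        # 'f' adds the counts
final = (1, 0)                                    # read off the first state

root = evaluate(("f", ("f", "a", "b"), "a"), leaf_weights, ops)
value = sum(x * y for x, y in zip(root, final))   # number of 'a' leaves
```

Replacing the counting operations with probability-weighted ones recovers a stochastic tree automaton, which is why multiplicity automata strictly generalize them.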