Results 1–9 of 9
Learning Models of Intelligent Agents, 1996
Abstract

Cited by 97 (2 self)
Agents that operate in a multiagent system need an efficient strategy to handle their encounters with the other agents involved. Searching for an optimal interactive strategy is a hard problem because it depends mostly on the behavior of the others. In this work, interaction among agents is represented as a repeated two-player game, where the agents' objective is to look for a strategy that maximizes their expected sum of rewards in the game. We assume that agents' strategies can be modeled as finite automata. A model-based approach is presented as a possible method for learning an effective interactive strategy. First, we describe how an agent should find an optimal strategy against a given model. Second, we present an unsupervised algorithm that infers a model of the opponent's automaton from its input/output behavior. A set of experiments that show the potential merit of the algorithm is reported as well.
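The finite-automaton view of an interactive strategy described in this abstract can be made concrete with a small sketch (hypothetical code, not from the paper): tit-for-tat in the iterated Prisoner's Dilemma is the textbook two-state strategy automaton, and the class below models a strategy as a Moore machine whose output in each state is the action played.

```python
# Hypothetical sketch: an opponent strategy in a repeated two-player game
# modelled as a finite automaton (a Moore machine), as the abstract describes.
# The class names and the tit-for-tat example are illustrative assumptions.

class StrategyAutomaton:
    def __init__(self, transitions, outputs, start):
        self.transitions = transitions  # (state, opponent_action) -> next state
        self.outputs = outputs          # state -> action played in that state
        self.state = start

    def play(self, opponent_action=None):
        """Advance on the opponent's last action (if any), return our action."""
        if opponent_action is not None:
            self.state = self.transitions[(self.state, opponent_action)]
        return self.outputs[self.state]

# Tit-for-tat: start by cooperating, then mirror the opponent's last move.
tit_for_tat = StrategyAutomaton(
    transitions={("C", "C"): "C", ("C", "D"): "D",
                 ("D", "C"): "C", ("D", "D"): "D"},
    outputs={"C": "C", "D": "D"},
    start="C",
)

moves = [tit_for_tat.play()]         # first round: no history yet
for opp in ["D", "D", "C"]:          # opponent defects twice, then cooperates
    moves.append(tit_for_tat.play(opp))
print(moves)  # ['C', 'D', 'D', 'C']
```

Because such a machine's behavior is fully determined by its transition table, an agent that has inferred the table from input/output behavior can compute a best response to it, which is the premise of the model-based approach above.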
Probabilistic Finite-State Machines, Part I
Abstract

Cited by 27 (1 self)
Probabilistic finite-state machines are used today in a variety of areas in pattern recognition, or in fields to which pattern recognition is linked: computational linguistics, machine learning, time series analysis, circuit testing, computational biology, speech recognition and machine translation are some of them. In Part I of this paper we survey these generative objects and study their definitions and properties. In Part II, we will study the relation of probabilistic finite-state automata to other well-known devices that generate strings, such as hidden Markov models and n-grams, and provide theorems, algorithms and properties that represent the current state of the art of these objects.
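As a rough illustration of the generative objects this survey covers, the following sketch (hypothetical code with a made-up two-state machine) shows how a probabilistic finite-state automaton defines a distribution over strings: each state distributes probability mass over (symbol, next-state) pairs plus a stopping probability, and a forward pass sums over all accepting paths.

```python
# Hypothetical PFA sketch. Representation is an assumption for illustration:
# pfa[state][symbol] = list of (next_state, prob); stops[state] = P(stop there).

def string_probability(pfa, stops, start, string):
    """Probability that the PFA emits `string` and then stops (forward algorithm)."""
    alpha = {start: 1.0}                      # mass over states after a prefix
    for sym in string:
        nxt = {}
        for state, p in alpha.items():
            for next_state, q in pfa[state].get(sym, []):
                nxt[next_state] = nxt.get(next_state, 0.0) + p * q
        alpha = nxt
    return sum(p * stops[s] for s, p in alpha.items())

# Two-state PFA over {a, b}; per state, outgoing + stop probabilities sum to 1.
pfa = {
    0: {"a": [(0, 0.5)], "b": [(1, 0.3)]},    # state 0 stops with prob 0.2
    1: {"b": [(1, 0.4)]},                     # state 1 stops with prob 0.6
}
stops = {0: 0.2, 1: 0.6}

p_ab = string_probability(pfa, stops, 0, "ab")    # 0.5 * 0.3 * 0.6 = 0.09
p_abb = string_probability(pfa, stops, 0, "abb")  # 0.5 * 0.3 * 0.4 * 0.6 = 0.036
```

The forward recursion over state mass (rather than explicit path enumeration) is the same dynamic-programming idea that connects these automata to hidden Markov models in Part II.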
Learning From a Consistently Ignorant Teacher, 1994
Abstract

Cited by 24 (8 self)
One view of computational learning theory is that of a learner acquiring the knowledge of a teacher. We introduce a formal model of learning capturing the idea that teachers may have gaps in their knowledge. In particular, we consider learning from a teacher who labels examples "+" (a positive instance of the concept being learned), "−" (a negative instance of the concept being learned), and "?" (an instance with unknown classification), in such a way that knowledge of the concept class and all the positive and negative examples is not sufficient to determine the labelling of any of the examples labelled with "?". The goal of the learner is not to compensate for the ignorance of the teacher by attempting to infer "+" or "−" labels for the examples labelled with "?", but rather to learn (an approximation to) the ternary labelling presented by the teacher. Thus, the goal of the learner is still to acquire the knowledge of the teacher, but now the learner must also ...
Links between probabilistic automata and hidden Markov models: probability distributions, learning models and induction algorithms, 2004
Abstract
P. Dupont, F. Denis, Y. Esposito
Learning with Errors in Answers to Membership Queries
Abstract
We study the learning models defined in [AKST97]: learning with equivalence and limited membership queries, and learning with equivalence and malicious membership queries. We show that if a class of concepts that is closed under projection is learnable in polynomial time using equivalence and (standard) membership queries, then it is learnable in polynomial time in the above models. This closes the open problems in [AKST97]. Our algorithm can also handle errors in the equivalence queries.
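For readers unfamiliar with the query model referenced here, the following sketch (hypothetical names, a toy finite domain) illustrates the two oracle types: membership queries ("is x in the target concept?") and equivalence queries ("is my hypothesis correct? if not, return a counterexample"). The "limited" and "malicious" variants studied in the paper would additionally let the teacher answer "I don't know" or lie on some membership queries.

```python
# Hypothetical illustration of the exact-learning query protocol.
# Over a small finite domain, counterexamples plus membership queries
# suffice to recover the concept exactly.

DOMAIN = range(8)
TARGET = {1, 3, 5}          # secret concept held by the teacher

def membership_query(x):
    return x in TARGET

def equivalence_query(hypothesis):
    """Return None if the hypothesis matches the target, else a counterexample."""
    for x in DOMAIN:
        if (x in hypothesis) != (x in TARGET):
            return x
    return None

# Learner: start empty; classify each counterexample with a membership
# query and repair the hypothesis accordingly.
hypothesis = set()
while (cex := equivalence_query(hypothesis)) is not None:
    if membership_query(cex):
        hypothesis.add(cex)
    else:
        hypothesis.discard(cex)

print(sorted(hypothesis))  # [1, 3, 5]
```

The results above concern the realistic setting where the membership oracle is unreliable; the point of the reduction is that any polynomial-time learner for the clean protocol sketched here can be lifted to those noisy protocols for projection-closed concept classes.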
Learning of DFAs and subsequential transducers using membership, translation and strong equivalence queries, 2010
Learning with errors in answers to membership queries, 2007
Abstract
We study the learning models defined in [D. Angluin, M. Krikis, R.H. Sloan, G. Turán, Malicious omissions and errors in answering to membership queries, Machine Learning 28 (2–3) (1997) 211–255]: learning with equivalence and limited membership queries, and learning with equivalence and malicious membership queries. We show that if a class of concepts that is closed under projection is learnable in polynomial time using equivalence and (standard) membership queries, then it is learnable in polynomial time in the above models. This closes the open problems in [D. Angluin, M. Krikis, R.H. Sloan, G. Turán, Malicious omissions and errors in answering to membership queries, Machine Learning 28 (2–3) (1997) 211–255].
Advances in Learning Formal Languages, 2011
Abstract
We present an overview of advances in the learning of formal languages, i.e. developments in grammatical inference research. The problem of learning correct grammars for unknown languages is known as grammatical inference. It is considered a main subject of inductive inference, and grammars are important representations to investigate in machine learning from both theoretical and practical points of view. The application areas of grammatical inference keep growing, yet it remains an open challenge to find a task where grammatical inference models do much better than other machine learning or pattern recognition programs. Moreover, research in this area is known to be computationally hard. This paper explores the field, its applications, various learning paradigms, the case of context-free grammars, challenges, and recent trends, and cites the important literature on these topics.