Results 1 – 10 of 27
Operations for Learning with Graphical Models
 Journal of Artificial Intelligence Research
, 1994
"... This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective. Wellknown examples of graphical models include Bayesian networks, directed graphs representing a Markov chain, and undirected networks representing a Markov field. These graphical models ..."
Abstract

Cited by 249 (12 self)
This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective. Well-known examples of graphical models include Bayesian networks, directed graphs representing a Markov chain, and undirected networks representing a Markov field. These graphical models are extended to model data analysis and empirical learning using the notation of plates. Graphical operations for simplifying and manipulating a problem are provided, including decomposition, differentiation, and the manipulation of probability models from the exponential family. Two standard algorithm schemas for learning are reviewed in a graphical framework: Gibbs sampling and the expectation-maximization algorithm. Using these operations and schemas, some popular algorithms can be synthesized from their graphical specification. This includes versions of linear regression, techniques for feedforward networks, and learning Gaussian and discrete Bayesian networks from data. The paper conclu...
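To make the expectation-maximization schema mentioned in the abstract concrete, here is a minimal illustrative sketch (ours, not taken from the paper): EM for a two-component 1-D Gaussian mixture, alternating responsibilities (E-step) and parameter re-estimation (M-step).

```python
import math

def em_gmm_1d(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture (illustrative sketch)."""
    mu = [min(data), max(data)]            # crude initialisation
    var = [1.0, 1.0]
    w = [0.5, 0.5]                         # mixture weights
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            dens = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                    / math.sqrt(2 * math.pi * var[k]) for k in (0, 1)]
            s = dens[0] + dens[1]
            resp.append([dens[0] / s, dens[1] / s])
        # M-step: re-estimate weights, means and variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-9
    return w, mu, var

data = [0.1, -0.2, 0.05, 4.9, 5.2, 5.0]
weights, means, variances = em_gmm_1d(data)
```

On this toy sample the two means converge to roughly 0 and 5, recovering the two clusters.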
Minimum Message Length and Kolmogorov Complexity
 Computer Journal
, 1999
"... this paper is to describe some of the relationships among the different streams and to try to clarify some of the important differences in their assumptions and development. Other studies mentioning the relationships appear in [1, Section IV, pp. 10381039], [2, sections 5.2, 5.5] and [3, p. 465] ..."
Abstract

Cited by 104 (25 self)
this paper is to describe some of the relationships among the different streams and to try to clarify some of the important differences in their assumptions and development. Other studies mentioning the relationships appear in [1, Section IV, pp. 1038–1039], [2, Sections 5.2, 5.5] and [3, p. 465].
Bottom-Up Induction of Oblivious Read-Once Decision Graphs
, 1994
"... . We investigate the use of oblivious, readonce decision graphs as structures for representing concepts over discrete domains, and present a bottomup, hillclimbing algorithm for inferring these structures from labelled instances. The algorithm is robust with respect to irrelevant attributes, and ..."
Abstract

Cited by 45 (8 self)
We investigate the use of oblivious, read-once decision graphs as structures for representing concepts over discrete domains, and present a bottom-up, hill-climbing algorithm for inferring these structures from labelled instances. The algorithm is robust with respect to irrelevant attributes, and experimental results show that it performs well on problems considered difficult for symbolic induction methods, such as the Monk's problems and parity.

1 Introduction

Top-down induction of decision trees [25, 24, 20] has been one of the principal induction methods for symbolic, supervised learning. The tree structure, which is used for representing the hypothesized target concept, suffers from some well-known problems, most notably the replication problem and the fragmentation problem [23]. The replication problem forces duplication of subtrees in disjunctive concepts, such as (A ∧ B) ∨ (C ∧ D); the fragmentation problem causes partitioning of the data into fragments when a high-arity attrib...
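To see why parity, which is notoriously hard for decision trees, suits these structures: an oblivious read-once graph for n-bit parity tests the variables in a fixed order and needs only two nodes per level, while an equivalent decision tree needs 2^n leaves. A minimal sketch (our illustration, not the paper's algorithm):

```python
def build_parity_odg(n):
    """Oblivious read-once decision graph for n-bit parity: level i keeps
    just two nodes, recording the parity of the first i bits, so the graph
    grows linearly in n where a decision tree needs 2**n leaves."""
    edges = {}
    for i in range(n):
        for node in (0, 1):                 # parity accumulated so far
            for bit in (0, 1):              # value of variable i
                edges[(i, node, bit)] = (i + 1, node ^ bit)
    return edges

def evaluate(edges, bits):
    """Follow one root-to-sink path; each variable is read exactly once."""
    level, node = 0, 0
    for bit in bits:
        level, node = edges[(level, node, bit)]
    return node                             # 1 iff an odd number of ones

odg = build_parity_odg(3)
```

The graph has 4n edges in total, making the representational advantage over trees explicit.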
The sk-strings method for inferring PFSA
 In Proceedings of the
, 1997
"... We describe a simple, fast and easy to implement recursive algorithm with four alternate intuitive heuristics for inferring Probabilistic Finite State Automata. The algorithm is an extension for stochastic machines of the ktails method introduced in 1972 by Biermann and Feldman for nonstochastic m ..."
Abstract

Cited by 31 (2 self)
We describe a simple, fast and easy-to-implement recursive algorithm with four alternative intuitive heuristics for inferring Probabilistic Finite State Automata. The algorithm is an extension to stochastic machines of the k-tails method introduced in 1972 by Biermann and Feldman for non-stochastic machines. Experiments comparing the two are reported and benchmark results are presented. It is also shown that sk-strings performs better than k-tails, at least in inferring small automata.

Introduction

When given a finite number of examples of the behaviour of a probabilistic state-determined machine, it is possible to imagine methods by which we can infer its structure. Ideally, we would like to identify the exact automaton which generated the strings, but it is impossible to do this from the behaviour of the machine because more than one non-minimal machine may generate the same language. This paper is concerned not with identifying the generating machine, which is demonstrably impossib...
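The underlying k-tails idea can be sketched as follows: build a prefix-tree acceptor from the sample and merge states whose sets of accepted continuations of length ≤ k coincide. This is a minimal non-stochastic version for illustration (the helper names are ours, and the paper's sk-strings extension additionally weights tails by probability):

```python
def build_pta(strings):
    """Prefix-tree acceptor whose states are the prefixes of the sample."""
    trans, accept = {}, set(strings)
    for s in strings:
        for i, ch in enumerate(s):
            trans[(s[:i], ch)] = s[:i + 1]
    return trans, accept

def tails(state, trans, accept, k):
    """Strings of length <= k accepted when starting from `state`."""
    out = {""} if state in accept else set()
    if k > 0:
        for (src, sym), dst in trans.items():
            if src == state:
                out |= {sym + t for t in tails(dst, trans, accept, k - 1)}
    return out

def k_tails_classes(strings, k):
    """Group PTA states whose k-tails coincide; each class becomes one
    merged state of the inferred automaton."""
    trans, accept = build_pta(strings)
    states = {""} | {s[:i + 1] for s in strings for i in range(len(s))}
    classes = {}
    for q in states:
        classes.setdefault(frozenset(tails(q, trans, accept, k)), []).append(q)
    return classes

# Sample from the language a a* b: the middle states collapse into one.
classes = k_tails_classes(["ab", "aab", "aaab"], k=1)
```

With k = 1 the states reached after "a", "aa" and "aaa" all share the tail set {"b"} and merge, yielding the looping automaton for a a* b.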
Image Recognition CAPTCHAs
 In Proceedings of the 7th Information Security Conference (ISC ’04), Springer Lecture Notes in Computer Science
, 2004
"... Abstract. CAPTCHAs are tests that distinguish humans from software robots in an online environment [3, 14, 7]. We propose and implement three CAPTCHAs based on naming images, distinguishing images, and identifying an anomalous image out of a set. Novel contributions include proposals for two new CAP ..."
Abstract

Cited by 27 (3 self)
Abstract. CAPTCHAs are tests that distinguish humans from software robots in an online environment [3, 14, 7]. We propose and implement three CAPTCHAs based on naming images, distinguishing images, and identifying an anomalous image out of a set. Novel contributions include proposals for two new CAPTCHAs, the first user study on image recognition CAPTCHAs, and a new metric for evaluating CAPTCHAs.
Introduction to Minimum Encoding Inference
 DEPT. OF STATISTICS, OPEN UNIVERSITY, WALTON HALL, MILTON
, 1994
"... This paper examines the minimumencoding approaches to inference, Minimum Message Length (MML) and Minimum Description Length (MDL). This paper was written with the objective of providing an introduction to this area for statisticians. We describe coding techniques for data, and examine how these tec ..."
Abstract

Cited by 23 (4 self)
This paper examines the minimum-encoding approaches to inference, Minimum Message Length (MML) and Minimum Description Length (MDL). It was written with the objective of providing an introduction to this area for statisticians. We describe coding techniques for data, and examine how these techniques can be applied to perform inference and model selection.
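The core two-part-message idea behind MML/MDL can be illustrated with a toy sketch (ours, not from the paper): to transmit a binary sequence, first state a Bernoulli parameter chosen from a finite grid, then encode the data under it; the preferred model minimises the total length. A finer grid fits better but costs more bits to state.

```python
import math

def two_part_length(data, p, model_bits):
    """Total message length in bits: state the model, then the data under it."""
    ones = sum(data)
    zeros = len(data) - ones
    data_bits = 0.0
    if ones:
        data_bits -= ones * math.log2(p)
    if zeros:
        data_bits -= zeros * math.log2(1 - p)
    return model_bits + data_bits

def best_model(data, grid_sizes=(2, 4, 8, 16, 32)):
    """Pick the Bernoulli parameter and grid precision minimising the
    two-part code length."""
    best = (float("inf"), None, None)
    for g in grid_sizes:
        model_bits = math.log2(g)          # ~bits to name one grid value
        for i in range(1, g):              # p = i/g, endpoints excluded
            total = two_part_length(data, i / g, model_bits)
            if total < best[0]:
                best = (total, g, i / g)
    return best

total, grid, p = best_model([1] * 7 + [0])
```

For seven ones and one zero the winner is p = 3/4 on the 4-point grid: the extra bit needed to state 7/8 on a finer grid is not repaid by the better fit.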
Transforming Rules and Trees into Comprehensible Knowledge Structures
, 1996
"... The problem of transforming the knowledge bases of expert systems using induced rules or decision trees into comprehensible knowledge structures is addressed. A knowledge structure is developed that generalizes and subsumes production rules, decision trees, and rules with exceptions. It gives rise t ..."
Abstract

Cited by 22 (5 self)
The problem of transforming the knowledge bases of expert systems using induced rules or decision trees into comprehensible knowledge structures is addressed. A knowledge structure is developed that generalizes and subsumes production rules, decision trees, and rules with exceptions, and gives rise to a natural complexity measure that allows them to be understood, analyzed and compared on a uniform basis. The structure is a directed acyclic graph with the semantics that nodes are premises, some of which have attached conclusions, and the arcs are inheritance links with disjunctive multiple inheritance. A detailed example is given of the generation of a range of such structures of equivalent performance for a simple problem, and the complexity measure of a particular structure is shown to relate to its perceived complexity. The simplest structures are generated by an algorithm that factors common sub-premises from the premises of rules. A more complex example of a chess dataset is used t...
Using the Minimum Description Length Principle to Infer Reduced Ordered Decision Graphs
 Machine Learning
, 1996
"... . We propose an algorithm for the inference of decision graphs from a set of labeled instances. In particular, we propose to infer decision graphs where the variables can only be tested in accordance with a given order and no redundant nodes exist. This type of graphs, reduced ordered decision graph ..."
Abstract

Cited by 11 (1 self)
We propose an algorithm for the inference of decision graphs from a set of labeled instances. In particular, we propose to infer decision graphs where the variables can only be tested in accordance with a given order and no redundant nodes exist. Such graphs, known as reduced ordered decision graphs, can be used as canonical representations of Boolean functions and can be manipulated using algorithms developed for that purpose. This work proposes a local optimization algorithm that generates compact decision graphs by performing local changes in an existing graph until a minimum is reached. The algorithm uses Rissanen's minimum description length principle to control the trade-off between accuracy on the training set and complexity of the description. Techniques for the selection of the initial decision graph and for the selection of an appropriate ordering of the variables are also presented. Experimental results obtained using this algorithm in two sets of examples are presented and ...
Inferring Reduced Ordered Decision Graphs of Minimal Description Length
 PROCEEDINGS OF THE TWELFTH INTERNATIONAL CONFERENCE ON MACHINE LEARNING
, 1994
"... This work describes an approach for the inference of reduced ordered decision graphs from training sets. Reduced ordered decision graphs (RODGs) are graphs where the variables can only be tested in accordance with a prespecified order and no redundant nodes exist. RODGs have several interesting pro ..."
Abstract

Cited by 10 (1 self)
This work describes an approach for the inference of reduced ordered decision graphs from training sets. Reduced ordered decision graphs (RODGs) are graphs where the variables can only be tested in accordance with a prespecified order and no redundant nodes exist. RODGs have several interesting properties that have made them the representation of choice for the manipulation of Boolean functions in the logic synthesis community. We derive a RODG representation of the function implemented by a decision tree. This decision tree can be obtained from a training set using any one of the different algorithms proposed to date. This RODG is then used as the starting point for an algorithm that derives another RODG of minimal description length. The reduction in complexity is obtained by performing incremental changes in the RODG. By using ordered decision diagrams, the task of identifying common subgraphs is made much simpler than the identification of common subtrees in a decision tree. Ordered decision graphs require that a variable ordering be specified in advance. The algorithm that derives such an ordering is based on a commonly used reordering algorithm that finds a locally optimal ordering by swapping the order of two adjacent variables. These algorithms are tested on a set of examples that are known to be hard to solve using decision trees. The results show that when an effective reduction of the description length is obtained, significant gains in generalization accuracy can be achieved. In all cases the generalization accuracy of the final RODG was better than the generalization accuracy of the decision tree that was used as the starting point.
Hybrid Decision Tree
, 2002
"... In this paper, a hybrid learning approach named HDT is proposed. HDT simulates human reasoning by using symbolic leaming to do qualitative analysis and using neural leaming to do subsequent quantitative analysis. It generates the trunk of a binary hybrid decision tree according to the binary informa ..."
Abstract

Cited by 6 (2 self)
In this paper, a hybrid learning approach named HDT is proposed. HDT simulates human reasoning by using symbolic learning to do qualitative analysis and using neural learning to do subsequent quantitative analysis. It generates the trunk of a binary hybrid decision tree according to the binary information gain ratio criterion in an instance space defined by only original unordered attributes. If unordered attributes cannot further distinguish training examples falling into a leaf node whose diversity is beyond the diversity threshold, then the node is marked as a dummy node. After all those dummy nodes are marked, a specific feedforward neural network named Fnqc that is trained in an instance space defined by only original ordered attributes is exploited to accomplish the learning task. Moreover, this paper distinguishes three kinds of incremental learning tasks. Two incremental learning procedures designed for example-incremental learning with different storage requirements are provided, which enables HDT to deal gracefully with data sets where new data are frequently appended. Also, a hypothesis-driven constructive induction mechanism is provided, which enables HDT to generate compact concept descriptions.