Results 1 – 10 of 82,383
Unsupervised Learning of Noisy-Or Bayesian Networks
"... This paper considers the problem of learning the parameters in Bayesian networks of discrete variables with known structure and hidden variables. Previous approaches in these settings typically use expectation maximization; when the network has high treewidth, the required expectations might be appr ..."
Cited by 5 (3 self)
family of bipartite noisy-or Bayesian networks. In our experimental results, we demonstrate an application of our algorithm to learning QMR-DT, a large Bayesian network used for medical diagnosis. We show that it is possible to fully learn the parameters of QMR-DT even when only the findings are observed
Noisy-Or Classifier
, 2003
"... We discuss application of a well-known simple Bayesian network model – the noisy-or model – to classification with a large number of attributes. ..."
Noisy-or classifier ∗
"... We discuss an application of a family of Bayesian network models – known as models of independence of causal influence (ICI) – to classification tasks with large numbers of attributes. An example of such a task is categorization of text documents, where attributes are single words from the document ..."
the documents. The key that enabled application of the ICI models is their compact representation using a hidden variable. We address the issue of learning these classifiers by a computationally efficient implementation of the EM algorithm. We pay special attention to the noisy-or model – probably the best
The Imprecise Noisy-OR Gate
"... Abstract—The noisy-OR gate is an important tool for a compact elicitation of the conditional probabilities of a Bayesian network. An imprecise-probabilistic version of this model, where sets instead of single distributions are used to model uncertainty about the inhibition of the causal factors, is ..."
Cited by 1 (0 self)
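The noisy-OR gate that recurs throughout these results combines independent causes of a binary effect: each active cause fails to produce the effect independently with its own inhibition probability. A minimal sketch of the standard (leaky) noisy-OR conditional probability, with illustrative parameter values not taken from any of the papers above:

```python
def noisy_or(active_causes, inhibition, leak=0.0):
    """P(effect = 1 | causes) under the leaky noisy-OR model.

    active_causes: 0/1 indicators, one per cause
    inhibition:    q_i = P(cause i alone fails to produce the effect)
    leak:          P(effect occurs with no active cause)
    """
    # The effect is absent only if the leak is absent AND every
    # active cause is independently inhibited.
    p_effect_absent = 1.0 - leak
    for x, q in zip(active_causes, inhibition):
        if x:
            p_effect_absent *= q
    return 1.0 - p_effect_absent

# Two active causes with inhibition probabilities 0.2 and 0.5, no leak:
# P(effect) = 1 - 0.2 * 0.5 ≈ 0.9
p = noisy_or([1, 1], [0.2, 0.5])
```

This compactness is what the abstracts above exploit: a full conditional probability table over n parents needs 2^n entries, while noisy-OR needs only n inhibition parameters (plus a leak).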
Improving generalization with active learning
 Machine Learning
, 1994
"... Abstract. Active learning differs from "learning from examples " in that the learning algorithm assumes at least some control over what part of the input domain it receives information about. In some situations, active learning is provably more powerful than learning from examples ..."
Cited by 539 (1 self)
Discovering Hidden Variables in Noisy-Or Networks using Quartet Tests
"... We give a polynomial-time algorithm for provably learning the structure and parameters of bipartite noisy-or Bayesian networks of binary variables where the top layer is completely hidden. Unsupervised learning of these models is a form of discrete factor analysis, enabling the discovery of hidden ..."
Cited by 2 (0 self)
Instance-based learning algorithms
 Machine Learning
, 1991
"... Abstract. Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to ..."
Cited by 1359 (18 self)
Bayesian Network Classifiers
, 1997
"... Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restr ..."
Cited by 788 (23 self)
restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly
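The naive Bayes classifier that this entry generalizes factors the joint distribution as a class prior times independent per-feature conditionals. A minimal sketch for binary features with Laplace smoothing; the function names and the toy data are illustrative, not from the paper:

```python
import math
from collections import Counter

def train_nb(X, y):
    """Fit naive Bayes on binary feature vectors X with labels y."""
    n_feat = len(X[0])
    counts = Counter(y)                      # class frequencies
    active = {c: [0] * n_feat for c in counts}
    for xs, c in zip(X, y):
        for j, v in enumerate(xs):
            active[c][j] += v
    prior = {c: counts[c] / len(y) for c in counts}
    # Laplace smoothing: (count + 1) / (class size + 2) per binary feature
    cond = {c: [(active[c][j] + 1) / (counts[c] + 2) for j in range(n_feat)]
            for c in counts}
    return prior, cond

def predict_nb(model, xs):
    """Return the class maximizing log prior + sum of log likelihoods."""
    prior, cond = model
    best, best_lp = None, -math.inf
    for c in prior:
        lp = math.log(prior[c])
        for j, v in enumerate(xs):
            p = cond[c][j]
            lp += math.log(p if v else 1.0 - p)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Toy data: feature 0 indicates class 1, feature 1 indicates class 0.
model = train_nb([[1, 0], [1, 1], [0, 1], [0, 0]], [1, 1, 0, 0])
```

Relaxing the independence assumption, as the abstract suggests, means replacing the per-feature conditionals with richer network structures over the features.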
A learning algorithm for Boltzmann machines
 Cognitive Science
, 1985
"... The computational power of massively parallel networks of simple processing elements resides in the communication bandwidth provided by the hardware connections between elements. These connections can allow a significant fraction of the knowledge of the system to be applied to an instance of a probl ..."
Cited by 586 (13 self)