Results 1–10 of 214,254
Learning Bounded Treewidth Bayesian Networks via Sampling
"... Abstract. Learning Bayesian networks with bounded treewidth has attracted much attention recently, because low treewidth allows exact inference to be performed efficiently. Some existing methods [12, 14] tackle the problem by using ktrees to learn the optimal Bayesian network with treewidth up ..."
Abstract
 Add to MetaCart
Abstract. Learning Bayesian networks with bounded treewidth has attracted much attention recently, because low treewidth allows exact inference to be performed efficiently. Some existing methods [12, 14] tackle the problem by using ktrees to learn the optimal Bayesian network with treewidth
Bayesian Network Classifiers
, 1997
"... Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with stateoftheart classifiers such as C4.5. This fact raises the question of whether a classifier with less restr ..."
Abstract

Cited by 788 (23 self)
 Add to MetaCart
restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly
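The naive Bayes classifier mentioned in this abstract rests on one assumption: given the class, every feature is independent, so the class score is the prior times a product of per-feature likelihoods. A minimal sketch for discrete features with Laplace-style smoothing (the function names and toy weather data are illustrative, not from the paper):

```python
from collections import Counter, defaultdict

def train_nb(X, y, alpha=1.0):
    # Count class frequencies and, per (class, feature index), value frequencies.
    priors = Counter(y)
    counts = defaultdict(Counter)
    for xs, c in zip(X, y):
        for i, v in enumerate(xs):
            counts[(c, i)][v] += 1
    return priors, counts, alpha

def predict_nb(model, xs):
    priors, counts, alpha = model
    n = sum(priors.values())
    best, best_p = None, -1.0
    for c, pc in priors.items():
        p = pc / n  # class prior P(c)
        for i, v in enumerate(xs):
            cnt = counts[(c, i)]
            # Smoothed estimate of P(feature_i = v | c); len(cnt) + 1 is a
            # rough stand-in for the support size, allowing unseen values.
            p *= (cnt[v] + alpha) / (sum(cnt.values()) + alpha * (len(cnt) + 1))
        if p > best_p:
            best, best_p = c, p
    return best

# Toy data: (outlook, temperature) -> play?
X = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "cool")]
y = ["no", "no", "yes", "yes"]
model = train_nb(X, y)
print(predict_nb(model, ("rain", "mild")))  # -> "yes"
```

The independence assumption is exactly what the paper's less restrictive network classifiers relax.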
Sparse Bayesian Learning and the Relevance Vector Machine
, 2001
"... This paper introduces a general Bayesian framework for obtaining sparse solutions to regression and classication tasks utilising models linear in the parameters. Although this framework is fully general, we illustrate our approach with a particular specialisation that we denote the `relevance vec ..."
Abstract

Cited by 958 (5 self)
 Add to MetaCart
vector machine' (RVM), a model of identical functional form to the popular and stateoftheart `support vector machine' (SVM). We demonstrate that by exploiting a probabilistic Bayesian learning framework, we can derive accurate prediction models which typically utilise dramatically fewer
Estimating Continuous Distributions in Bayesian Classifiers
 In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence
, 1995
"... When modeling a probability distribution with a Bayesian network, we are faced with the problem of how to handle continuous variables. Most previous work has either solved the problem by discretizing, or assumed that the data are generated by a single Gaussian. In this paper we abandon the normality ..."
Abstract

Cited by 489 (2 self)
 Add to MetaCart
When modeling a probability distribution with a Bayesian network, we are faced with the problem of how to handle continuous variables. Most previous work has either solved the problem by discretizing, or assumed that the data are generated by a single Gaussian. In this paper we abandon
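The contrast in this abstract is between a parametric class-conditional density (one fitted Gaussian) and abandoning the normality assumption; one common nonparametric alternative is a kernel density estimate, an average of Gaussian kernels centred on the training points. A sketch of both on a bimodal sample (the bandwidth and data are illustrative assumptions, not the paper's):

```python
import math

def gauss(x, mu, sigma):
    # Normal density N(x; mu, sigma^2)
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def single_gaussian_density(xs):
    # Parametric estimate: fit one Gaussian by sample mean and variance.
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    sigma = math.sqrt(var) or 1e-9
    return lambda x: gauss(x, mu, sigma)

def kernel_density(xs, h=0.5):
    # Nonparametric estimate: average of Gaussian kernels, one per data point.
    return lambda x: sum(gauss(x, xi, h) for xi in xs) / len(xs)

# A bimodal sample: one Gaussian smears its mass between the two modes.
sample = [0.0, 0.1, -0.1, 5.0, 5.1, 4.9]
f_param = single_gaussian_density(sample)
f_kde = kernel_density(sample)
print(f_kde(5.0) > f_param(5.0))  # -> True: the kernel estimate is higher at a mode
```

When the data are not unimodal, the single-Gaussian fit places its mode between the clusters, which is the failure the paper's approach avoids.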
A Bayesian method for the induction of probabilistic networks from data
 MACHINE LEARNING
, 1992
"... This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computerassisted hypothesis testing, automated scientific discovery, and automated construction of probabili ..."
Abstract

Cited by 1381 (32 self)
 Add to MetaCart
This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computerassisted hypothesis testing, automated scientific discovery, and automated construction
Learning Markov networks: maximum bounded treewidth graphs
 In Proceedings of the 12th ACM-SIAM Symposium on Discrete Algorithms
, 2001
Cited by 72 (6 self)
Abstract. Markov networks are a common class of graphical models used in machine learning. Such models use an undirected graph to capture dependency information among random variables in a joint probability distribution. Once one has chosen to use a Markov network model, one aims to choose the model that "best explains" the data that has been observed; this model can then be used to make predictions about future data. We show that the problem of learning a maximum likelihood Markov network given certain observed data can be reduced to the problem of identifying a maximum weight low-treewidth ...
Methods and Experiments With Bounded Treewidth Markov Networks
"... Markov trees generalize naturally to bounded treewidth Markov networks, on which exact computations can still be done efficiently. However, learning the maximum likelihood Markov network with treewidth greater than 1 is NPhard, so we discuss a few algorithms for approximating the optimal Markov n ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
Markov trees generalize naturally to bounded treewidth Markov networks, on which exact computations can still be done efficiently. However, learning the maximum likelihood Markov network with treewidth greater than 1 is NPhard, so we discuss a few algorithms for approximating the optimal Markov
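The treewidth-1 base case mentioned here is tractable: the classic Chow-Liu algorithm finds the maximum likelihood Markov tree as a maximum-weight spanning tree over pairwise empirical mutual information. A rough sketch using Kruskal's algorithm (variable names and the toy dataset are illustrative):

```python
from collections import Counter
from itertools import combinations
from math import log

def mutual_info(data, i, j):
    # Empirical mutual information between columns i and j of the data.
    n = len(data)
    pi, pj, pij = Counter(), Counter(), Counter()
    for row in data:
        pi[row[i]] += 1
        pj[row[j]] += 1
        pij[(row[i], row[j])] += 1
    return sum((c / n) * log((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in pij.items())

def chow_liu_edges(data, num_vars):
    # Maximum-weight spanning tree over MI edge weights (Kruskal + union-find).
    edges = sorted(((mutual_info(data, i, j), i, j)
                    for i, j in combinations(range(num_vars), 2)), reverse=True)
    parent = list(range(num_vars))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Variable 1 copies variable 0; variable 2 is unrelated noise.
data = [(0, 0, 0), (1, 1, 0), (0, 0, 1), (1, 1, 1), (1, 1, 0), (0, 0, 1)]
print(chow_liu_edges(data, 3))  # the strongly dependent pair (0, 1) is kept
```

For treewidth greater than 1, the NP-hardness result above is why the paper resorts to approximation algorithms.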
Learning probabilistic relational models
 In IJCAI
, 1999
"... A large portion of realworld data is stored in commercial relational database systems. In contrast, most statistical learning methods work only with "flat " data representations. Thus, to apply these methods, we are forced to convert our data into a flat form, thereby losing much ..."
Abstract

Cited by 619 (31 self)
 Add to MetaCart
objects. Although PRMs are significantly more expressive than standard models, such as Bayesian networks, we show how to extend wellknown statistical methods for learning Bayesian networks to learn these models. We describe both parameter estimation and structure learning — the automatic induction
Gaussian processes for machine learning
 in: Adaptive Computation and Machine Learning
, 2006
"... Abstract. We give a basic introduction to Gaussian Process regression models. We focus on understanding the role of the stochastic process and how it is used to define a distribution over functions. We present the simple equations for incorporating training data and examine how to learn the hyperpar ..."
Abstract

Cited by 631 (2 self)
 Add to MetaCart
Abstract. We give a basic introduction to Gaussian Process regression models. We focus on understanding the role of the stochastic process and how it is used to define a distribution over functions. We present the simple equations for incorporating training data and examine how to learn
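The "simple equations for incorporating training data" are the GP posterior mean and covariance under a Gaussian noise model. A minimal NumPy sketch with a squared-exponential kernel (the kernel choice, lengthscale, and noise level are assumptions for illustration):

```python
import numpy as np

def rbf(a, b, ell=1.0):
    # Squared-exponential kernel: k(a, b) = exp(-(a - b)^2 / (2 ell^2))
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

def gp_posterior(X, y, Xstar, noise=1e-2, ell=1.0):
    # Standard GP regression equations:
    #   mean = K* (K + s^2 I)^-1 y
    #   cov  = K** - K* (K + s^2 I)^-1 K*^T
    K = rbf(X, X, ell) + noise * np.eye(len(X))
    Ks = rbf(Xstar, X, ell)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    cov = rbf(Xstar, Xstar, ell) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov

X = np.array([0.0, 1.0, 2.0])
y = np.sin(X)
mu, cov = gp_posterior(X, y, np.array([1.0]))
print(float(mu[0]))  # close to sin(1) ~ 0.841, since x* = 1 is a training input
```

With small observation noise the posterior mean interpolates the data; the hyperparameters (`ell`, `noise`) are what the reference above learns from the marginal likelihood.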
Ensemble Methods in Machine Learning
 MULTIPLE CLASSIFIER SYSTEMS, LNCS 1857
, 2000
"... Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a (weighted) vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include errorcorrecting output coding, Bagging, and boostin ..."
Abstract

Cited by 607 (3 self)
 Add to MetaCart
Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a (weighted) vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include errorcorrecting output coding, Bagging
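The weighted vote that this abstract describes is simple to state: each base classifier casts a vote for a label, votes are summed with per-classifier weights, and the heaviest label wins. A minimal sketch (the stump-like classifiers and weights are hypothetical):

```python
from collections import Counter

def weighted_vote(classifiers, weights, x):
    # Tally weighted votes; the label with the largest total wins.
    tally = Counter()
    for clf, w in zip(classifiers, weights):
        tally[clf(x)] += w
    return tally.most_common(1)[0][0]

# Three toy threshold classifiers on a scalar input.
clfs = [lambda x: "pos" if x > 0 else "neg",
        lambda x: "pos" if x > 1 else "neg",
        lambda x: "pos" if x > -1 else "neg"]
print(weighted_vote(clfs, [1.0, 1.0, 1.0], 0.5))  # -> "pos" (two of three agree)
```

Methods such as Bagging use uniform weights over resampled models, while boosting learns the weights; both fit this voting scheme.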