Results 11–20 of 28
Using probability trees to compute marginals with imprecise probabilities
International Journal of Approximate Reasoning, 2002
Cited by 26 (2 self)
Abstract: This paper presents an approximate algorithm to obtain a posteriori intervals of probability when the available information is also given with intervals. The algorithm uses probability trees as a means of representing and computing with the convex sets of …
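The interval-valued posterior the abstract describes can be illustrated with a much simpler device than the paper's probability trees: when the prior and likelihoods are intervals, Bayes' rule is monotone in each quantity, so posterior bounds can be read off the vertices of the (here box-shaped) credal set. A minimal sketch — the function name and the box-shaped assumption are illustrative, not the paper's algorithm:

```python
from itertools import product

def posterior_interval(prior, like_h, like_not_h):
    """Bounds on P(H | E) when P(H), P(E | H), P(E | ~H) are given as
    (low, high) intervals. Bayes' rule is monotone in each quantity, so
    the extrema lie at the vertices of the box-shaped credal set."""
    values = []
    for p, lh, ln in product(prior, like_h, like_not_h):
        den = p * lh + (1 - p) * ln
        if den > 0:
            values.append(p * lh / den)
    return min(values), max(values)

lo, hi = posterior_interval((0.2, 0.4), (0.7, 0.9), (0.1, 0.2))
```

Vertex enumeration is exponential in the number of interval-valued inputs, which is exactly why the paper resorts to probability trees and approximation for realistic networks.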
Learning the dimensionality of hidden variables
In UAI ’01, 2001
Cited by 25 (3 self)
Abstract: A serious problem in learning probabilistic models is the presence of hidden variables. These variables are not observed, yet interact with several of the observed variables. Detecting hidden variables poses two problems: determining the relations to other variables in the model, and determining the number of states of the hidden variable. In this paper, we address the latter problem in the context of Bayesian networks. We describe an approach that uses score-based agglomerative state-clustering. As we show, this approach allows us to efficiently evaluate models with a range of cardinalities for the hidden variable. We show how to extend this procedure to deal with multiple interacting hidden variables. We demonstrate the effectiveness of this approach by evaluating it on synthetic and real-life data. We show that our approach learns models with hidden variables that generalize better and have better structure than previous approaches.
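The core idea — merge states greedily and keep the cardinality where the score peaks — can be shown on a multinomial toy rather than a full Bayesian network. The sketch below scores clusterings with BIC under an assumed uniform distribution over the states inside each cluster; it is an illustration of score-based agglomeration, not the paper's algorithm:

```python
import math
from collections import Counter

def bic(clusters, n):
    """BIC of a multinomial whose states are grouped into clusters,
    assuming a uniform distribution over the states inside each cluster."""
    ll = sum(sum(c.values()) * (math.log(sum(c.values()) / n) - math.log(len(c)))
             for c in clusters)
    return ll - 0.5 * (len(clusters) - 1) * math.log(n)

def choose_cardinality(data):
    """Greedy agglomeration: merge the pair of state clusters that most
    improves BIC; stop when no merge helps. Returns the cluster count."""
    n = len(data)
    clusters = [Counter({s: c}) for s, c in Counter(data).items()]
    best = bic(clusters, n)
    while len(clusters) > 1:
        candidates = [
            [c for k, c in enumerate(clusters) if k not in (i, j)]
            + [clusters[i] + clusters[j]]
            for i in range(len(clusters)) for j in range(i + 1, len(clusters))
        ]
        best_merge = max(candidates, key=lambda cl: bic(cl, n))
        score = bic(best_merge, n)
        if score <= best:
            break
        best, clusters = score, best_merge
    return len(clusters)
```

On data with two equally frequent symbols and one rare one, the two frequent states merge (the likelihood loss is outweighed by the parameter saving) and the procedure settles on cardinality 2.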
Aggregating Learned Probabilistic Beliefs
2001
Cited by 15 (0 self)
Abstract: We consider the task of aggregating the beliefs of several experts. We assume that these beliefs are represented as probability distributions. We argue that the evaluation of any aggregation technique depends on the semantic context of this task. We propose a framework in which we assume that nature generates samples from a "true" distribution and different experts form their beliefs based on the subsets of the data they have a chance to observe. Naturally, the optimal aggregate distribution would be the one learned from the combined sample sets. Such a formulation leads to a natural way to measure the accuracy of the aggregation mechanism. We show that the well-known aggregation operator LinOP is ideally suited for this task. We propose a LinOP-based learning algorithm, inspired by the techniques developed for Bayesian learning, which aggregates the experts' distributions represented as Bayesian networks. We show experimentally that this algorithm performs well in practice.
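LinOP itself is just a weighted mixture of the experts' distributions, and the paper's criterion has a transparent special case: when each expert reports the empirical distribution of its own sample and the weights are the sample sizes, the pool equals the distribution learned from the combined samples. A sketch (names are illustrative):

```python
def linop(dists, weights):
    """Linear opinion pool: a weight-normalized mixture of distributions,
    each given as a dict mapping outcome -> probability."""
    z = sum(weights)
    outcomes = set().union(*dists)
    return {o: sum(w * d.get(o, 0.0) for d, w in zip(dists, weights)) / z
            for o in outcomes}

# Expert 1 saw H,H,T; expert 2 saw T. Size-weighted LinOP reproduces
# the empirical distribution of the pooled sample H,H,T,T.
pool = linop([{"H": 2 / 3, "T": 1 / 3}, {"T": 1.0}], [3, 1])
```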
A New Hybrid Method for Bayesian Network Learning With Dependency Constraints
Cited by 3 (2 self)
Abstract: A Bayes net has qualitative and quantitative aspects: the qualitative aspect is its graphical structure, which corresponds to correlations among the variables in the Bayes net; the quantitative aspects are the net parameters. This paper develops a hybrid criterion for learning Bayes net structures that is based on both aspects. We combine model-selection criteria measuring data fit with correlation information from statistical tests: given a sample d, search for a structure G that maximizes score(G, d) over the set of structures G that satisfy the dependencies detected in d. We rely on the statistical test only to accept conditional dependencies, not conditional independencies. We show how to adapt local search algorithms to accommodate the observed dependencies. Simulation studies with GES search and the BDeu/BIC scores provide evidence that the additional dependency information leads to Bayes nets that better fit the target model in distribution and structure.
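The hybrid criterion — maximize score(G, d) subject to the dependencies detected in d — can be sketched exhaustively for tiny networks. Here the detected dependencies are passed in as required skeleton edges and the score is decomposable BIC; the GES search and the statistical-testing machinery of the paper are omitted:

```python
import itertools
import math
from collections import Counter

def family_bic(data, child, parents):
    """Decomposable BIC contribution of one node given its parent set."""
    n = len(data)
    joint = Counter((tuple(r[p] for p in parents), r[child]) for r in data)
    marg = Counter(tuple(r[p] for p in parents) for r in data)
    ll = sum(c * math.log(c / marg[pa]) for (pa, _), c in joint.items())
    r_child = len({r[child] for r in data})
    return ll - 0.5 * (r_child - 1) * len(marg) * math.log(n)

def hybrid_search(data, nvars, required_edges):
    """Exhaustive toy version of the hybrid criterion: among all DAGs whose
    skeleton contains every required (dependency-detected) pair, return
    the BIC-maximal one as a node -> parent-tuple dict."""
    required = {frozenset(e) for e in required_edges}
    best_score, best_dag = None, None
    for order in itertools.permutations(range(nvars)):
        # every node may take any subset of its predecessors as parents
        choices = [
            [ps for k in range(i + 1)
             for ps in itertools.combinations(order[:i], k)]
            for i in range(nvars)
        ]
        for parent_sets in itertools.product(*choices):
            dag = {order[i]: parent_sets[i] for i in range(nvars)}
            skeleton = {frozenset({c, p}) for c, ps in dag.items() for p in ps}
            if not required <= skeleton:
                continue
            score = sum(family_bic(data, c, ps) for c, ps in dag.items())
            if best_score is None or score > best_score:
                best_score, best_dag = score, dag
    return best_dag
```

On data where variable 0 determines variable 1 and variable 2 is independent noise, the constrained search keeps exactly the required 0–1 adjacency and adds nothing else.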
Learning Accurate Belief Nets
Wei Zhou, Department of Computing Science, University of Alberta, 1999
Cited by 2 (2 self)
Abstract: Bayesian belief nets (BNs) are typically used to answer a range of queries, where each answer requires computing the probability of a particular hypothesis given some specified evidence. An effective BN-learning algorithm should, therefore, learn an accurate BN, which returns the correct answers to these specific queries. This report first motivates this objective, arguing that it makes effective use of the data that is encountered and that it can be more appropriate than the typical "maximum likelihood" algorithms for learning BNs. We then describe several different learning situations, which differ based on how the query information is presented. Based on our analysis of the inherent complexity of these tasks, we define three algorithms for learning the best CP-tables for a given BN-structure, and then demonstrate empirically that these algorithms work effectively.
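The report's objective — judge a learned net by the answers it returns rather than by raw likelihood — can be shown on the smallest possible case: a two-node net A → B with CP-tables fit by counting and evaluated on a single target query. Everything here is an illustrative stand-in, not the report's three algorithms:

```python
from collections import Counter

def fit_cpt(data):
    """Maximum-likelihood CP-tables for the two-node net A -> B,
    estimated from (a, b) observation pairs."""
    a_counts = Counter(a for a, _ in data)
    ab_counts = Counter(data)
    p_a1 = a_counts[1] / len(data)
    p_b1_given_a = {a: ab_counts[(a, 1)] / a_counts[a] for a in a_counts}
    return p_a1, p_b1_given_a

def query_error(cpt, true_answer):
    """Error on the single target query P(B=1 | A=1)."""
    _, p_b1_given_a = cpt
    return abs(p_b1_given_a[1] - true_answer)

data = [(1, 1)] * 30 + [(1, 0)] * 10 + [(0, 0)] * 50 + [(0, 1)] * 10
cpt = fit_cpt(data)
```

With a known target answer, `query_error` is the quantity a query-aware learner would minimize directly, where maximum likelihood only minimizes it indirectly.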
Annealed Importance Sampling for Structure Learning in Bayesian Networks
Cited by 1 (0 self)
Abstract: We present a new sampling approach to Bayesian learning of Bayesian network structure. Like some earlier sampling methods, we sample linear orders on nodes rather than directed acyclic graphs (DAGs). The key difference is that we replace the usual Markov chain Monte Carlo (MCMC) method with the method of annealed importance sampling (AIS). We show that AIS is not only competitive with MCMC in exploring the posterior, but also superior to MCMC in two ways: it enables easy and efficient parallelization, due to the independence of the samples, and lower-bounding of the marginal likelihood of the model with good probabilistic guarantees. We also provide a principled way to correct the bias due to order-based sampling, by implementing a fast algorithm for counting the linear extensions of a given partial order.
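AIS itself is easy to state away from the structure-learning setting: anneal from an easy distribution to the target through tempered intermediates, accumulate importance weights, and average. A generic finite-space sketch with a uniform base distribution and Metropolis transitions — none of this is the paper's order-space implementation:

```python
import math
import random

def ais_log_z(logf, states, n_runs=2000, n_temps=20, seed=0):
    """Annealed importance sampling estimate of log Z, where
    Z = sum_x exp(logf(x)), annealing from the uniform distribution
    over `states` along the geometric path f_b(x) = exp(b * logf(x))."""
    rng = random.Random(seed)
    betas = [t / n_temps for t in range(n_temps + 1)]
    log_w = []
    for _ in range(n_runs):
        x = rng.choice(states)                 # exact sample from the b=0 target
        lw = 0.0
        for b_prev, b in zip(betas, betas[1:]):
            lw += (b - b_prev) * logf(x)       # importance-weight increment
            y = rng.choice(states)             # one Metropolis step at temperature b
            if rng.random() < math.exp(min(0.0, b * (logf(y) - logf(x)))):
                x = y
        log_w.append(lw)
    m = max(log_w)
    return math.log(len(states)) + m + \
        math.log(sum(math.exp(w - m) for w in log_w) / n_runs)
```

The runs are mutually independent, so they parallelize trivially, and since the weight average is unbiased for Z, Jensen's inequality makes the log of the estimate a stochastic lower bound on log Z — the two advantages the abstract cites.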
Ranking by Dependence—A Fair Criteria
Abstract: Estimating the dependences between random variables, and ranking them accordingly, is a prevalent problem in machine learning. Pursuing frequentist and information-theoretic approaches, we first show that the p-value and the mutual information can fail even in simplistic situations. We then propose two conditions for regularizing an estimator of dependence, which lead to a simple yet effective new measure. We discuss its advantages and compare it to well-established model-selection criteria. Apart from that, we derive a simple constraint for regularizing parameter estimates in a graphical model. This results in an analytical approximation for the optimal value of the equivalent sample size, which agrees very well with the more involved Bayesian approach in our experiments.
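The claimed failure of mutual information is easy to reproduce: the plug-in estimate is non-negative and, on small samples, almost surely strictly positive even when the variables are independent, so raw MI over-ranks high-cardinality pairs. A short demonstration of the plug-in estimator only; the paper's regularized measure is not reproduced here:

```python
import math
import random
from collections import Counter

def plugin_mi(xs, ys):
    """Plug-in (maximum-likelihood) estimate of mutual information, in nats."""
    n = len(xs)
    cx, cy, cxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * math.log((c / n) / ((cx[a] / n) * (cy[b] / n)))
               for (a, b), c in cxy.items())

rng = random.Random(1)
xs = [rng.randrange(4) for _ in range(30)]
ys = [rng.randrange(4) for _ in range(30)]   # independent of xs by construction
mi = plugin_mi(xs, ys)                       # almost surely > 0 despite independence
```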
How To Use catnet Package
2010
Abstract: The catnet package implements a categorical Bayesian network framework in R. Bayesian networks are graphical statistical models that represent directed dependencies between random variables and are thus able to model causal relationships among these variables. A Bayesian network has two components: a directed acyclic graph (DAG) whose nodes are the variables of interest, and a probability structure given as …
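The two components the vignette names can be written down directly: a DAG as parent lists, plus a conditional probability table per node; the joint distribution then factorizes over families. A Python stand-in with made-up numbers — this is not catnet's R API:

```python
# DAG: each node maps to its parent list.
dag = {"rain": [], "sprinkler": ["rain"], "wet": ["rain", "sprinkler"]}

# CPTs: parent-value tuple -> [P(node=0 | parents), P(node=1 | parents)].
cpt = {
    "rain": {(): [0.8, 0.2]},
    "sprinkler": {(0,): [0.5, 0.5], (1,): [0.9, 0.1]},
    "wet": {(0, 0): [1.0, 0.0], (0, 1): [0.1, 0.9],
            (1, 0): [0.2, 0.8], (1, 1): [0.01, 0.99]},
}

def joint(assign):
    """P(assignment) as the product of each node's local conditional."""
    prob = 1.0
    for var, parents in dag.items():
        key = tuple(assign[q] for q in parents)
        prob *= cpt[var][key][assign[var]]
    return prob
```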
Probabilistic detection of short events, with application to critical care monitoring
Abstract: We describe an application of probabilistic modeling and inference technology to the problem of analyzing sensor data in the setting of an intensive care unit (ICU). In particular, we consider the arterial-line blood pressure sensor, which is subject to frequent data artifacts that cause false alarms in the ICU and make the raw data almost useless for automated decision making. The problem is complicated by the fact that the sensor data are acquired at fixed intervals, whereas the events causing data artifacts may occur at any time and have durations that may be significantly shorter than the data collection interval. We show that careful modeling of the sensor, combined with a general technique for detecting subinterval events and estimating their duration, enables effective detection of artifacts and accurate estimation of the underlying blood pressure values.
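A deterministic caricature of the detection step: flag readings that deviate sharply from a local median, the kind of short-event evidence the paper's probabilistic sensor model formalizes. The window size, threshold, and the rule itself are illustrative, not the paper's model:

```python
def detect_artifacts(readings, window=5, thresh=20.0):
    """Flag readings that deviate from a local median by more than
    `thresh` -- a stand-in for the model-based artifact detector."""
    flags = []
    for i, r in enumerate(readings):
        lo = max(0, i - window // 2)
        neighborhood = sorted(readings[lo:lo + window])
        median = neighborhood[len(neighborhood) // 2]
        flags.append(abs(r - median) > thresh)
    return flags

# A zero-drop artifact (index 3) inside an otherwise stable pressure trace.
flags = detect_artifacts([80, 81, 82, 0, 81, 80, 79])
```

Such a filter can only mark whole samples; the paper's contribution is precisely that it also estimates the duration of events shorter than the sampling interval, which no per-sample rule can do.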
Probabilistic modeling of sensor artifacts in critical care
Abstract: We describe an application of probabilistic modeling and inference technology to the problem of analyzing sensor data in the setting of an intensive care unit (ICU). In particular, we consider the arterial-line blood pressure sensor, which is subject to frequent data artifacts that cause false alarms in the ICU and make the raw data almost useless for automated decision making. The problem is complicated by the fact that the sensor data are acquired at fixed intervals, whereas the events causing data artifacts may occur at any time and have durations that may be significantly shorter than the data collection interval. We show that careful modeling of the sensor, combined with a general technique for detecting subinterval events and estimating their duration, enables effective detection of artifacts and accurate estimation of the underlying blood pressure values.