Results 11-20 of 22
Using probability trees to compute marginals with imprecise probabilities
INTERNATIONAL JOURNAL OF APPROXIMATE REASONING, 2002
Cited by 22 (2 self)
Abstract:
This paper presents an approximate algorithm to obtain a posteriori intervals of probability, when available information is also given with intervals. The algorithm uses probability trees as a means of representing and computing with the convex sets of
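Setting aside the paper's probability-tree machinery, the core idea of interval-valued inputs producing interval-valued posteriors can be illustrated for a single binary hypothesis: the posterior is monotone in the prior, so an interval prior maps to a posterior interval by evaluating the endpoints. A minimal sketch, with all numbers hypothetical and no claim to be the paper's algorithm:

```python
def posterior(p, like_h, like_not_h):
    """Bayes' rule for a binary hypothesis H with prior P(H) = p."""
    num = p * like_h
    return num / (num + (1.0 - p) * like_not_h)

def posterior_interval(prior_lo, prior_hi, like_h, like_not_h):
    """Posterior bounds under an interval prior [prior_lo, prior_hi].

    The posterior is monotone increasing in the prior (for positive
    likelihoods), so the prior endpoints map to the posterior endpoints.
    """
    return (posterior(prior_lo, like_h, like_not_h),
            posterior(prior_hi, like_h, like_not_h))

# Hypothetical numbers: prior P(H) known only to lie in [0.2, 0.4],
# likelihoods P(e|H) = 0.9 and P(e|not H) = 0.1.
lo, hi = posterior_interval(0.2, 0.4, 0.9, 0.1)
```

For a full network the extremes lie at vertices of a convex set of distributions rather than at per-parameter endpoints, and approximating that computation is what the paper's probability trees are for.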
Aggregating Learned Probabilistic Beliefs, 2001
Cited by 14 (0 self)
Abstract:
We consider the task of aggregating the beliefs of several experts. We assume that these beliefs are represented as probability distributions. We argue that the evaluation of any aggregation technique depends on the semantic context of this task. We propose a framework in which we assume that nature generates samples from a 'true' distribution and different experts form their beliefs based on the subsets of the data they have a chance to observe. Naturally, the optimal aggregate distribution would be the one learned from the combined sample sets. Such a formulation leads to a natural way to measure the accuracy of the aggregation mechanism. We show that the well-known aggregation operator LinOP is ideally suited for that task. We propose a LinOP-based learning algorithm, inspired by the techniques developed for Bayesian learning, which aggregates the experts' distributions represented as Bayesian networks. We show experimentally that this algorithm performs well in practice.
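LinOP, the operator the abstract singles out, is the linear opinion pool: a weighted average of the experts' distributions. A minimal sketch in Python (the expert distributions and weights are made up; tying the weights to how much data each expert observed follows the abstract's intuition):

```python
def linop(dists, weights=None):
    """Linear opinion pool: a weighted average of expert distributions.

    Each distribution is a dict mapping outcomes to probabilities;
    weights default to uniform.
    """
    if weights is None:
        weights = [1.0 / len(dists)] * len(dists)
    pooled = {}
    for d, w in zip(dists, weights):
        for outcome, p in d.items():
            pooled[outcome] = pooled.get(outcome, 0.0) + w * p
    return pooled

# Two hypothetical experts; the second saw three times as much data,
# so it gets three times the weight.
p1 = {"rain": 0.7, "sun": 0.3}
p2 = {"rain": 0.2, "sun": 0.8}
pooled = linop([p1, p2], weights=[0.25, 0.75])
```

Because each pooled value is a convex combination of probabilities, the result is automatically a valid distribution.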
Learning Accurate Belief Nets
Wei Zhou, Department of Computing Science, University of Alberta, 1999
Cited by 2 (2 self)
Abstract:
Bayesian belief nets (BNs) are typically used to answer a range of queries, where each answer requires computing the probability of a particular hypothesis given some specified evidence. An effective BN-learning algorithm should, therefore, learn an accurate BN, which returns the correct answers to these specific queries. This report first motivates this objective, arguing that it makes effective use of the data that is encountered, and that it can be more appropriate than the typical "maximum likelihood" algorithms for learning BNs. We then describe several different learning situations, which differ based on how the query information is presented. Based on our analysis of the inherent complexity of these tasks, we define three algorithms for learning the best CP-tables for a given BN-structure, and then demonstrate empirically that these algorithms work effectively.
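For contrast, the "maximum likelihood" baseline the report argues against simply estimates each CP-table by counting co-occurrences in the data. A minimal sketch (the variable names and data are hypothetical, and this is the baseline, not the report's query-directed algorithms):

```python
from collections import Counter

def ml_cpt(samples, child, parents):
    """Maximum-likelihood CP-table: P(child | parents) by counting."""
    joint, parent_counts = Counter(), Counter()
    for row in samples:
        pa = tuple(row[p] for p in parents)
        joint[(pa, row[child])] += 1
        parent_counts[pa] += 1
    return {(pa, c): n / parent_counts[pa] for (pa, c), n in joint.items()}

# Hypothetical data: binary variable B with a single parent A.
data = [{"A": 0, "B": 0}, {"A": 0, "B": 1}, {"A": 0, "B": 1}, {"A": 1, "B": 1}]
cpt = ml_cpt(data, child="B", parents=["A"])
```

Counting treats every data point alike; the report's point is that when the query workload is known, parameters that matter for those queries deserve more of the data's weight.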
A New Hybrid Method for Bayesian Network Learning With Dependency Constraints
Cited by 2 (1 self)
Abstract:
A Bayes net has qualitative and quantitative aspects: the qualitative aspect is its graphical structure, which corresponds to correlations among the variables in the Bayes net; the quantitative aspects are the net parameters. This paper develops a hybrid criterion for learning Bayes net structures that is based on both aspects. We combine model-selection criteria measuring data fit with correlation information from statistical tests: given a sample d, search for a structure G that maximizes score(G, d) over the set of structures G that satisfy the dependencies detected in d. We rely on the statistical test only to accept conditional dependencies, not conditional independencies. We show how to adapt local search algorithms to accommodate the observed dependencies. Simulation studies with GES search and the BDeu/BIC scores provide evidence that the additional dependency information leads to Bayes nets that better fit the target model in distribution and structure.
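The criterion "maximize score(G, d) over structures satisfying the detected dependencies" amounts to constrained maximization. A toy sketch, in which the candidate structures, their scores, and the adjacency test standing in for "satisfies a dependency" are all simplifications of the paper's setup:

```python
def constrained_search(candidates, score, required_deps):
    """Return the highest-scoring structure among those that satisfy
    every detected dependency (toy test: the pair must be adjacent)."""
    def satisfies(edges):
        adjacent = {frozenset(e) for e in edges}
        return all(frozenset(dep) in adjacent for dep in required_deps)
    feasible = [g for g in candidates if satisfies(g)]
    return max(feasible, key=score) if feasible else None

# Hypothetical candidate edge sets over X, Y, Z with made-up scores.
# The empty graph scores best on fit, but violates the detected X-Y dependency.
candidates = [(), (("X", "Y"),), (("X", "Y"), ("Y", "Z"))]
scores = {(): 0.0, (("X", "Y"),): -1.0, (("X", "Y"), ("Y", "Z")): -2.0}
best = constrained_search(candidates, lambda g: scores[g], [("X", "Y")])
```

The example shows the intended trade-off: the constraint rules out the best-scoring but dependency-violating structure, and the score then picks the sparser of the remaining feasible ones.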
unknown title
Abstract:
A serious problem in learning probabilistic models is the presence of hidden variables. These variables are not observed, yet interact with several of the observed variables. As such, they induce seemingly complex dependencies among the latter. In recent years, much attention has been devoted to the development of algorithms for learning parameters, and in some cases structure, in the presence of hidden variables. In this paper, we address the related problem of detecting hidden variables that interact with the observed variables. This problem is of interest both for improving our understanding of the domain and as a preliminary step that guides the learning procedure towards promising models. A very natural approach is to search for "structural signatures" of hidden variables: substructures in the learned network that tend to suggest the presence of a hidden variable. We make this basic idea concrete, and show how to integrate it with structure-search algorithms. We evaluate this method on several synthetic and real-life datasets, and show that it performs surprisingly well.
Probabilistic detection of short events, with application to critical care monitoring
Abstract:
We describe an application of probabilistic modeling and inference technology to the problem of analyzing sensor data in the setting of an intensive care unit (ICU). In particular, we consider the arterial-line blood pressure sensor, which is subject to frequent data artifacts that cause false alarms in the ICU and make the raw data almost useless for automated decision making. The problem is complicated by the fact that the sensor data are acquired at fixed intervals, whereas the events causing data artifacts may occur at any time and have durations that may be significantly shorter than the data collection interval. We show that careful modeling of the sensor, combined with a general technique for detecting subinterval events and estimating their duration, enables effective detection of artifacts and accurate estimation of the underlying blood pressure values.
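The artifact-detection idea can be caricatured as a two-component mixture: each reading comes either from the true blood pressure or from an artifact process, and Bayes' rule gives the artifact posterior. A minimal sketch in which all distributions and parameters are invented, and which ignores the subinterval-event handling that the paper treats carefully:

```python
import math

def gauss(x, mu, sigma):
    """Normal density, used here for both mixture components."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def artifact_posterior(reading, true_mu=90.0, true_sigma=5.0,
                       artifact_mu=0.0, artifact_sigma=30.0,
                       p_artifact=0.05):
    """P(artifact | reading) under a two-component sensor model.

    p_artifact stands in for the chance that an artifact event, possibly
    much shorter than the sampling interval, is active at the sample instant.
    """
    a = p_artifact * gauss(reading, artifact_mu, artifact_sigma)
    t = (1.0 - p_artifact) * gauss(reading, true_mu, true_sigma)
    return a / (a + t)

# A near-zero reading (e.g. a line flush) vs. a plausible pressure reading.
flagged = artifact_posterior(5.0)
normal = artifact_posterior(92.0)
```

Even this crude model separates the two cases sharply; the paper's contribution is doing this when events start and end between samples, so the duration itself must be inferred.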
How To Use catnet Package, 2010
Abstract:
The catnet package implements a categorical Bayesian network framework in R. Bayesian networks are graphical statistical models that represent directed dependencies between random variables and thus are able to model causal relationships among these variables. A Bayesian network has two components: a directed acyclic graph (DAG) whose nodes are the variables of interest, and a probability structure given as ...
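The two components just described, a DAG plus per-node conditional probability tables, can be sketched generically (in Python rather than R, with a made-up network; this is not catnet's API):

```python
# DAG as parent lists plus a CPT per node; the network and all numbers
# are hypothetical.
parents = {"Rain": [], "Sprinkler": [], "WetGrass": ["Rain", "Sprinkler"]}
cpt = {
    "Rain":      {(): {True: 0.2, False: 0.8}},
    "Sprinkler": {(): {True: 0.1, False: 0.9}},
    "WetGrass":  {(True, True):   {True: 0.99, False: 0.01},
                  (True, False):  {True: 0.9,  False: 0.1},
                  (False, True):  {True: 0.8,  False: 0.2},
                  (False, False): {True: 0.0,  False: 1.0}},
}

def joint_prob(assignment):
    """P(assignment) factorizes over the DAG: prod_i P(x_i | parents(x_i))."""
    p = 1.0
    for node, pa in parents.items():
        key = tuple(assignment[q] for q in pa)
        p *= cpt[node][key][assignment[node]]
    return p

p = joint_prob({"Rain": True, "Sprinkler": False, "WetGrass": True})
```

The factorized product is the whole point of the DAG component: the joint over all variables never has to be stored explicitly.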
Ranking by Dependence—A Fair Criteria
Abstract:
Estimating the dependences between random variables, and ranking them accordingly, is a prevalent problem in machine learning. Pursuing frequentist and information-theoretic approaches, we first show that the p-value and the mutual information can fail even in simplistic situations. We then propose two conditions for regularizing an estimator of dependence, which leads to a simple yet effective new measure. We discuss its advantages and compare it to well-established model-selection criteria. Apart from that, we derive a simple constraint for regularizing parameter estimates in a graphical model. This results in an analytical approximation for the optimal value of the equivalent sample size, which agrees very well with the more involved Bayesian approach in our experiments.
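The abstract does not say how mutual information fails, but one well-known failure mode of the plug-in estimate illustrates the concern: two genuinely independent high-arity variables still get a strictly positive score on small samples, so ranking by raw MI rewards arity rather than dependence. A sketch:

```python
import math
import random
from collections import Counter

def plugin_mi(xs, ys):
    """Plug-in (empirical) mutual information of paired discrete samples."""
    n = len(xs)
    cx, cy, cxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), nxy in cxy.items():
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), in nats
        mi += (nxy / n) * math.log(nxy * n / (cx[x] * cy[y]))
    return mi

rng = random.Random(0)
# Two genuinely independent 8-valued variables, only 50 samples:
xs = [rng.randrange(8) for _ in range(50)]
ys = [rng.randrange(8) for _ in range(50)]
mi = plugin_mi(xs, ys)  # strictly positive despite independence
```

The upward bias grows with the number of cells relative to the sample size, which is exactly the arity effect that distorts rankings and that a regularized measure must correct for.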
Probabilistic modeling of sensor artifacts in critical care
Abstract:
We describe an application of probabilistic modeling and inference technology to the problem of analyzing sensor data in the setting of an intensive care unit (ICU). In particular, we consider the arterial-line blood pressure sensor, which is subject to frequent data artifacts that cause false alarms in the ICU and make the raw data almost useless for automated decision making. The problem is complicated by the fact that the sensor data are acquired at fixed intervals, whereas the events causing data artifacts may occur at any time and have durations that may be significantly shorter than the data collection interval. We show that careful modeling of the sensor, combined with a general technique for detecting subinterval events and estimating their duration, enables effective detection of artifacts and accurate estimation of the underlying blood pressure values.