Results 1–6 of 6
Foundations for Bayesian networks, 2001
Cited by 11 (7 self)
Bayesian networks are normally given one of two types of foundations: they are either treated purely formally as an abstract way of representing probability functions, or they are interpreted, with some causal interpretation given to the graph in a network and some standard interpretation of probability given to the probabilities specified in the network. In this chapter I argue that current foundations are problematic, and put forward new foundations which involve aspects of both the interpreted and the formal approaches. One standard approach is to interpret a Bayesian network objectively: the graph in a Bayesian network represents causality in the world and the specified probabilities are objective, empirical probabilities. Such an interpretation founders when the Bayesian network independence assumption (often called the causal Markov condition) fails to hold. In §2 I catalogue the occasions when the independence assumption fails, and show that such failures are pervasive. Next, in §3, I show that even where the independence assumption does hold objectively, an agent’s causal knowledge is unlikely to satisfy the assumption with respect to her subjective probabilities, and that slight differences between an agent’s subjective Bayesian network and an objective Bayesian network can lead to large differences between probability distributions determined by these networks. To overcome these difficulties I put forward logical Bayesian foundations in §5. I show that if the graph and probability specification in a Bayesian network are thought of as an agent’s background knowledge, then the agent is most rational if she adopts the probability distribution determined by the
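The independence assumption discussed in this abstract can be made concrete with a small sketch of my own (the graph and numbers are illustrative, not the chapter's): for a graph C → A, C → B, the causal Markov condition says A and B are independent given their common cause C, so the joint distribution factorizes into the network's local probability tables.

```python
# Illustrative sketch: the Bayesian-network independence assumption
# (causal Markov condition) for a graph C -> A, C -> B implies the
# factorization P(a, b, c) = P(c) * P(a | c) * P(b | c).
from itertools import product

p_c = {0: 0.7, 1: 0.3}                       # P(C)
p_a_given_c = {0: {0: 0.9, 1: 0.1},          # P(A | C)
               1: {0: 0.2, 1: 0.8}}
p_b_given_c = {0: {0: 0.6, 1: 0.4},          # P(B | C)
               1: {0: 0.1, 1: 0.9}}

def joint(pc, pa, pb):
    """Joint distribution determined by the network's factorization."""
    return {(a, b, c): pc[c] * pa[c][a] * pb[c][b]
            for a, b, c in product((0, 1), repeat=3)}

p = joint(p_c, p_a_given_c, p_b_given_c)
assert abs(sum(p.values()) - 1.0) < 1e-12    # a valid distribution

# The factorization enforces A ⊥ B | C: P(a, b | c) = P(a | c) P(b | c).
for a, b, c in product((0, 1), repeat=3):
    p_ab_given_c = p[(a, b, c)] / p_c[c]
    assert abs(p_ab_given_c - p_a_given_c[c][a] * p_b_given_c[c][b]) < 1e-12
```

When the Markov condition fails in the world, no choice of local tables makes this factorized joint match the true distribution, which is the difficulty the abstract catalogues.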
A Probabilistic Approach to Diagnosis
Proceedings of the Eleventh International Workshop on Principles of Diagnosis (DX00), 2000
Cited by 4 (1 self)
This paper addresses the foundations of diagnostic reasoning, in particular the viability of a probabilistic approach. One might be reluctant to adopt such an approach for one of two reasons: one may suppose that the probabilistic approach is inappropriate, or that it is impractical to implement. I shall attempt to overcome any such doubts and to argue that, on the contrary, the probabilistic method is extremely promising.
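As a minimal illustration of the probabilistic method the abstract defends (the fault names and probabilities below are hypothetical, not from the paper), Bayes' theorem ranks candidate diagnoses by their posterior probability given an observed symptom:

```python
# Illustrative sketch of probabilistic diagnosis: combine prior fault
# probabilities with symptom likelihoods via Bayes' theorem.
priors = {"pump_failure": 0.01, "sensor_drift": 0.05, "no_fault": 0.94}
# P(symptom observed | fault) -- hypothetical values for illustration
likelihood = {"pump_failure": 0.95, "sensor_drift": 0.60, "no_fault": 0.02}

# P(symptom) = sum over faults of P(symptom | fault) * P(fault)
evidence = sum(priors[f] * likelihood[f] for f in priors)
posterior = {f: priors[f] * likelihood[f] / evidence for f in priors}

# The most probable diagnosis given the symptom:
best = max(posterior, key=posterior.get)
```

With these particular numbers the symptom makes "sensor_drift" the leading diagnosis even though "no_fault" has the highest prior, which is exactly the kind of belief revision a probabilistic diagnostic system performs.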
Discovering Excitatory Networks from Discrete Event Streams with Applications to Neuronal Spike Train Analysis
Cited by 4 (1 self)
Mining temporal network models from discrete event streams is an important problem with applications in computational neuroscience, physical plant diagnostics, and human-computer interaction modeling. In this paper we focus on temporal models representable as excitatory networks, where all connections are stimulative rather than inhibitive. Through this emphasis on excitatory networks, we show how such models can be learned by creating bridges to frequent episode mining. Specifically, we show that frequent episodes help identify nodes with high mutual information relationships, which can be summarized into a dynamic Bayesian network (DBN). To demonstrate the practical feasibility of our approach, we show how excitatory networks can be inferred from both mathematical models of spiking neurons and real neuroscience datasets.
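The role mutual information plays above can be sketched as follows (an illustrative computation over assumed window counts, not the paper's algorithm): pairs of event types whose windowed co-occurrence counts yield high mutual information are the candidate excitatory connections.

```python
# Illustrative sketch: estimate mutual information between two event
# types A and B from windowed presence/absence counts in an event stream.
from math import log2

def mutual_information(counts, total):
    """counts[(a, b)] = number of windows where A is present/absent (a)
    and B is present/absent (b); total = number of windows."""
    p = {k: v / total for k, v in counts.items()}
    pa = {a: sum(v for (x, _), v in p.items() if x == a) for a in (0, 1)}
    pb = {b: sum(v for (_, y), v in p.items() if y == b) for b in (0, 1)}
    return sum(v * log2(v / (pa[a] * pb[b]))
               for (a, b), v in p.items() if v > 0)

# A strongly coupled pair (B usually accompanies A) versus an
# independent pair, over 100 windows each:
coupled = {(1, 1): 40, (1, 0): 5, (0, 1): 5, (0, 0): 50}
independent = {(1, 1): 25, (1, 0): 25, (0, 1): 25, (0, 0): 25}
assert mutual_information(coupled, 100) > mutual_information(independent, 100)
```

The independent pair scores exactly zero, while the coupled pair scores well above it, so thresholding this quantity separates stimulative relationships from background coincidence.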
Inferring dynamic Bayesian networks using frequent episode mining
CoRR
Cited by 2 (1 self)
Motivation: Several different threads of research have been proposed for modeling and mining temporal data. On the one hand, approaches such as dynamic Bayesian networks (DBNs) provide a formal probabilistic basis to model relationships between time-indexed random variables, but these models are intractable to learn in the general case. On the other hand, algorithms such as frequent episode mining are scalable to large datasets but do not exhibit the rigorous probabilistic interpretations that are the mainstay of the graphical models literature. Results: We present a unification of these two seemingly diverse threads of research by demonstrating how dynamic (discrete) Bayesian networks can be inferred from the results of frequent episode mining. This helps bridge the modeling emphasis of the former with the counting emphasis of the latter. First, we show how, under reasonable assumptions on data characteristics and on influences of random variables, the optimal DBN structure can be computed using a greedy, local algorithm. Next, we connect the optimality of the DBN structure with the notion of fixed-delay episodes and their counts of distinct occurrences. Finally, to demonstrate the practical feasibility of our approach, we focus on a specific (but broadly applicable) class of networks, called excitatory networks, and show how the search for the optimal DBN structure can be conducted using just information from frequent episodes. Applications to datasets gathered from mathematical models of spiking neurons as well as real neuroscience datasets are presented.
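The fixed-delay episodes referred to above can be illustrated with a small counting sketch of my own (a simplification, not the paper's counting procedure): a fixed-delay episode "A followed by B exactly d time steps later" is counted over a stream of timestamped events.

```python
# Illustrative sketch: count distinct occurrences of the fixed-delay
# episode "event b occurs exactly d time units after event a".
def count_fixed_delay(stream, a, b, d):
    """stream: list of (timestamp, event_type) pairs."""
    times_a = {t for t, e in stream if e == a}
    return sum(1 for t, e in stream if e == b and (t - d) in times_a)

# A toy event stream: A at times 1, 4, 7; B at 3, 6, 9; C at 8.
stream = [(1, "A"), (3, "B"), (4, "A"), (6, "B"),
          (7, "A"), (8, "C"), (9, "B")]
assert count_fixed_delay(stream, "A", "B", 2) == 3
```

Counts of this kind are what the paper connects to the score of candidate parent sets, so the DBN structure search can reuse the output of an episode-mining pass instead of re-scanning the data.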
Data mining lab
According to different typologies of activity and priority, risks can assume diverse meanings and can be assessed in different ways. In general, risk is measured as a probability combination of an event (frequency) and its consequence (impact). To estimate the frequency and the impact (severity), historical data or expert opinions (either qualitative or quantitative data) are used. In the case of enterprise risk assessment, the risks considered are, for instance, strategic, operational, legal, and image risks, which are often difficult to quantify. In most cases, therefore, only expert data, generally gathered by scorecard approaches, are available for risk analysis. A Bayesian network is a useful tool to integrate different information and, in particular, to study the risks' joint distribution using data collected from experts. In this paper we show a possible approach for building a Bayesian network in the particular case in which only prior probabilities of node states and marginal correlations between nodes are available, and the variables have only two states.
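The setting the abstract describes, marginal prior probabilities plus pairwise correlations of binary nodes, can be illustrated for a single pair of variables (a hypothetical sketch of my own, not the paper's construction): the correlation fixes the covariance, which together with the marginals determines the 2x2 joint table.

```python
# Illustrative sketch: recover the joint distribution of two binary
# risk indicators X, Y from their marginals and correlation.
from math import sqrt

def binary_joint(p, q, rho):
    """p = P(X=1), q = P(Y=1), rho = corr(X, Y); returns the joint table."""
    cov = rho * sqrt(p * (1 - p) * q * (1 - q))
    p11 = p * q + cov                  # P(X=1, Y=1)
    joint = {(1, 1): p11,
             (1, 0): p - p11,          # marginal consistency for X
             (0, 1): q - p11,          # marginal consistency for Y
             (0, 0): 1 - p - q + p11}
    if any(v < 0 for v in joint.values()):
        raise ValueError("marginals and correlation are incompatible")
    return joint

j = binary_joint(0.2, 0.3, 0.5)
assert abs(sum(j.values()) - 1.0) < 1e-12
```

The compatibility check matters: not every (p, q, rho) triple elicited from experts corresponds to a valid joint distribution, which is one of the difficulties of building networks from scorecard data.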
Discovering Excitatory Relationships using Dynamic Bayesian Networks
Under consideration for publication in Knowledge and Information Systems
Mining temporal network models from discrete event streams is an important problem with applications in computational neuroscience, physical plant diagnostics, and human-computer interaction modeling. In this paper we introduce the notion of excitatory networks, which are essentially temporal models where all connections are stimulative rather than inhibitive. The emphasis on excitatory connections facilitates learning of network models by creating bridges to frequent episode mining. Specifically, we show that frequent episodes help identify nodes with high mutual information relationships and that such relationships can be summarized into a dynamic Bayesian network (DBN). This leads to an algorithm that is significantly faster than state-of-the-art methods for inferring DBNs, while simultaneously providing theoretical guarantees on network optimality. We demonstrate the advantages of our approach through an application in neuroscience, where we show how strong excitatory networks can be efficiently inferred from both mathematical models of spiking neurons and several real neuroscience datasets.