Results 1–10 of 95
Dynamic Bayesian Networks: Representation, Inference and Learning
2002
Cited by 564 (3 self)
Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data.
In particular, the main novel technical contributions of this thesis are as follows: a way of representing Hierarchical HMMs as DBNs, which enables inference to be done in O(T) time instead of O(T^3), where T is the length of the sequence; an exact smoothing algorithm that takes O(log T) space instead of O(T); a simple way of using the junction tree algorithm for online inference in DBNs; new complexity bounds on exact online inference in DBNs; a new deterministic approximate inference algorithm called factored frontier; an analysis of the relationship between the BK algorithm and loopy belief propagation; a way of applying Rao-Blackwellised particle filtering to DBNs in general, and the SLAM (simultaneous localization and mapping) problem in particular; a way of extending the structural EM algorithm to DBNs; and a variety of different applications of DBNs. However, perhaps the main value of the thesis is its catholic presentation of the field of sequential data modelling.
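As a concrete anchor for the models this abstract compares, here is a minimal sketch of HMM forward filtering, the special case that DBNs generalize. The transition and emission matrices and the observation sequence are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Illustrative two-state HMM; all numbers are assumptions for the sketch.
T = np.array([[0.7, 0.3],    # P(x_t | x_{t-1}): transition matrix
              [0.4, 0.6]])
E = np.array([[0.9, 0.1],    # P(y_t | x_t): emission matrix (rows index state)
              [0.2, 0.8]])
prior = np.array([0.5, 0.5])

def forward_filter(obs):
    """Filtered beliefs P(x_t | y_{1:t}) for each t; O(T) in sequence length."""
    belief = prior.copy()
    beliefs = []
    for y in obs:
        belief = belief @ T          # predict: sum out the previous state
        belief = belief * E[:, y]    # update: weight by observation likelihood
        belief /= belief.sum()       # normalize
        beliefs.append(belief.copy())
    return beliefs

beliefs = forward_filter([0, 0, 1])
```

A DBN replaces the single discrete state here with a set of variables whose joint transition model factorizes, which is where the representational savings come from.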
Efficient Reasoning in Qualitative Probabilistic Networks
In Proceedings of the 11th National Conference on Artificial Intelligence (AAAI-93)
1993
Cited by 50 (7 self)
Qualitative Probabilistic Networks (QPNs) are an abstraction of Bayesian belief networks, replacing numerical relations by qualitative influences and synergies [Wellman, 1990b]. To reason in a QPN is to find the effect of new evidence on each node in terms of the sign of the change in belief (increase or decrease). We introduce a polynomial-time algorithm for reasoning in QPNs, based on local sign propagation. It extends our previous scheme from singly connected to general multiply connected networks. Unlike existing graph-reduction algorithms, it preserves the network structure and determines the effect of evidence on all nodes in the network. This aids meta-level reasoning about the model and automatic generation of intuitive explanations of probabilistic reasoning.

Introduction

A formal representation should not use more specificity than needed to support the reasoning required of it. The appropriate degree of specificity or numerical precision will vary depending on what kind o...
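Local sign propagation rests on a simple sign algebra. A hedged sketch follows; the combination rules are the standard Wellman-style sign calculus, while the function names and the example chain are mine.

```python
# Sign algebra for qualitative influence propagation in QPNs.
# '+' increase, '-' decrease, '0' no effect, '?' ambiguous.

def sign_mult(a, b):
    """Chain two influences along a path (serial combination)."""
    if a == '0' or b == '0':
        return '0'
    if a == '?' or b == '?':
        return '?'
    return '+' if a == b else '-'

def sign_add(a, b):
    """Combine influences arriving over parallel paths."""
    if a == '0':
        return b
    if b == '0':
        return a
    return a if a == b else '?'

# Effect of evidence propagated along a chain A -> B -> C
# with influence signs '+' (A on B) and '-' (B on C):
chain_effect = sign_mult('+', '-')
```

Parallel paths with opposite signs combine to '?', which is exactly the ambiguity that makes sign propagation weaker (but far cheaper) than numerical inference.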
Conditional Objects as Nonmonotonic Consequence Relationships
IEEE Trans. Syst. Man Cybern.
Cited by 37 (9 self)
This paper is an investigation of the relationship between conditional objects, obtained as a qualitative counterpart to conditional probabilities, and nonmonotonic reasoning. Roughly speaking, a conditional object can be seen as a generic rule which allows us to get a conclusion provided that the available information exactly corresponds to the "context" part of the conditional object. This gives freedom for possibly retracting previous conclusions when the available information becomes more specific. Viewed as an inference rule expressing a contextual belief, the conditional object is shown to possess all properties of a well-behaved nonmonotonic consequence relation when a suitable choice of connectives and deduction operation is made. Using previous results from Adams' conditional probabilistic logic, a logic of conditional objects is proposed. Its axioms and inference rules are those of the preferential reasoning logic of Lehmann and colleagues. But the semantics relies on a three-valu...
Inference in Bayesian Networks
1999
Cited by 31 (0 self)
A Bayesian network is a compact, expressive representation of uncertain relationships among parameters in a domain. In this article, I introduce basic methods for computing with Bayesian networks, starting with the simple idea of summing the probabilities of events of interest. The article introduces major current methods for exact computation, briefly surveys approximation methods, and closes with a brief discussion of open issues.
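The "simple idea of summing the probabilities of events of interest" can be made concrete with inference by enumeration in a tiny two-node network. The network shape and CPT numbers below are illustrative assumptions, not taken from the article.

```python
# Inference by enumeration in a tiny Bayesian network A -> B.
# CPT numbers are illustrative assumptions.
P_A = {True: 0.2, False: 0.8}
P_B_given_A = {True:  {True: 0.9, False: 0.1},
               False: {True: 0.3, False: 0.7}}

def joint(a, b):
    """Joint probability P(A=a, B=b) from the chain rule."""
    return P_A[a] * P_B_given_A[a][b]

def posterior_A_given_b(b):
    """P(A=true | B=b): sum joint probabilities of consistent events."""
    num = joint(True, b)
    den = sum(joint(a, b) for a in (True, False))
    return num / den

post = posterior_A_given_b(True)
```

Enumeration is exponential in the number of unobserved variables, which is why the exact methods the article surveys (e.g. junction-tree-style algorithms) exploit network structure instead.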
Path Planning under Time-Dependent Uncertainty
In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence
1995
Cited by 30 (3 self)
Standard algorithms for finding the shortest path in a graph require that the cost of a path be additive in edge costs, and typically assume that costs are deterministic. We consider the problem of uncertain edge costs, with potential probabilistic dependencies among the costs. Although these dependencies violate the standard dynamic-programming decomposition, we identify a weaker stochastic consistency condition that justifies a generalized dynamic-programming approach based on stochastic dominance. We present a revised path-planning algorithm and prove that it produces optimal paths under time-dependent uncertain costs. We illustrate the algorithm by applying it to a model of stochastic bus networks, and present sample performance results comparing it to some alternatives. For the case where all or some of the uncertainty is resolved during path traversal, we extend the algorithm to produce optimal policies. This report is based on a paper presented at the Eleventh Conference on Unc...
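The pruning criterion behind such generalized dynamic programming is stochastic dominance between path-cost distributions. A minimal sketch of first-order dominance over discrete cost distributions follows; the distributions and function name are illustrative assumptions, not the paper's algorithm.

```python
# First-order stochastic dominance between discrete cost distributions,
# represented as {cost: probability}. For minimization, p dominates q if
# p puts at least as much cumulative mass on low costs at every threshold.

def dominates(p, q, tol=1e-12):
    """True if p first-order stochastically dominates q (lower cost is better)."""
    support = sorted(set(p) | set(q))
    cp = cq = 0.0
    for c in support:
        cp += p.get(c, 0.0)   # cumulative P(cost <= c) under p
        cq += q.get(c, 0.0)   # cumulative P(cost <= c) under q
        if cp < cq - tol:     # q is more likely to be cheap at this threshold
            return False
    return True

path_a = {10: 0.5, 20: 0.5}   # illustrative cost distribution for one path
path_b = {10: 0.3, 20: 0.7}   # same support, mass shifted toward higher cost
```

A dominated partial path can be discarded during search, because no continuation can make it preferable; paths that neither dominate the other must both be kept, which is what distinguishes this from scalar shortest-path pruning.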
Elicitation of Probabilities for Belief Networks: Combining Qualitative and . . .
In Uncertainty in Artificial Intelligence (95): Proceedings of the 11th Conference, Los Altos, CA
1995
Cited by 29 (3 self)
Although the usefulness of belief networks for reasoning under uncertainty is widely accepted, obtaining the numerical probabilities that they require is still perceived as a major obstacle. Often not enough statistical data is available to allow for reliable probability estimation. Available information may not be directly amenable to encoding in the network. Finally, domain experts may be reluctant to provide numerical probabilities. In this paper, we propose a method for elicitation of probabilities from a domain expert that is non-invasive and accommodates whatever probabilistic information the expert is willing to state. We express all available information, whether qualitative or quantitative in nature, in a canonical form consisting of (in)equalities expressing constraints on the hyperspace of possible joint probability distributions. We then use this canonical form to derive second-order probability distributions over the desired probabilities.
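One simple way to turn inequality constraints into a second-order distribution is rejection sampling: draw candidate probabilities and keep those consistent with the expert's statements. This is a sketch of the general idea only, not the paper's method; the constraint interval and sample count are invented for illustration.

```python
import random

# Rejection-sampling sketch of a second-order distribution over a single
# probability p, constrained by an illustrative expert statement.
random.seed(0)

def second_order_samples(n=10000):
    """Keep draws p ~ Uniform(0, 1) satisfying the elicited (in)equalities."""
    accepted = []
    for _ in range(n):
        p = random.random()
        if 0.6 <= p <= 0.9:   # assumed constraint: "fairly likely, not certain"
            accepted.append(p)
    return accepted

samples = second_order_samples()
mean = sum(samples) / len(samples)
```

The accepted samples approximate a distribution over the unknown probability itself; tighter or additional constraints concentrate it, which is how partial qualitative knowledge narrows the candidate joint distributions.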
Graphoid properties of epistemic irrelevance and independence
2005
Cited by 27 (3 self)
This paper investigates Walley’s concepts of epistemic irrelevance and epistemic independence for imprecise probability models. We study the mathematical properties of irrelevance and independence, and their relation to the graphoid axioms. Examples are given to show that epistemic irrelevance can violate the symmetry, contraction and intersection axioms, that epistemic independence can violate contraction and intersection, and that this accords with informal notions of irrelevance and independence.
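For reference, these are the (semi-)graphoid axioms against which the abstract tests irrelevance and independence, writing $I(X, Z, Y)$ for "$X$ is independent of $Y$ given $Z$":

```latex
\begin{align*}
\text{Symmetry:} \quad & I(X, Z, Y) \;\Rightarrow\; I(Y, Z, X) \\
\text{Decomposition:} \quad & I(X, Z, Y \cup W) \;\Rightarrow\; I(X, Z, Y) \\
\text{Weak union:} \quad & I(X, Z, Y \cup W) \;\Rightarrow\; I(X, Z \cup W, Y) \\
\text{Contraction:} \quad & I(X, Z, Y) \wedge I(X, Z \cup Y, W) \;\Rightarrow\; I(X, Z, Y \cup W) \\
\text{Intersection:} \quad & I(X, Z \cup W, Y) \wedge I(X, Z \cup Y, W) \;\Rightarrow\; I(X, Z, Y \cup W)
\end{align*}
```

The abstract's claim is then that epistemic irrelevance can fail symmetry, contraction, and intersection, while epistemic independence can fail contraction and intersection.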
Qualitative Verbal Explanations in Bayesian Belief Networks
1996
Cited by 27 (4 self)
Application of Bayesian belief networks in systems that interact directly with human users, such as decision support systems, requires effective user interfaces. The principal task of such interfaces is bridging the gap between probabilistic models and human intuitive approaches to modeling uncertainty. We describe several methods for automatic generation of qualitative verbal explanations in systems based on Bayesian belief networks. We show simple techniques for explaining the structure of a belief network model and the interactions among its variables. We also present a technique for generating qualitative explanations of reasoning.

Keywords: Explanation, Bayesian belief networks, qualitative probabilistic networks

1 Introduction

"The purpose of computing is insight, not numbers." (Richard Wesley Hamming)

As the increasing number of successful applications in such domains as diagnosis, planning, learning, vision, and natural language processing demonstrates, Bayesian belief ne...
Probabilistic Reasoning in Decision Support Systems: From Computation to Common Sense
1993
Cited by 26 (14 self)
Most areas of engineering, science, and management use important tools based on probabilistic methods. The common thread of the entire spectrum of these tools is aiding in decision making under uncertainty: the choice of an interpretation of reality or the choice of a course of action. Although the importance of dealing with uncertainty in decision making is widely acknowledged, dissemination of probabilistic and decision-theoretic methods in Artificial Intelligence has been surprisingly slow. Opponents of probability theory have pointed out three major obstacles to applying it in computerized decision aids: (1) the counterintuitiveness of probabilistic inference, which makes it hard for system builders, experts, and users to translate knowledge into probabilistic form, to create knowledge bases, and to interpret results; (2) the quantitative character of probability theory, which implies collection or assessment of vast quantities of numbers and, since these are not always readily available, raises questions about their quality; and (3) closely related to its quantitative character, the computational complexity of probabilistic inference. Its proponents, on the other hand, point
Intercausal Reasoning with Uninstantiated Ancestor Nodes
In Proceedings of the Ninth Annual Conference on Uncertainty in Artificial Intelligence (UAI-93)
1993
Cited by 24 (10 self)
Intercausal reasoning is a common inference pattern involving probabilistic dependence of causes of an observed common effect. The sign of this dependence is captured by a qualitative property called product synergy. The current definition of product synergy is insufficient for intercausal reasoning where there are additional uninstantiated causes of the common effect. We propose a new definition of product synergy and prove its adequacy for intercausal reasoning with direct and indirect evidence for the common effect. The new definition is based on a new matrix property, half positive semidefiniteness, a weakened form of matrix positive semidefiniteness.
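The paper's half positive semidefiniteness weakens ordinary positive semidefiniteness. For orientation, here is the standard eigenvalue test for the ordinary property; the paper's weakened variant is not reproduced here, and the example matrices are invented for illustration.

```python
import numpy as np

# Standard PSD check: a symmetric matrix is positive semidefinite
# iff all of its eigenvalues are nonnegative.
def is_psd(M, tol=1e-10):
    """True if symmetric matrix M is positive semidefinite (up to tol)."""
    M = np.asarray(M, dtype=float)
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3: PSD
B = np.array([[1.0, 2.0], [2.0, 1.0]])   # eigenvalues -1 and 3: not PSD
```

Any weakened variant such as the paper's would accept strictly more matrices than this test, which is what lets the new product-synergy definition cover the uninstantiated-cause cases.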