Results 1–10 of 30
Inference in belief networks: A procedural guide
 International Journal of Approximate Reasoning
, 1996
Abstract

Cited by 149 (6 self)
Belief networks are popular tools for encoding uncertainty in expert systems. These networks rely on inference algorithms to compute beliefs in the context of observed evidence. One established method for exact inference on belief networks is the Probability Propagation in Trees of Clusters (PPTC) algorithm, as developed by Lauritzen and Spiegelhalter and refined by Jensen et al. [1, 2, 3]. PPTC converts the belief network into a secondary structure, then computes probabilities by manipulating the secondary structure. In this document, we provide a self-contained, procedural guide to understanding and implementing PPTC. We synthesize various optimizations to PPTC that are scattered throughout the literature. We articulate undocumented, "open secrets" that are vital to producing a robust and efficient implementation of PPTC. We hope that this document makes probabilistic inference more accessible and affordable to those without extensive prior exposure.
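As a toy illustration of the secondary-structure idea this abstract describes (not the paper's full PPTC procedure), the sketch below propagates a single message between the two cliques of a join tree for the chain A → B → C. All network probabilities are invented for the example.

```python
from itertools import product

# Toy join tree for the chain A -> B -> C: cliques {A,B} and {B,C},
# separator {B}. CPT numbers are illustrative, not from the paper.
p_a = 0.3                              # P(A=true)
p_b_given_a = {True: 0.8, False: 0.1}  # P(B=true | A)
p_c_given_b = {True: 0.9, False: 0.2}  # P(C=true | B)

def pot_ab(a, b):
    """Clique potential over {A,B}: P(A) * P(B|A)."""
    pa = p_a if a else 1 - p_a
    pb = p_b_given_a[a] if b else 1 - p_b_given_a[a]
    return pa * pb

def pot_bc(b, c):
    """Clique potential over {B,C}: P(C|B)."""
    return p_c_given_b[b] if c else 1 - p_c_given_b[b]

# Message AB -> BC: marginalize the {A,B} potential onto the separator {B}.
msg_b = {b: sum(pot_ab(a, b) for a in (True, False)) for b in (True, False)}

# Absorb the message into the {B,C} potential, then read a marginal
# off the secondary structure rather than the original network.
belief_bc = {(b, c): pot_bc(b, c) * msg_b[b]
             for b, c in product((True, False), repeat=2)}
p_c_true = sum(v for (b, c), v in belief_bc.items() if c)
print(p_c_true)
```

After a full round of such messages in both directions, every clique holds a consistent joint over its variables; this one-way pass already suffices for the single marginal P(C).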
Policy Recognition in the Abstract Hidden Markov Model
 Journal of Artificial Intelligence Research
, 2002
Abstract

Cited by 121 (16 self)
In this paper, we present a method for recognising an agent's behaviour in dynamic, noisy, uncertain domains, and across multiple levels of abstraction. We term this problem online plan recognition under uncertainty and view it generally as probabilistic inference on the stochastic process representing the execution of the agent's plan. Our contributions in this paper are twofold. In terms of probabilistic inference, we introduce the Abstract Hidden Markov Model (AHMM), a novel type of stochastic process, provide its dynamic Bayesian network (DBN) structure and analyse the properties of this network. We then describe an application of the Rao-Blackwellised Particle Filter to the AHMM which allows us to construct an efficient, hybrid inference method for this model. In terms of plan recognition, we propose a novel plan recognition framework based on the AHMM as the plan execution model. The Rao-Blackwellised hybrid inference for the AHMM can take advantage of the independence properties inherent in a model of plan execution, leading to an algorithm for online probabilistic plan recognition that scales well with the number of levels in the plan hierarchy. This illustrates that while stochastic models for plan execution can be complex, they exhibit special structures which, if exploited, can lead to efficient plan recognition algorithms. We demonstrate the usefulness of the AHMM framework via a behaviour recognition system in a complex spatial environment using distributed video surveillance data.
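For readers unfamiliar with the sampling machinery behind the abstract, the sketch below runs a bare bootstrap particle filter on a hypothetical two-state HMM. It illustrates only the sampled half of a Rao-Blackwellised scheme; the AHMM work additionally integrates out part of the state in closed form, which is omitted here. All model numbers are invented.

```python
import random

# Invented two-state HMM for illustration only.
TRANS = {0: [0.9, 0.1], 1: [0.2, 0.8]}   # P(next state | current state)
OBS = {0: [0.8, 0.2], 1: [0.3, 0.7]}     # P(observation | state)

def particle_filter(observations, n=5000, seed=1):
    """Estimate P(state=1 | observations) with a bootstrap particle filter."""
    rng = random.Random(seed)
    particles = [rng.choice([0, 1]) for _ in range(n)]  # uniform prior
    for y in observations:
        # Propagate each particle through the transition model.
        particles = [0 if rng.random() < TRANS[s][0] else 1 for s in particles]
        # Weight by the observation likelihood, then resample.
        weights = [OBS[s][y] for s in particles]
        particles = rng.choices(particles, weights=weights, k=n)
    return sum(particles) / n

est = particle_filter([1, 1, 0])
print(est)
```

Rao-Blackwellisation would replace part of each particle's state with an exact conditional distribution, which is what lets the paper's method scale with the plan hierarchy's depth.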
Machine-Learning Research: Four Current Directions
Abstract

Cited by 114 (1 self)
Machine Learning research has been making great progress in many directions. This article summarizes four of these directions and discusses some current open problems. The four directions are (a) improving classification accuracy by learning ensembles of classifiers, (b) methods for scaling up supervised learning algorithms, (c) reinforcement learning, and (d) learning complex stochastic models.
Variational Probabilistic Inference and the QMR-DT Network
 JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH
, 1999
Abstract

Cited by 57 (3 self)
We describe a variational approximation method for efficient inference in large-scale probabilistic models. Variational methods are deterministic procedures that provide approximations to marginal and conditional probabilities of interest. They provide alternatives to approximate inference methods based on stochastic sampling or search. We describe a variational approach to the problem of diagnostic inference in the "Quick Medical Reference" (QMR) network. The QMR network is a large-scale probabilistic graphical model built on statistical and expert knowledge. Exact probabilistic inference is infeasible in this model for all but a small set of cases. We evaluate our variational inference algorithm on a large set of diagnostic test cases, comparing the algorithm to a state-of-the-art stochastic sampling method.
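To make the "deterministic procedure" point concrete: the sketch below runs mean-field coordinate ascent on a hypothetical two-variable binary model p(x1, x2) ∝ exp(a·x1 + b·x2 + c·x1·x2). This is not the paper's QMR-DT bound, only the simplest instance of the variational idea; parameters a, b, c are arbitrary.

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def mean_field(a, b, c, iters=100):
    """Fit a factorized q(x1)q(x2) to p(x1,x2) ∝ exp(a*x1 + b*x2 + c*x1*x2)
    by coordinate ascent on the variational objective; returns the means
    mu_i = q(x_i = 1)."""
    mu1 = mu2 = 0.5
    for _ in range(iters):
        mu1 = sigmoid(a + c * mu2)  # optimal q1 holding q2 fixed
        mu2 = sigmoid(b + c * mu1)  # optimal q2 holding q1 fixed
    return mu1, mu2

mu1, mu2 = mean_field(a=0.5, b=-1.0, c=2.0)
print(mu1, mu2)
```

Unlike a sampler, rerunning this gives the same answer every time; the trade-off is that the factorized q can only approximate the true posterior dependence between x1 and x2.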
An Optimal Approximation Algorithm For Bayesian Inference
 Artificial Intelligence
, 1997
Abstract

Cited by 48 (2 self)
Approximating the inference probability Pr[X = x | E = e] in any sense, even for a single evidence node E, is NP-hard. This result holds for belief networks that are allowed to contain extreme conditional probabilities, that is, conditional probabilities arbitrarily close to 0. Nevertheless, all previous approximation algorithms have failed to approximate efficiently many inferences, even for belief networks without extreme conditional probabilities. We prove that we can approximate efficiently probabilistic inference in belief networks without extreme conditional probabilities. We construct a randomized approximation algorithm, the bounded-variance algorithm, that is a variant of the known likelihood-weighting algorithm. The bounded-variance algorithm is the first algorithm with provably fast inference approximation on all belief networks without extreme conditional probabilities. From the bounded-variance algorithm, we construct a deterministic approximation algorithm u...
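The likelihood-weighting scheme that the bounded-variance algorithm refines is simple enough to sketch. Below it estimates Pr[X = x | E = e] on a hypothetical two-node network (Rain → WetGrass); the network and its probabilities are invented for illustration and are not from the paper.

```python
import random

# Invented two-node belief network: Rain -> WetGrass.
P_RAIN = 0.2
P_WET_GIVEN = {True: 0.9, False: 0.1}  # P(WetGrass=true | Rain)

def likelihood_weighting(n_samples, evidence_wet=True, seed=0):
    """Estimate P(Rain=true | WetGrass=evidence_wet): sample the
    non-evidence node forward, weight each sample by the likelihood
    of the evidence, and return the weighted average."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n_samples):
        rain = rng.random() < P_RAIN  # forward-sample the non-evidence node
        w = P_WET_GIVEN[rain] if evidence_wet else 1 - P_WET_GIVEN[rain]
        num += w * rain
        den += w
    return num / den

est = likelihood_weighting(100_000)
print(est)
```

The exact posterior here is (0.2·0.9) / (0.2·0.9 + 0.8·0.1) = 9/13 ≈ 0.692; the paper's contribution is bounding how many such weighted samples are provably enough when no conditional probability is extreme.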
Localized Partial Evaluation of Belief Networks
, 1995
Abstract

Cited by 43 (1 self)
Most algorithms for propagating evidence through belief networks have been exact and exhaustive: they produce an exact (point-valued) marginal probability for every node in the network. Often, however, an application will not need information about every node in the network, nor will it need exact probabilities. We present the localized partial evaluation (LPE) propagation algorithm, which computes interval bounds on the marginal probability of a specified query node by examining a subset of the nodes in the entire network. Conceptually, LPE ignores parts of the network that are "too far away" from the queried node to have much impact on its value. LPE has the "anytime" property of being able to produce better solutions (tighter intervals) given more time to consider more of the network. Belief networks provide a way of encoding knowledge about the probabilistic dependencies and independencies of a set of variables in some domain. Variables are encoded as nodes in the ne...
Tracking and Surveillance in Wide-Area Spatial Environments Using the Abstract Hidden Markov Model
 Intl. J. of Pattern Rec. and AI
, 2001
Abstract

Cited by 37 (5 self)
In this paper, we consider the problem of tracking an object and predicting the object's future trajectory in a wide-area environment, with complex spatial layout and the use of multiple sensors/cameras. To solve this problem, there is a need for representing the dynamic and noisy data in the tracking tasks, and dealing with them at different levels of detail. We employ the Abstract Hidden Markov Model (AHMM), an extension of the well-known Hidden Markov Model (HMM) and a special type of Dynamic Probabilistic Network (DPN), as our underlying representation framework. The AHMM allows us to explicitly encode the hierarchy of connected spatial locations, making it scalable to the size of the environment being modelled. We describe an application for tracking human movement in a...
A Survey of Algorithms for Real-Time Bayesian Network Inference
 In the joint AAAI-02/KDD-02/UAI-02 workshop on Real-Time Decision Support and Diagnosis Systems
, 2002
Abstract

Cited by 32 (2 self)
As Bayesian networks are applied to more complex and realistic real-world applications, the development of more efficient inference algorithms working under real-time constraints is becoming more and more important. This paper presents a survey of various exact and approximate Bayesian network inference algorithms. In particular, previous research on real-time inference is reviewed. The survey provides a framework for understanding these algorithms and the relationships between them. Some important issues in real-time Bayesian network inference are also discussed.
Inference in Bayesian Networks
, 1999
Abstract

Cited by 31 (0 self)
A Bayesian network is a compact, expressive representation of uncertain relationships among parameters in a domain. In this article, I introduce basic methods for computing with Bayesian networks, starting with the simple idea of summing the probabilities of events of interest. The article introduces major current methods for exact computation, briefly surveys approximation methods, and closes with a brief discussion of open issues.
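The "summing the probabilities of events of interest" idea this abstract starts from can be shown in miniature: enumerate a joint distribution and sum the atomic worlds that satisfy each query. The two-variable table below is invented for illustration.

```python
# Hypothetical joint distribution P(Burglary, Alarm), keyed as
# (burglary, alarm). Probabilities are invented and sum to 1.
joint = {
    (True, True): 0.08, (True, False): 0.12,
    (False, True): 0.02, (False, False): 0.78,
}

def prob(event):
    """Sum the joint over the atomic worlds where `event` holds."""
    return sum(p for world, p in joint.items() if event(world))

p_alarm = prob(lambda w: w[1])                                 # P(Alarm)
p_burglary_given_alarm = prob(lambda w: w[0] and w[1]) / p_alarm
print(p_alarm, p_burglary_given_alarm)
```

The exact methods the article goes on to survey are, in essence, ways of performing this summation without ever materializing the exponentially large joint table.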
Probabilistic conflicts in a search algorithm for estimating posterior probabilities in Bayesian networks
, 1996
Abstract

Cited by 23 (6 self)
This paper presents a search algorithm for estimating posterior probabilities in discrete Bayesian networks. It shows how conflicts (as used in consistency-based diagnosis) can be adapted to speed up the search. This algorithm is especially suited to the case where there are skewed distributions, although nothing about the algorithm or the definitions depends on skewness of distributions. The general idea is to forward simulate the network, based on the 'normal' values for each variable (the value with high probability given its parents). When a predicted value is at odds with the observations, we analyse which variables were responsible for the expectation failure (these form a conflict) and continue forward simulation considering different values for these variables. This results in a set of possible worlds from which posterior probabilities, together with error bounds, can be derived. Empirical results with Bayesian networks having tens of thousands of nodes are presented.
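The 'normal'-value forward pass described in this abstract can be sketched in a few lines: walk the network in topological order and assign each variable its most probable value given the already-assigned parents. The node names and CPTs below are hypothetical, and the conflict-analysis step is only indicated in a comment.

```python
def forward_normal(order, parents, cpt):
    """One 'normal' forward simulation of a Bayesian network over
    boolean variables.
    order:   topologically sorted node names.
    parents: node -> tuple of parent names.
    cpt:     node -> {parent-value-tuple: P(node=True | parents)}."""
    world = {}
    for node in order:
        pv = tuple(world[p] for p in parents[node])
        world[node] = cpt[node][pv] >= 0.5  # take the most likely value
    return world

# Hypothetical two-node network: Flu -> Fever.
order = ["Flu", "Fever"]
parents = {"Flu": (), "Fever": ("Flu",)}
cpt = {"Flu": {(): 0.05},
       "Fever": {(True,): 0.9, (False,): 0.1}}

world = forward_normal(order, parents, cpt)
print(world)
# If Fever=True were then observed, the prediction above fails; the
# variables behind that expectation failure (here, Flu) form a conflict,
# and the search would resume with Flu's alternative value.
```

Each such restart yields another possible world, and the accumulated worlds give the posterior estimates, with error bounds, that the paper derives.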