Results 1–10 of 100
Bucket Elimination: A Unifying Framework for Reasoning
Abstract

Cited by 278 (62 self)
Bucket elimination is an algorithmic framework that generalizes dynamic programming to accommodate many problem-solving and reasoning tasks. Algorithms such as directional resolution for propositional satisfiability, adaptive consistency for constraint satisfaction, Fourier and Gaussian elimination for solving linear equalities and inequalities, and dynamic programming for combinatorial optimization can all be accommodated within the bucket elimination framework. Many probabilistic inference tasks can likewise be expressed as bucket-elimination algorithms. These include belief updating, finding the most probable explanation, and expected utility maximization. These algorithms share the same performance guarantees; all are time and space exponential in the induced width of the problem's interaction graph. While elimination strategies place extensive demands on memory, a contrasting class of algorithms called "conditioning search" requires only linear space. Algorithms in this class split a problem into subproblems by instantiating a subset of variables, called a conditioning set or cutset. Typical examples of conditioning search algorithms are backtracking (in constraint satisfaction) and branch and bound (for combinatorial optimization). The paper presents the bucket-elimination framework as a unifying theme across probabilistic and deterministic reasoning tasks and shows how conditioning search can be augmented to systematically trade space for time.
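The elimination scheme summarized in this abstract can be illustrated with a minimal variable-elimination sketch for belief updating. The factor representation, helper names, and the toy two-variable network below are illustrative assumptions, not taken from the paper:

```python
from itertools import product

# A factor is (variables, table): table maps assignments (tuples of 0/1
# values, one per variable) to non-negative reals.

def sum_out(factor, var):
    """Sum a binary variable out of a factor."""
    vars_, table = factor
    i = vars_.index(var)
    new_vars = vars_[:i] + vars_[i + 1:]
    new_table = {}
    for assign, p in table.items():
        key = assign[:i] + assign[i + 1:]
        new_table[key] = new_table.get(key, 0.0) + p
    return (new_vars, new_table)

def join(f1, f2):
    """Multiply two factors over the union of their variables."""
    v1, t1 = f1
    v2, t2 = f2
    new_vars = v1 + tuple(v for v in v2 if v not in v1)
    new_table = {}
    for assign in product([0, 1], repeat=len(new_vars)):
        val = dict(zip(new_vars, assign))
        new_table[assign] = (t1[tuple(val[v] for v in v1)]
                             * t2[tuple(val[v] for v in v2)])
    return (new_vars, new_table)

def bucket_eliminate(factors, order):
    """Eliminate the variables in `order`, one bucket at a time:
    gather the factors mentioning the variable, multiply them,
    and sum the variable out of the product."""
    for var in order:
        bucket = [f for f in factors if var in f[0]]
        rest = [f for f in factors if var not in f[0]]
        combined = bucket[0]
        for f in bucket[1:]:
            combined = join(combined, f)
        factors = rest + [sum_out(combined, var)]
    result = factors[0]
    for f in factors[1:]:
        result = join(result, f)
    return result

# Toy chain A -> B with P(A) and P(B|A); eliminating A yields P(B).
p_a = (("A",), {(0,): 0.6, (1,): 0.4})
p_b_given_a = (("A", "B"),
               {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8})
marg_vars, marg = bucket_eliminate([p_a, p_b_given_a], ["A"])
# P(B=0) = 0.6*0.9 + 0.4*0.2 = 0.62
```

The per-bucket product is where the exponential cost arises: its table is exponential in the number of distinct variables appearing in the bucket, which the abstract's induced-width bound captures.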
A Guide to the Literature on Learning Probabilistic Networks From Data
, 1996
Abstract

Cited by 172 (0 self)
This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks. Connections are drawn between the statistical, neural network, and uncertainty communities, and between the different methodological communities, such as Bayesian, description length, and classical statistics. Basic concepts for learning and Bayesian networks are introduced, and methods are then reviewed. Methods are discussed for learning the parameters of a probabilistic network, for learning the structure, and for learning hidden variables. The presentation avoids formal definitions and theorems, as these are plentiful in the literature, and instead illustrates key concepts with simplified examples.
Keywords: Bayesian networks, graphical models, hidden variables, learning, learning structure, probabilistic networks, knowledge discovery.
Multiagent influence diagrams for representing and solving games
 Games and Economic Behavior
, 2001
Abstract

Cited by 157 (2 self)
The traditional representations of games using the extensive form or the strategic (normal) form obscure much of the structure that is present in real-world games. In this paper, we propose a new representation language for general multi-player games: multiagent influence diagrams (MAIDs). This representation extends graphical models for probability distributions to a multiagent decision-making context. MAIDs explicitly encode structure involving the dependence relationships among variables. As a consequence, we can define a notion of strategic relevance of one decision variable to another: D' is strategically relevant to D if, to optimize the decision rule at D, the decision maker needs to take into consideration the decision rule at D'. We provide a sound and complete graphical criterion for determining strategic relevance. We then show how strategic relevance can be used to detect structure in games, allowing a large game to be broken up into a set of interacting smaller games, which can be solved in sequence. We show that this decomposition can lead to substantial savings in the computational cost of finding Nash equilibria in these games.
Inference in belief networks: A procedural guide
 International Journal of Approximate Reasoning
, 1996
Abstract

Cited by 149 (6 self)
Belief networks are popular tools for encoding uncertainty in expert systems. These networks rely on inference algorithms to compute beliefs in the context of observed evidence. One established method for exact inference on belief networks is the Probability Propagation in Trees of Clusters (PPTC) algorithm, as developed by Lauritzen and Spiegelhalter and refined by Jensen et al. [1, 2, 3] PPTC converts the belief network into a secondary structure, then computes probabilities by manipulating the secondary structure. In this document, we provide a self-contained, procedural guide to understanding and implementing PPTC. We synthesize various optimizations to PPTC that are scattered throughout the literature. We articulate undocumented "open secrets" that are vital to producing a robust and efficient implementation of PPTC. We hope that this document makes probabilistic inference more accessible and affordable to those without extensive prior exposure.
Thin Junction Tree Filters for Simultaneous Localization and Mapping
 In Intl. Joint Conf. on Artificial Intelligence (IJCAI
, 2003
Abstract

Cited by 126 (1 self)
Simultaneous Localization and Mapping (SLAM) is a fundamental problem in mobile robotics: while a robot navigates in an unknown environment, it must incrementally build a map of its surroundings and localize itself within that map. Traditional approaches to the problem are based upon Kalman filters, but suffer from complexity issues: the size of the belief state and the time complexity of the filtering operation grow quadratically in the size of the map. This paper presents a filtering technique that maintains a tractable approximation of the filtered belief state as a thin junction tree. The junction tree grows under measurement and motion updates and is periodically "thinned" to remain tractable via efficient maximum likelihood projections. When applied to the SLAM problem, these thin junction tree filters have a linear-space belief state representation and use a linear-time filtering operation. Further approximation can yield a constant-time filtering operation, at the expense of delaying the incorporation of observations into the majority of the map. Experiments on a suite of SLAM problems validate the approach.
Random Algorithms for the Loop Cutset Problem
 Journal of Artificial Intelligence Research
, 1999
Abstract

Cited by 81 (2 self)
We show how to find a minimum loop cutset in a Bayesian network with high probability. Finding such a loop cutset is the first step in Pearl's method of conditioning for inference. Our random algorithm for finding a loop cutset, called RepeatedWGuessI, outputs a minimum loop cutset after O(c · 6^k · kn) steps, with probability at least 1 − (1 − 1/6^k)^{c·6^k}, where c > 1 is a constant specified by the user, k is the size of a minimum weight loop cutset, and n is the number of vertices. We also show empirically that a variant of this algorithm, called WRA, often finds a loop cutset that is closer to the minimum loop cutset than the ones found by the best deterministic algorithms known.
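The flavor of such randomized cutset algorithms can be sketched as a repeated-random-guessing loop: break every cycle of an undirected graph by deleting a random vertex from some cycle, and keep the smallest cutset found over several independent trials. This is a simplified illustration, not the paper's RepeatedWGuessI (which uses a weighted, biased vertex choice to obtain its probability bound); the graph representation and function names are assumptions:

```python
import random

def find_cycle(adj):
    """Return the vertices of one cycle in an undirected graph
    (dict: vertex -> set of neighbors), or None if acyclic."""
    visited, path = set(), []
    def dfs(v, parent):
        visited.add(v)
        path.append(v)
        for w in adj[v]:
            if w == parent:
                continue
            if w in path:                      # back edge closes a cycle
                return path[path.index(w):]
            if w not in visited:
                cycle = dfs(w, v)
                if cycle is not None:
                    return cycle
        path.pop()
        return None
    for start in adj:
        if start not in visited:
            cycle = dfs(start, None)
            if cycle is not None:
                return cycle
    return None

def guess_cutset_once(adj):
    """One trial: delete a random vertex of some cycle until acyclic."""
    adj = {v: set(ns) for v, ns in adj.items()}  # local copy
    cutset = []
    while True:
        cycle = find_cycle(adj)
        if cycle is None:
            return cutset
        v = random.choice(cycle)
        cutset.append(v)
        for w in adj[v]:
            adj[w].discard(v)
        del adj[v]

def repeated_guess(adj, trials=100):
    """Keep the smallest cutset found over several independent trials."""
    best = None
    for _ in range(trials):
        c = guess_cutset_once(adj)
        if best is None or len(c) < len(best):
            best = c
    return best

triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
cutset = repeated_guess(triangle, trials=10)
# a triangle has a single loop, so one vertex always suffices
```

Repeating independent trials is what drives the failure probability down: if a single trial finds a minimum cutset with probability p, the chance that all t trials miss it is (1 − p)^t, mirroring the form of the bound quoted in the abstract.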
An Algorithm for Deciding if a Set of Observed Independencies Has a Causal Explanation
 Proc. of the Eighth Conference on Uncertainty in Artificial Intelligence
, 1992
Abstract

Cited by 60 (1 self)
In a previous paper [8] we presented an algorithm for extracting causal influences from independence information, where a causal influence was defined as the existence of a directed arc in all minimal causal models consistent with the data. In this paper we address the question of deciding whether there exists a causal model that explains ALL the observed dependencies and independencies. Formally, given a list M of conditional independence statements, it is required to decide whether there exists a directed acyclic graph D that is perfectly consistent with M, namely, every statement in M, and no other, is reflected via d-separation in D. We present and analyze an effective algorithm that tests for the existence of such a dag, and produces one if it exists.
Key words: causal modeling, graphoids, conditional independence.
Axioms of Causal Relevance
 Artificial Intelligence
, 1996
Abstract

Cited by 54 (15 self)
This paper develops axioms and formal semantics for statements of the form "X is causally irrelevant to Y in context Z," which we interpret to mean "Changing X will not affect Y if we hold Z constant." The axiomization of causal irrelevance is contrasted with the axiomization of informational irrelevance, as in "Learning X will not alter our belief in Y, once we know Z." Two versions of causal irrelevance are analyzed, probabilistic and deterministic. We show that, unless stability is assumed, the probabilistic definition yields a very loose structure that is governed by just two trivial axioms. Under the stability assumption, probabilistic causal irrelevance is isomorphic to path interception in cyclic graphs. Under the deterministic definition, causal irrelevance complies with all of the axioms of path interception in cyclic graphs, with the exception of transitivity. We compare our formalism to that of [Lewis, 1973], and offer a graphical method of proving theorems abou...
Global Conditioning for Probabilistic Inference in Belief Networks
 In Proc. Tenth Conference on Uncertainty in AI
, 1994
Abstract

Cited by 46 (0 self)
In this paper we propose a new approach to probabilistic inference on belief networks, global conditioning, which is a simple generalization of Pearl's (1986b) method of loop-cutset conditioning. We show that global conditioning, as well as loop-cutset conditioning, can be thought of as a special case of the method of Lauritzen and Spiegelhalter (1988), as refined by Jensen et al. (1990a; 1990b).