Results 1–10 of 60
Bucket Elimination: A Unifying Framework for Probabilistic Inference
, 1996
Cited by 293 (34 self)

Abstract:
Probabilistic inference algorithms for belief updating, finding the most probable explanation, the maximum a posteriori hypothesis, and the maximum expected utility are reformulated within the bucket elimination framework. This emphasizes the principles common to many of the algorithms appearing in the probabilistic inference literature and clarifies the relationship of such algorithms to nonserial dynamic programming algorithms. A general method for combining conditioning and bucket elimination is also presented. For all the algorithms, bounds on complexity are given as a function of the problem's structure.

1. Overview. Bucket elimination is a unifying algorithmic framework that generalizes dynamic programming to accommodate algorithms for many complex problem-solving and reasoning activities, including directional resolution for propositional satisfiability (Davis and Putnam, 1960), adaptive consistency for constraint satisfaction (Dechter and Pearl, 1987), Fourier and Gaussian el...
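The bucket operation the abstract describes — collect the factors mentioning a variable, multiply them, and sum the variable out — can be sketched as follows. This is a minimal illustration over binary variables; the factor representation and the tiny example network are our own, not taken from the paper:

```python
from itertools import product

def multiply(f1, f2):
    """Pointwise product of two factors; a factor is (vars, table) with
    `table` mapping assignments (tuples of 0/1) to values."""
    v1, t1 = f1
    v2, t2 = f2
    vars_ = tuple(dict.fromkeys(v1 + v2))  # ordered union of the two scopes
    table = {}
    for assign in product([0, 1], repeat=len(vars_)):
        a = dict(zip(vars_, assign))
        table[assign] = (t1[tuple(a[v] for v in v1)] *
                         t2[tuple(a[v] for v in v2)])
    return vars_, table

def sum_out(f, var):
    """Marginalize `var` out of factor `f`."""
    vars_, table = f
    i = vars_.index(var)
    out = {}
    for assign, val in table.items():
        key = assign[:i] + assign[i + 1:]
        out[key] = out.get(key, 0.0) + val
    return vars_[:i] + vars_[i + 1:], out

def bucket_eliminate(factors, order):
    """For each variable in the elimination order: multiply all remaining
    factors that mention it (its bucket), then sum it out."""
    for var in order:
        bucket = [f for f in factors if var in f[0]]
        factors = [f for f in factors if var not in f[0]]
        if bucket:
            prod = bucket[0]
            for f in bucket[1:]:
                prod = multiply(prod, f)
            factors.append(sum_out(prod, var))
    result = factors[0]
    for f in factors[1:]:
        result = multiply(result, f)
    return result
```

For a two-node chain with factors P(A) and P(B|A), eliminating A leaves the marginal P(B); the induced factor widths along the chosen ordering give exactly the structural complexity bounds the abstract mentions.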
From Influence Diagrams to Junction Trees
 PROCEEDINGS OF THE TENTH CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE
, 1994
Cited by 111 (15 self)

Abstract:
We present an approach to the solution of decision problems formulated as influence diagrams. This approach involves a special triangulation of the underlying graph, the construction of a junction tree with special properties, and a message passing algorithm operating on the junction tree for computation of expected utilities and optimal decision policies.
Exploiting Structure to Efficiently Solve Large Scale Partially Observable Markov Decision Processes
, 2005
Cited by 62 (6 self)

Abstract:
Partially observable Markov decision processes (POMDPs) provide a natural and principled framework to model a wide range of sequential decision making problems under uncertainty. To date, the use of POMDPs in real-world problems has been limited by the poor scalability of existing solution algorithms, which can only solve problems with up to ten thousand states. In fact, the complexity of finding an optimal policy for a finite-horizon discrete POMDP is PSPACE-complete. In practice, two important sources of intractability plague most solution algorithms: large policy spaces and large state spaces. On the other hand, ...
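A POMDP agent of the kind discussed here acts on a belief state — a distribution over the hidden states — updated by Bayes' rule after each action and observation. A minimal sketch (the two-state "listen" example and its numbers are illustrative, not from the paper):

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Bayes filter for a discrete POMDP:
    b'(s') ∝ O[a][o, s'] * sum_s T[a][s, s'] * b(s).
    T[a] is an |S| x |S| transition matrix; O[a][o, s'] is P(o | s', a)."""
    predicted = b @ T[a]             # push the belief through the dynamics
    weighted = O[a][o] * predicted   # reweight by the observation likelihood
    return weighted / weighted.sum()

# Hypothetical model: one 'listen' action that leaves the hidden state
# unchanged but reports it with 85% accuracy.
T = {'listen': np.eye(2)}
O = {'listen': np.array([[0.85, 0.15],
                         [0.15, 0.85]])}
b = np.array([0.5, 0.5])
b = belief_update(b, 'listen', 0, T, O)   # observe evidence for state 0
# b is now (0.85, 0.15)
```

The scalability problem the abstract refers to arises because exact value functions are defined over this continuous belief simplex, whose effective dimension grows with the number of hidden states.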
Bayes-ball: The rational pastime (for determining irrelevance and requisite information in belief networks and influence diagrams)
 In Uncertainty in Artificial Intelligence
, 1998
Cited by 42 (3 self)

Abstract:
One of the benefits of belief networks and influence diagrams is that so much knowledge is captured in the graphical structure. In particular, statements of conditional irrelevance (or independence) can be verified in time linear in the size of the graph. To resolve a particular inference query or decision problem, only some of the possible states and probability distributions must be specified, the "requisite information." This paper presents a new, simple, and efficient "Bayes-ball" algorithm which is well-suited to both new students of belief networks and state-of-the-art implementations. The Bayes-ball algorithm determines irrelevant sets and requisite information more efficiently than existing methods, and is linear in the size of the graph for belief networks and influence diagrams.
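The reachability idea behind Bayes-ball — bounce a ball through the DAG, with observed and unobserved nodes passing or blocking it depending on the direction of arrival — can be sketched as a plain d-separation check. This is a simplified version in the spirit of the algorithm, not the paper's full requisite-information procedure:

```python
from collections import deque

def d_separated(parents, x, y, observed):
    """True if x is d-separated from y given `observed`, in the DAG whose
    node -> parent-list map is `parents` (every node must appear as a key)."""
    children = {v: set() for v in parents}
    for v, ps in parents.items():
        for p in ps:
            children[p].add(v)
    observed = set(observed)
    # Ball states: 'up' = arrived from a child, 'down' = arrived from a parent.
    queue = deque([(x, 'up')])
    visited = set()
    while queue:
        node, direction = queue.popleft()
        if (node, direction) in visited:
            continue
        visited.add((node, direction))
        if node == y:
            return False                      # y reachable: dependent
        if direction == 'up' and node not in observed:
            for p in parents[node]:           # pass through to parents...
                queue.append((p, 'up'))
            for c in children[node]:          # ...and down to children
                queue.append((c, 'down'))
        elif direction == 'down':
            if node not in observed:          # chain/fork: keep going down
                for c in children[node]:
                    queue.append((c, 'down'))
            else:                             # v-structure: bounce to parents
                for p in parents[node]:
                    queue.append((p, 'up'))
    return True
```

Each (node, direction) pair is visited at most once, which is the source of the linear-time guarantee the abstract states.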
Robot Trajectory Optimization using Approximate Inference
Cited by 41 (14 self)

Abstract:
The general stochastic optimal control (SOC) problem in robotics scenarios is often too complex to be solved exactly and in near real time. A classical approximate solution is to first compute an optimal (deterministic) trajectory and then solve a local linear-quadratic-Gaussian (LQG) perturbation model to handle the system stochasticity. We present a new algorithm for this approach which improves upon previous algorithms like iLQG. We consider a probabilistic model for which the maximum likelihood (ML) trajectory coincides with the optimal trajectory and which, in the LQG case, reproduces the classical SOC solution. The algorithm then utilizes approximate inference methods (similar to expectation propagation) that efficiently generalize to non-LQG systems. We demonstrate the algorithm on a simulated 39-DoF humanoid robot.
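The LQG perturbation model at the heart of this family of methods reduces, in the discrete-time case, to a backward Riccati recursion for time-varying feedback gains. A minimal sketch of generic finite-horizon LQR (not the paper's iLQG variant or its inference-based replacement):

```python
import numpy as np

def lqr_backward_pass(A, B, Q, R, Qf, T):
    """Finite-horizon discrete LQR. Dynamics x_{t+1} = A x_t + B u_t,
    stage cost x'Qx + u'Ru, terminal cost x'Qf x. Returns gains K_0..K_{T-1}
    for the optimal controller u_t = -K_t x_t."""
    P = Qf
    gains = []
    for _ in range(T):                                   # backward in time
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)                    # Riccati update
        gains.append(K)
    gains.reverse()                                      # K[0] is for t = 0
    return gains
```

Running the gains forward as u_t = -K_t x_t drives the state toward the origin; iLQG-style methods re-linearize a nonlinear system around the current trajectory and repeat this backward pass until convergence.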
A Computational Theory of Decision Networks
 International Journal of Approximate Reasoning
, 1994
Cited by 33 (2 self)

Abstract:
This paper is about how to represent and solve decision problems in Bayesian decision theory (e.g. [6]). A general representation named decision networks is proposed based on influence diagrams [10]. This new representation incorporates the idea, from Markov decision process (e.g. [5]), that a decision may be conditionally independent of certain pieces of available information. It also allows multiple cooperative agents and facilitates the exploitation of separability in the utility function. Decision networks inherit the advantages of both influence diagrams and Markov decision processes, which makes them a better representation framework for decision analysis, planning under uncertainty, medical diagnosis and treatment.
Planning and control in stochastic domains with imperfect information
, 1997
Cited by 32 (6 self)

Abstract:
Partially observable Markov decision processes (POMDPs) can be used to model complex control problems that include both action-outcome uncertainty and imperfect observability. A control problem within the POMDP framework is expressed as a dynamic optimization problem with a value function that combines costs or rewards from multiple steps. Although the POMDP framework is more expressive than other simpler frameworks, like Markov decision processes (MDPs), its associated optimization methods are more demanding computationally and only very small problems can be solved exactly in practice. Our work focuses on two possible approaches that can be used to solve larger problems: approximation methods and exploitation of additional problem structure. First, a number of new efficient approximation methods and improvements of existing algorithms are proposed. These include (1) the fast informed bound method based on approximate dynamic programming updates that lead to piecewise linear and convex v...
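A common baseline in this family of MDP-based approximations is QMDP, a simpler relative of the fast informed bound mentioned above (FIB additionally folds in the observation model): solve the fully observable MDP, then act on the belief. A sketch under that simplification:

```python
import numpy as np

def qmdp(T, R, gamma, n_iter=200):
    """Value-iterate the underlying fully observable MDP to get Q(s, a).
    T has shape (nA, nS, nS) with T[a, s, s'] = P(s' | s, a); R has shape
    (nS, nA). A QMDP agent then picks argmax_a b @ Q[:, a] for belief b,
    which is cheap but ignores the value of information-gathering actions."""
    nS, nA = R.shape
    Q = np.zeros((nS, nA))
    for _ in range(n_iter):
        V = Q.max(axis=1)                               # greedy MDP values
        Q = R + gamma * np.einsum('ase,e->sa', T, V)    # Bellman backup
    return Q
```

Because the agent pretends all uncertainty vanishes after one step, the resulting belief-space value function is an upper bound on the true POMDP value, which is exactly the role such bounds play in the approximation schemes discussed here.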
Probabilistic Inference in Influence Diagrams
 Computational Intelligence
, 1998
Cited by 32 (0 self)

Abstract:
This paper is about reducing influence diagram (ID) evaluation into Bayesian network (BN) inference problems. Such reduction is interesting because it enables one to readily use one's favorite BN inference algorithm to efficiently evaluate IDs. Two such reduction methods have been proposed previously (Cooper 1988, Shachter and Peot 1992). This paper proposes a new method. The BN inference problems induced by the new method are much easier to solve than those induced by the two previous methods.
Myopic Value of Information in Influence Diagrams
 In UAI
, 1997
Cited by 30 (3 self)

Abstract:
We present a method for calculation of myopic value of information in influence diagrams (Howard & Matheson, 1981) based on the strong junction tree framework (Jensen et al., 1994). An influence diagram specifies a certain order of observations and decisions through its structure. This order is reflected in the corresponding junction trees by the order in which the nodes are marginalized. This order of marginalization can be changed by table expansion and use of control structures, and this facilitates calculating the expected value of information for different information scenarios within the same junction tree. In effect, a strong junction tree with expanded tables may be used for calculating the value of information between several scenarios with different observation-decision order. We compare our method to other methods for calculating the value of information in influence diagrams.
Efficient value of information computation
 In Proceedings of the 15th Annual Conference on Uncertainty in Artificial Intelligence
, 1999
Cited by 23 (3 self)

Abstract:
One of the most useful sensitivity analysis techniques of decision analysis is the computation of value of information (or clairvoyance), the difference in value obtained by changing the decisions by which some of the uncertainties are observed. In this paper, some simple but powerful extensions to previous algorithms are introduced which allow an efficient value of information calculation on the rooted cluster tree (or strong junction tree) used to solve the original decision problem.
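For a single decision and a single observable uncertainty X, the myopic value-of-information quantity these extensions compute reduces to comparing the expected utility of deciding after observing X against deciding without it. A minimal sketch (the investment numbers are purely illustrative):

```python
def value_of_information(p_x, utility):
    """EVI(X) = E_x[max_d U(d, x)] - max_d E_x[U(d, x)].
    p_x maps outcomes of X to probabilities; utility[d][x] is the utility
    of decision d when X turns out to be x."""
    decisions = list(utility)
    best_without = max(sum(p_x[x] * utility[d][x] for x in p_x)
                       for d in decisions)
    best_with = sum(p_x[x] * max(utility[d][x] for d in decisions)
                    for x in p_x)
    return best_with - best_without

# Hypothetical payoff table: clairvoyance about the market is worth 30 here.
p_market = {'good': 0.5, 'bad': 0.5}
payoff = {'invest': {'good': 100, 'bad': -60},
          'pass':   {'good': 0,   'bad': 0}}
```

EVI is always nonnegative; the contribution of the paper is computing it for many candidate observations on the one rooted cluster tree built for the original problem, rather than re-solving the decision problem once per candidate.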