Random Algorithms for the Loop Cutset Problem
Journal of Artificial Intelligence Research, 1999
Abstract

Cited by 81 (2 self)
We show how to find a minimum loop cutset in a Bayesian network with high probability. Finding such a loop cutset is the first step in Pearl's method of conditioning for inference. Our random algorithm for finding a loop cutset, called RepeatedWGuessI, outputs a minimum loop cutset after O(c · 6^k · kn) steps with probability at least 1 − (1 − 1/6^k)^(c·6^k), where c > 1 is a constant specified by the user, k is the size of a minimum weight loop cutset, and n is the number of vertices. We also show empirically that a variant of this algorithm, called WRA, often finds a loop cutset that is closer to the minimum loop cutset than those found by the best known deterministic algorithms.
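The repeated degree-weighted guessing described in the abstract can be sketched as follows. This is only an illustration of the idea, not the paper's RepeatedWGuessI: the function names, the reduction used (stripping degree-at-most-1 vertices), and the trial count are our assumptions.

```python
import random

def one_guess(edges, vertices, rng):
    # Build an undirected adjacency map (simple graph assumed).
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cutset = set()
    while adj:
        # Vertices of degree <= 1 lie on no cycle: strip them.
        leaves = [v for v in adj if len(adj[v]) <= 1]
        if leaves:
            for v in leaves:
                for u in adj[v]:
                    if u in adj:
                        adj[u].discard(v)
                del adj[v]
            continue
        # All remaining degrees are >= 2, so a cycle exists: guess a
        # cutset vertex with probability proportional to its degree.
        vs = list(adj)
        v = rng.choices(vs, weights=[len(adj[w]) for w in vs])[0]
        cutset.add(v)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return cutset

def repeated_guess(edges, vertices, trials=200, seed=0):
    # Keep the smallest cutset found over independent guesses; more
    # trials raise the probability of hitting a minimum loop cutset.
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        c = one_guess(edges, vertices, rng)
        if best is None or len(c) < len(best):
            best = c
    return best
```

On, say, two triangles sharing a vertex, the shared vertex is the unique minimum loop cutset, and degree-weighted guessing favours it; a uniform random guess would need many more repetitions to achieve a comparable success probability.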
Optimization of Pearl's Method of Conditioning and Greedy-Like Approximation Algorithms for the Vertex Feedback Set Problem
Artificial Intelligence, 1997
Abstract

Cited by 20 (4 self)
We show how to find a small loop cutset in a Bayesian network.
Conditioning Methods for Exact and Approximate Inference in Causal Networks
In Proceedings of the 11th Conference on Uncertainty in Artificial Intelligence, 1995
Abstract

Cited by 9 (0 self)
We present two algorithms for exact and approximate inference in causal networks. The first algorithm, dynamic conditioning, is a refinement of cutset conditioning that has linear complexity on some networks for which cutset conditioning is exponential. The second algorithm, B-conditioning, is an algorithm for approximate inference that allows one to trade off the quality of approximations against computation time. We also present experimental results illustrating the properties of the proposed algorithms.

1 INTRODUCTION

Cutset conditioning is one of the earliest algorithms for evaluating multiply connected networks [6]. It works by reducing a multiply connected network to a number of conditioned singly connected networks, each corresponding to a particular instantiation of a loop cutset [6, 7]. Cutset conditioning is simple, but it leads to an exponential number of conditioned networks and is therefore not practical unless the size of the loop cutset ...
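The exponential blow-up the introduction describes is visible in the conditioning loop itself: one singly-connected (polytree) computation per instantiation of the loop cutset. A minimal sketch of that loop, with names of our own choosing; `polytree_query` stands in for whatever singly connected inference routine is used, and is not from the paper:

```python
from itertools import product

def cutset_condition(cutset_domains, polytree_query):
    # Sum the query answer over every instantiation of the loop
    # cutset; each instantiation corresponds to one conditioned,
    # singly connected network.  The number of iterations is the
    # product of the cutset variables' domain sizes.
    total = 0.0
    count = 0
    for values in product(*cutset_domains.values()):
        instantiation = dict(zip(cutset_domains, values))
        total += polytree_query(instantiation)
        count += 1
    return total, count
```

With two binary cutset variables this already runs 4 polytree computations; k binary cutset variables require 2^k, which is why small loop cutsets matter.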