Random Algorithms for the Loop Cutset Problem
 Journal of Artificial Intelligence Research
, 1999
Cited by 81 (2 self)
We show how to find a minimum loop cutset in a Bayesian network with high probability. Finding such a loop cutset is the first step in Pearl's method of conditioning for inference. Our random algorithm for finding a loop cutset, called RepeatedWGuessI, outputs a minimum loop cutset after O(c · 6^k · kn) steps, with probability at least 1 − (1 − 1/6^k)^{c·6^k}, where c > 1 is a constant specified by the user, k is the size of a minimum weight loop cutset, and n is the number of vertices. We also show empirically that a variant of this algorithm, called WRA, often finds a loop cutset that is closer to the minimum loop cutset than the ones found by the best deterministic algorithms known.
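The success-probability bound quoted in the abstract can be evaluated directly. A minimal sketch in Python (the function name is ours, not from the paper); note that as 6^k grows the failure term (1 − 1/6^k)^{c·6^k} approaches e^{−c}, so the bound tends to 1 − e^{−c} regardless of k:

```python
import math

def success_probability(c: float, k: int) -> float:
    """Lower bound on the probability that the randomized algorithm
    outputs a minimum loop cutset: 1 - (1 - 1/6^k)^(c * 6^k)."""
    m = 6 ** k
    return 1.0 - (1.0 - 1.0 / m) ** (c * m)

# The bound depends only weakly on k and converges to 1 - e^{-c}.
for k in (1, 3, 6):
    print(k, round(success_probability(2.0, k), 4))
```

For c = 2 the bound is already close to 1 − e^{−2} ≈ 0.8647 even for small k, which is why c is left as a user-chosen constant trading run time against confidence.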
Global Conditioning for Probabilistic Inference in Belief Networks
 In Proc. Tenth Conference on Uncertainty in AI
, 1994
Cited by 46 (0 self)
In this paper we propose a new approach to probabilistic inference on belief networks, global conditioning, which is a simple generalization of Pearl's (1986b) method of loop-cutset conditioning. We show that global conditioning, as well as loop-cutset conditioning, can be thought of as a special case of the method of Lauritzen and Spiegelhalter (1988) as refined by Jensen et al. (1990a; 1990b).
Probabilistic multiscale image segmentation
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1997
Cited by 42 (3 self)
Abstract—A method is presented to segment multidimensional images using a multiscale (hyperstack) approach with probabilistic linking. A hyperstack is a voxel-based multiscale data structure whose levels are constructed by convolving the original image with a Gaussian kernel of increasing width. Between voxels at adjacent scale levels, child-parent linkages are established according to a model-directed linkage scheme. In the resulting tree-like data structure, roots are formed to indicate the most plausible locations in scale space where segments in the original image are represented by a single voxel. The final segmentation is obtained by tracing back the linkages for all roots. The present paper deals with probabilistic (or multiparent) linking, i.e., a setup in which a child voxel can be linked to more than one parent voxel. The multiparent linkage structure is translated into a list of probabilities that indicate which voxels are partial volume voxels and to what extent. Probability maps are generated to visualize the progress of weak linkages in scale space when going from fine to coarser scale. This is shown to be a valuable tool for the detection of voxels that are difficult to segment properly. The output of a probabilistic hyperstack can be directly related to the opacities used in volume renderers. Results are shown both for artificial and real-world (medical) images. It is demonstrated that probabilistic linking gives a significantly improved segmentation as compared with conventional (single-parent) linking. The improvement is quantitatively supported by an objective evaluation method. Index Terms—Image segmentation, multiscale analysis, scale space, probability maps, partial volume artifact, object definition.
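The hyperstack construction can be illustrated in one dimension. The sketch below (pure Python, our own drastic simplification of the paper's 3-D voxel scheme) builds scale levels by Gaussian smoothing with increasing sigma and links each child sample to the most similar parent in a small spatial window; it implements only single-parent linking, whereas the paper's probabilistic variant spreads the link weight over several candidate parents:

```python
import math

def gaussian_kernel(sigma: float, radius: int) -> list:
    """Normalized 1-D Gaussian kernel of the given radius."""
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal: list, kernel: list) -> list:
    """Convolve with edge clamping so output values stay in the input range."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def build_hyperstack(signal: list, sigmas: list) -> list:
    """Level 0 is the original signal; higher levels are increasingly smoothed."""
    return [signal] + [convolve(signal, gaussian_kernel(s, 3 * int(math.ceil(s))))
                       for s in sigmas]

def link_parents(child_level: list, parent_level: list, window: int = 1) -> list:
    """Link each child sample to the most intensity-similar parent
    within a small spatial window (single-parent linking)."""
    links = []
    for i, v in enumerate(child_level):
        cands = range(max(0, i - window), min(len(parent_level), i + window + 1))
        links.append(min(cands, key=lambda j: abs(parent_level[j] - v)))
    return links
```

Tracing the links from coarse roots back to level 0 then yields the segments, as the abstract describes for the full 3-D case.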
Approximation Algorithms for the Feedback Vertex Set Problem with Applications to Constraint Satisfaction and Bayesian Inference
, 1998
Cited by 31 (4 self)
A feedback vertex set of an undirected graph is a subset of vertices that intersects the vertex set of each cycle in the graph. Given an undirected graph G with n vertices and weights on its vertices, polynomial-time algorithms are provided for approximating the problem of finding a feedback vertex set of G with smallest weight. When the weights of all vertices in G are equal, the performance ratio attained by these algorithms is 4 − 2/n. This improves a previous algorithm which achieved an approximation factor of O(√(log n)) for this case. For general vertex weights, the performance ratio becomes min{2Δ², 4 log₂ n}, where Δ denotes the maximum degree in G. For the special case of planar graphs this ratio is reduced to 10. An interesting special case of weighted graphs where a performance ratio of 4 − 2/n is achieved is the one where a prescribed subset of the vertices, so-called blackout vertices, is not allowed to participate in any feedback verte...
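The performance ratios quoted above are easy to tabulate; a small helper (function names are ours) makes the trade-off between the degree term and the log term explicit:

```python
import math

def weighted_fvs_ratio(n: int, max_degree: int) -> float:
    """Performance-ratio bound for general vertex weights:
    min(2 * Delta^2, 4 * log2 n)."""
    return min(2.0 * max_degree ** 2, 4.0 * math.log2(n))

def unweighted_fvs_ratio(n: int) -> float:
    """Performance-ratio bound when all vertex weights are equal: 4 - 2/n."""
    return 4.0 - 2.0 / n

# For bounded-degree graphs the 2*Delta^2 term wins;
# for high-degree graphs the 4*log2(n) term takes over.
print(weighted_fvs_ratio(1024, 3))   # min(18, 40) -> 18.0
print(unweighted_fvs_ratio(100))     # 3.98
```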
Local Conditioning in Bayesian Networks
 Artificial Intelligence
, 1996
Cited by 28 (6 self)
Local conditioning (LC) is an exact algorithm for computing probability in Bayesian networks, developed as an extension of Kim and Pearl's algorithm for singly-connected networks. A list of variables associated with each node guarantees that only the nodes inside a loop are conditioned on the variable which breaks it. The main advantage of this algorithm is that it computes the probability directly on the original network instead of building a cluster tree, and this can save time when debugging a model and when the sparsity of evidence allows a pruning of the network. The algorithm is also advantageous when some families in the network interact through AND/OR gates. A parallel implementation of the algorithm with a processor for each node is possible even in the case of multiply-connected networks.
1 Introduction
A Bayesian network is an acyclic directed graph in which every node represents a random variable, together with a probability distribution such that P(x₁, …, xₙ) = ...
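The factorization that the truncated formula refers to, P(x₁, …, xₙ) = Π_i P(x_i | pa(x_i)), can be sketched in a few lines; the two-node network below is our own illustrative example, not from the paper:

```python
def joint_probability(assignment: dict, cpts: dict, parents: dict) -> float:
    """P(x1,...,xn) as the product over nodes of P(x_i | parents(x_i))."""
    p = 1.0
    for var, cpt in cpts.items():
        pa_vals = tuple(assignment[q] for q in parents[var])
        p *= cpt[pa_vals][assignment[var]]
    return p

# Tiny network A -> B with binary variables (illustrative numbers).
parents = {"A": (), "B": ("A",)}
cpts = {
    "A": {(): {0: 0.7, 1: 0.3}},
    "B": {(0,): {0: 0.9, 1: 0.1}, (1,): {0: 0.2, 1: 0.8}},
}
print(round(joint_probability({"A": 1, "B": 1}, cpts, parents), 6))  # 0.24
```

Summing this product over all joint assignments gives 1, which is exactly what makes the directed acyclic graph plus local conditional distributions a valid joint distribution.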
Approximating Bayesian Belief Networks by Arc Removal
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1997
Cited by 21 (0 self)
Bayesian belief networks or causal probabilistic networks may reach a certain size and complexity where the computations involved in exact probabilistic inference on the network tend to become rather time consuming. Methods for approximating a network by a simpler one allow the computational complexity of probabilistic inference on the network to be reduced at least to some extent. We propose a general framework for approximating Bayesian belief networks based on model simplification by arc removal. The approximation method aims at reducing the computational complexity of probabilistic inference on a network at the cost of introducing a bounded error in the prior and posterior probabilities inferred. We present a practical approximation scheme and give some preliminary results.
1 Introduction
Today, more and more applications based on the Bayesian belief network formalism are emerging for reasoning and decision making in problem domains with inherent uncertainty. Current applicati...
Optimization of Pearl's Method of Conditioning and Greedy-Like Approximation Algorithms for the Vertex Feedback Set Problem
 Artificial Intelligence
, 1997
Cited by 20 (4 self)
We show how to find a small loop cutset in a Bayesian network.
Node Splitting: A Scheme for Generating Upper Bounds in Bayesian Networks
 In Proceedings of UAI'07
, 2007
Cited by 11 (3 self)
We formulate in this paper the mini-bucket algorithm for approximate inference in terms of exact inference on an approximate model produced by splitting nodes in a Bayesian network. The new formulation leads to a number of theoretical and practical implications. First, we show that branch-and-bound search algorithms that use mini-bucket bounds may operate in a drastically reduced search space. Second, we show that the proposed formulation inspires new mini-bucket heuristics and allows us to analyze existing heuristics from a new perspective. Finally, we show that this new formulation allows mini-bucket approximations to benefit from recent advances in exact inference, allowing one to significantly increase the reach of these approximations.