Results 1–10 of 39
AIS-BN: An Adaptive Importance Sampling Algorithm for Evidential Reasoning in Large Bayesian Networks
Journal of Artificial Intelligence Research, 2000
Cited by 69 (4 self)
Abstract
Stochastic sampling algorithms, while an attractive alternative to exact algorithms in very large Bayesian network models, have been observed to perform poorly in evidential reasoning with extremely unlikely evidence. To address this problem, we propose an adaptive importance sampling algorithm, AIS-BN, that shows promising convergence rates even under extreme conditions and seems to outperform the existing sampling algorithms consistently. Three sources of this performance improvement are (1) two heuristics for initialization of the importance function that are based on the theoretical properties of importance sampling in finite-dimensional integrals and the structural advantages of Bayesian networks, (2) a smooth learning method for the importance function, and (3) a dynamic weighting function for combining samples from different stages of the algorithm. We tested the performance of the AIS-BN algorithm along with two state-of-the-art general-purpose sampling algorithms, lik...
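The core mechanism this abstract relies on — sampling from a proposal distribution and reweighting by P/Q — can be sketched on a toy two-node network. The numbers and the fixed proposal below are illustrative assumptions, not AIS-BN itself, which learns its importance function adaptively:

```python
import random

# Toy network: rare cause X with unlikely evidence e.
# These numbers are hypothetical, chosen only for illustration.
P_X1 = 0.01                       # prior P(X = 1)
P_E_GIVEN = {1: 0.9, 0: 0.001}    # likelihood P(e | X)

def importance_estimate(q_x1, n, seed=0):
    """Estimate P(e) = sum_x P(x) P(e|x) by sampling X from a proposal Q."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = 1 if rng.random() < q_x1 else 0
        q = q_x1 if x == 1 else 1 - q_x1          # proposal probability Q(x)
        p = P_X1 if x == 1 else 1 - P_X1          # prior probability P(x)
        total += p * P_E_GIVEN[x] / q             # importance weight P(x)P(e|x)/Q(x)
    return total / n

# Exact answer for comparison: 0.01 * 0.9 + 0.99 * 0.001 = 0.00999
exact = P_X1 * P_E_GIVEN[1] + (1 - P_X1) * P_E_GIVEN[0]
```

Setting the proposal to q_x1 = 0.5 deliberately oversamples the rare cause, which is why the estimator stays well behaved where plain forward sampling would almost never see X = 1.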
An Optimal Algorithm for Monte Carlo Estimation
1995
Cited by 53 (4 self)
Abstract
A typical approach to estimate an unknown quantity µ is to design an experiment that produces a random variable Z distributed in [0, 1] with E[Z] = µ, run this experiment independently a number of times, and use the average of the outcomes as the estimate. In this paper, we consider the case when no a priori information about Z is known except that it is distributed in [0, 1]. We describe an approximation algorithm AA which, given ε and δ, when running independent experiments with respect to any Z, produces an estimate that is within a factor 1 + ε of µ with probability at least 1 − δ. We prove that the expected number of experiments run by AA (which depends on Z) is optimal to within a constant factor for every Z. An announcement of these results appears in P. Dagum, R. Karp, M. Luby, S. Ross, "An Optimal Algorithm for Monte Carlo Estimation (extended abstract)", Proceedings of the Thirty-sixth IEEE Symposium on Foundations of Computer Science, 1995, pp. 142–149 [3]. Section ...
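For contrast with AA's adaptive stopping rule, the standard fixed-sample-size baseline for a [0, 1]-valued Z follows from Hoeffding's inequality. Note this gives an additive error guarantee rather than the paper's multiplicative (1 + ε) one, and the function names below are our own:

```python
import math
import random

def hoeffding_sample_size(eps, delta):
    """Samples sufficient so that |estimate - mu| <= eps with probability
    >= 1 - delta, for ANY Z in [0, 1] (additive error; AA's guarantee is
    multiplicative and its sample count adapts to Z)."""
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

def estimate_mean(draw, eps, delta, seed=0):
    """Run the fixed number of independent experiments and average them."""
    rng = random.Random(seed)
    n = hoeffding_sample_size(eps, delta)
    return sum(draw(rng) for _ in range(n)) / n
```

The fixed bound is oblivious to Z: a near-deterministic Z pays the same price as a worst-case one, which is exactly the inefficiency an adaptive algorithm like AA avoids.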
Monte Carlo Model Checking
In Proc. of Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2005), volume 3440 of LNCS, 2005
Cited by 43 (4 self)
Abstract
We present MC², what we believe to be the first randomized, Monte Carlo algorithm for temporal-logic model checking, the classical problem of deciding whether or not a property specified in temporal logic holds of a system specification. Given a specification S of a finite-state system, an LTL (Linear Temporal Logic) formula ϕ, and parameters ε and δ, MC² takes N = ln(δ) / ln(1 − ε) random samples (random walks ending in a cycle, i.e., lassos) from the Büchi automaton B = BS × B¬ϕ to decide if L(B) = ∅. Should a sample reveal an accepting lasso l, MC² returns false with l as a witness. Otherwise, it returns true and reports that with probability less than δ, pZ < ε, where pZ is the expectation of an accepting lasso in B. It does so in time O(N · D) and space O(D), where D is B's recurrence diameter, using a number of samples N that is optimal to within a constant factor. Our experimental results demonstrate that MC² is fast, memory-efficient, and scales very well.
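The sample bound N = ln(δ)/ln(1 − ε) quoted above is exactly the smallest N with (1 − ε)^N ≤ δ, which a few lines of code can check (a sketch; `mc2_sample_count` is our name, not from the paper):

```python
import math

def mc2_sample_count(eps, delta):
    """N = ln(delta) / ln(1 - eps), rounded up.

    After N samples with no accepting lasso found, the probability that the
    per-sample chance of an accepting lasso is still >= eps is at most
    (1 - eps)^N <= delta."""
    return math.ceil(math.log(delta) / math.log(1 - eps))
```

For ε = δ = 0.01 this gives N = 459, and N grows only logarithmically in 1/δ, which is what makes the one-sided error guarantee cheap.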
Stochastic local search for Bayesian networks
In Workshop on AI and Statistics, 1999
Cited by 34 (9 self)
Abstract
The paper evaluates empirically the suitability of Stochastic Local Search algorithms (SLS) for finding most probable explanations in Bayesian networks. SLS algorithms (e.g., GSAT, WSAT [16]) have recently proven to be highly effective in solving complex constraint-satisfaction and satisfiability problems which cannot be solved by traditional search schemes. Our experiments investigate the applicability of this scheme to probabilistic optimization problems. Specifically, we show that algorithms combining hill-climbing steps with stochastic steps (guided by the network's probability distribution), called G+StS, outperform pure hill-climbing search, pure stochastic simulation search, as well as simulated annealing. In addition, variants of G+StS that are augmented on top of alternative approximation methods are shown to be particularly effective.
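The idea of interleaving greedy hill-climbing flips with stochastic flips can be sketched generically. This is a minimal illustration of the search scheme, not the paper's G+StS algorithm, and the toy scoring function in the test stands in for a network's log-probability:

```python
import random

def sls_mpe(log_score, n_vars, steps=500, p_random=0.3, seed=0):
    """Hill climbing over binary assignments, mixed with random flips.

    With probability p_random take a stochastic step (flip a random variable);
    otherwise take a greedy step (flip the variable whose flip yields the
    highest score). The best assignment seen is tracked separately, so
    worsening moves cannot lose it."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_vars)]
    best, best_score = x[:], log_score(x)
    for _ in range(steps):
        if rng.random() < p_random:
            i = rng.randrange(n_vars)            # stochastic step
        else:
            def gain(j):                         # score after flipping x[j]
                x[j] ^= 1
                s = log_score(x)
                x[j] ^= 1
                return s
            i = max(range(n_vars), key=gain)     # greedy step
        x[i] ^= 1
        s = log_score(x)
        if s > best_score:
            best, best_score = x[:], s
    return best, best_score
```

On a trivially decomposable score such as `sum(x)` the loop finds the optimum quickly; the interest of the scheme is precisely that the stochastic steps let it escape local maxima that pure hill climbing gets stuck in.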
Complexity results and approximation strategies for MAP explanations
Journal of Artificial Intelligence Research, 2004
Cited by 33 (3 self)
Abstract
MAP is the problem of finding a most probable instantiation of a set of variables given evidence. MAP has always been perceived to be significantly harder than the related problems of computing the probability of a variable instantiation (Pr) or the problem of computing the most probable explanation (MPE). This paper investigates the complexity of MAP in Bayesian networks. Specifically, we show that MAP is complete for NP^PP and provide further negative complexity results for algorithms based on variable elimination. We also show that MAP remains hard even when MPE and Pr become easy. For example, we show that MAP is NP-complete when the networks are restricted to polytrees, and even then cannot be effectively approximated. Given the difficulty of computing MAP exactly, and the difficulty of approximating MAP while providing useful guarantees on the resulting approximation, we investigate best-effort approximations. We introduce a generic MAP approximation framework. We provide two instantiations of the framework: one for networks which are amenable to exact inference (Pr), and one for networks for which even exact inference is too hard. This allows MAP approximation on networks that are too complex to even exactly solve the easier problems, Pr and MPE. Experimental results indicate that using these approximation algorithms provides much better solutions than standard techniques, and provides accurate MAP estimates in many cases.
A Survey of Algorithms for Real-Time Bayesian Network Inference
In the joint AAAI-02/KDD-02/UAI-02 Workshop on Real-Time Decision Support and Diagnosis Systems, 2002
Cited by 32 (2 self)
Abstract
As Bayesian networks are applied to more complex and realistic real-world applications, the development of more efficient inference algorithms working under real-time constraints is becoming more and more important. This paper presents a survey of various exact and approximate Bayesian network inference algorithms. In particular, previous research on real-time inference is reviewed. It provides a framework for understanding these algorithms and the relationships between them. Some important issues in real-time Bayesian network inference are also discussed.
Generation of Random Bayesian Networks with Constraints on Induced Width, with Application to the Average Analysis of d-Connectivity, Quasi-random Sampling, and Loopy Propagation
In Proceedings of the 16th European Conference on Artificial Intelligence, 2004
Cited by 25 (1 self)
Abstract
We present algorithms for the generation of uniformly distributed Bayesian networks with constraints on induced width. The algorithms use ergodic Markov chains to generate samples, building upon previous algorithms by the authors. The introduction of constraints on induced width leads to more realistic results but requires new techniques. We discuss three applications of randomly generated networks: we study the average number of nodes d-connected to a query, the effectiveness of quasi-random samples in approximate inference, and the convergence of loopy propagation for non-extreme distributions.
Probabilistic reasoning as information compression by multiple alignment, unification and search: an introduction and overview
Journal of Universal Computer Science, 1996
Penniless propagation in join trees
International Journal of Intelligent Systems, 2000
Cited by 16 (7 self)
Abstract
This paper presents non-random algorithms for approximate computation in Bayesian networks. They are based on the use of probability trees to represent probability potentials, using the Kullback-Leibler cross-entropy as a measure of the error of the approximation. Different alternatives are presented and tested in several experiments with difficult propagation problems. The results show how it is possible to find good approximations in short time compared with the Hugin algorithm. © 2000 John Wiley & Sons, Inc.
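The error measure named above can be made concrete: pruning a probability tree amounts to replacing similar sibling leaves by their average, and the Kullback-Leibler cross-entropy quantifies the resulting loss. The potentials below are hypothetical numbers for illustration:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Exact potential over four configurations (hypothetical values).
p = [0.4, 0.35, 0.15, 0.1]
# Pruned tree: the two most similar leaves collapsed to their average,
# halving the number of stored parameters at a small KL cost.
q = [0.4, 0.35, 0.125, 0.125]
```

Collapsing the two closest leaves costs only about 0.005 nats here, which is the kind of trade-off between potential size and approximation error the paper's algorithms manage systematically.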