## Nonuniform Dynamic Discretization in Hybrid Networks (1997)


### Download Links

- [robotics.stanford.edu]
- [sci2s.ugr.es]
- [ai.stanford.edu]
- DBLP

### Other Repositories/Bibliography

Venue: Proc. UAI

Citations: 68 (3 self)

### BibTeX

```bibtex
@inproceedings{Kozlov97nonuniformdynamic,
  author    = {Alexander Kozlov and Daphne Koller},
  title     = {Nonuniform Dynamic Discretization in Hybrid Networks},
  booktitle = {Proc. UAI},
  year      = {1997},
  pages     = {314--325},
  publisher = {Morgan Kaufmann}
}
```

### Abstract

We consider probabilistic inference in general hybrid networks, which include continuous and discrete variables in an arbitrary topology. We reexamine the question of variable discretization in a hybrid network, with the aim of minimizing the information loss induced by the discretization. We show that a nonuniform partition across all variables, as opposed to a uniform partition of each variable separately, reduces the size of the data structures needed to represent a continuous function. We also provide a simple but efficient procedure for computing a nonuniform partition. To represent a nonuniform discretization in computer memory, we introduce a new data structure, which we call a Binary Split Partition (BSP) tree. We show that BSP trees can be an exponential factor smaller than the data structures used in standard uniform discretization in multiple dimensions, and we show how BSP trees can be used in the standard join tree algorithm. We show that the accuracy of the inference process can be significantly …
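The abstract describes the BSP tree as a recursive binary partition of a multidimensional domain by planes orthogonal to the coordinate axes. A minimal sketch of such a structure, assuming a fixed-depth, split-the-widest-axis heuristic; the class names and the heuristic are ours, not the paper's:

```python
# Minimal sketch of a Binary Split Partition (BSP) tree: each internal node
# splits its box by a plane orthogonal to one coordinate axis; each leaf
# stores a constant approximating the function over its box.
# Names and the split heuristic are illustrative, not the paper's.

class BSPNode:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi      # box corners, e.g. (0.0, 0.0), (1.0, 1.0)
        self.axis = None               # split axis (internal nodes only)
        self.cut = None                # split coordinate (internal nodes only)
        self.left = self.right = None
        self.value = None              # constant approximation (leaves only)

def build(node, f, depth, max_depth):
    """Recursively split the widest axis until max_depth, then store the
    function value at the box centre as the leaf's constant."""
    if depth == max_depth:
        centre = tuple((a + b) / 2 for a, b in zip(node.lo, node.hi))
        node.value = f(centre)
        return node
    widths = [b - a for a, b in zip(node.lo, node.hi)]
    ax = widths.index(max(widths))     # one simple heuristic: widest dimension
    cut = (node.lo[ax] + node.hi[ax]) / 2
    node.axis, node.cut = ax, cut
    hi_l = list(node.hi); hi_l[ax] = cut
    lo_r = list(node.lo); lo_r[ax] = cut
    node.left = build(BSPNode(node.lo, tuple(hi_l)), f, depth + 1, max_depth)
    node.right = build(BSPNode(tuple(lo_r), node.hi), f, depth + 1, max_depth)
    return node

def query(node, x):
    """Walk down to the leaf containing point x and return its constant."""
    while node.value is None:
        node = node.left if x[node.axis] < node.cut else node.right
    return node.value

root = build(BSPNode((0.0, 0.0), (1.0, 1.0)), lambda p: p[0] + p[1], 0, 4)
```

A real implementation would drive the splits by the discretization error (the paper's relative-entropy criterion) rather than a fixed depth, so that the tree is refined only where the represented function needs resolution.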

### Citations

**8888** | *Elements of Information Theory*, Cover and Thomas, 1991
> Citation context: …such as those found in a Bayesian network. All treatment in this part is based on the relative entropy or Kullback-Leibler (KL) distance between two probability density functions $f$ and $\tilde{f}$ [Cover and Thomas, 1991], $D(f \,\|\, \tilde{f}) = \int f \log (f/\tilde{f})$ (1), as a metric for the error introduced by the discretization. There are many justifications for the use of relative entropy as a distance metric…
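The excerpt above uses relative entropy (KL distance) as the metric for discretization error. A small numeric sketch of that metric, with toy distributions of our own choosing, comparing two candidate coarsenings of the same density:

```python
import math

def kl_divergence(p, q):
    """Discrete relative entropy D(p || q) = sum_i p_i * log(p_i / q_i)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Reference density over four cells, and two candidate 2-bin coarsenings
# (mass within each merged bin is spread evenly across its cells):
p = [0.4, 0.3, 0.2, 0.1]
q_balanced = [0.35, 0.35, 0.15, 0.15]   # merge cells {1,2} and {3,4}
q_skewed   = [0.40, 0.20, 0.20, 0.20]   # merge cells {1} and {2,3,4}

# The partition with the smaller KL distance loses less information:
err_balanced = kl_divergence(p, q_balanced)
err_skewed = kl_divergence(p, q_skewed)
```

For these particular numbers the balanced merge introduces less error, which illustrates how a discretization procedure can score candidate partitions by the relative entropy they induce.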

**7316** | *Probabilistic Reasoning in Intelligent Systems*, Pearl, 1988
> Citation context: …number of practical systems. Although there exists a number of efficient inference algorithms and implementations for probabilistic reasoning in Bayesian networks with discrete variables (for example, [Pearl, 1988, Lauritzen and Spiegelhalter, 1988, Li and D'Ambrosio, 1994, Dechter, 1996]), few algorithms support efficient inference in hybrid Bayesian networks, Bayesian networks where continuous and discrete v…

**4119** | *Classification and Regression Trees*, Breiman, Friedman, et al., 1984
> Citation context: …algorithm can be adapted to do inference with the BSP trees, and how the probabilistic inference steps can be appropriately interleaved with discretization steps. (Footnote 1: Recursive partition of multivariate domains has also been used in multivariate regression and machine learning [Breiman et al., 1984, Moore, 1991].) Like most other approximation techniques, our approach targets the discretization to do well on the most lik…

**1322** | *Local computations with probabilities on graphical structures and their application to expert systems*, Lauritzen and Spiegelhalter, 1988
> Citation context: …ical systems. Although there exists a number of efficient inference algorithms and implementations for probabilistic reasoning in Bayesian networks with discrete variables (for example, [Pearl, 1988, Lauritzen and Spiegelhalter, 1988, Li and D'Ambrosio, 1994, Dechter, 1996]), few algorithms support efficient inference in hybrid Bayesian networks, Bayesian networks where continuous and discrete variables are intermixed. Exact prob…

**292** | *Bucket elimination: A unifying framework for probabilistic inference*, Dechter, 1996
> Citation context: …ference algorithms and implementations for probabilistic reasoning in Bayesian networks with discrete variables (for example, [Pearl, 1988, Lauritzen and Spiegelhalter, 1988, Li and D'Ambrosio, 1994, Dechter, 1996]), few algorithms support efficient inference in hybrid Bayesian networks, Bayesian networks where continuous and discrete variables are intermixed. Exact probabilistic inference in hybrid networks c…

**173** | *Graphical models for associations between variables, some of which are qualitative and some quantitative*, Lauritzen and Wermuth, 1989
> Citation context: …d class of continuous functions. For example, one of the hybrid Bayesian network classes where exact probabilistic inference is possible is networks with Conditional Gaussian (CG) density functions [Lauritzen and Wermuth, 1989, Lauritzen, 1992, Olesen, 1993]. Probabilistic inference in these networks is polynomial in the number of continuous variables. However, the CG limitations on the dependencies between variables obstr…

**143** | *Propagation of probabilities, means, and variances in mixed graphical association models*, Lauritzen, 1992
> Citation context: …ns. For example, one of the hybrid Bayesian network classes where exact probabilistic inference is possible is networks with Conditional Gaussian (CG) density functions [Lauritzen and Wermuth, 1989, Lauritzen, 1992, Olesen, 1993]. Probabilistic inference in these networks is polynomial in the number of continuous variables. However, the CG limitations on the dependencies between variables obstruct the applicati…

**81** | *Variable resolution dynamic programming: Efficiently learning action maps in multivariate real-valued state-spaces*, Moore, 1991
> Citation context: …d to do inference with the BSP trees, and how the probabilistic inference steps can be appropriately interleaved with discretization steps. (Footnote 1: Recursive partition of multivariate domains has also been used in multivariate regression and machine learning [Breiman et al., 1984, Moore, 1991].) Like most other approximation techniques, our approach targets the discretization to do well on the most likely scenarios…

**51** | *Hierarchical data structures and algorithms for computer graphics*, Samet and Webber, 1988
> Citation context: …Binary Split Partition (BSP) tree. A BSP tree represents a recursive binary partition of a function domain and is similar to the quadtrees or octrees used in graphics for representing space objects [Samet and Webber, 1988]. In a BSP tree, we restrict the partitions of the multidimensional domains to binary splits by a plane orthogonal to one of the coordinate axes. We show that, for a given number of partitions, BSP …
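The excerpt restricts splits to axis-orthogonal binary cuts, and the abstract claims such adaptive partitions can be exponentially smaller than a uniform grid at the same finest resolution. A one-dimensional sketch of that effect, with a stopping rule of our own devising (the paper's criterion is KL-based, not this spread test):

```python
# Adaptive 1-D binary partition vs. a uniform grid: split an interval only
# where the function varies, and count the resulting leaves. Illustrative
# only; the paper's trees are multidimensional and refined by a KL-based
# criterion, not this three-point spread test (which can miss narrow features).

def adaptive_leaves(f, lo, hi, tol, min_width):
    """Count leaves of a recursive binary split that stops once the sampled
    spread of f over the cell is below tol, or the cell reaches min_width."""
    mid = (lo + hi) / 2
    samples = (f(lo), f(mid), f(hi))
    if max(samples) - min(samples) <= tol or (hi - lo) <= min_width:
        return 1
    return (adaptive_leaves(f, lo, mid, tol, min_width)
            + adaptive_leaves(f, mid, hi, tol, min_width))

# A density-like bump that is flat almost everywhere:
bump = lambda x: 1.0 if 0.49 < x < 0.51 else 0.0

n_adaptive = adaptive_leaves(bump, 0.0, 1.0, 0.5, 1.0 / 1024)
n_uniform = 1024  # a uniform grid at the same finest resolution
```

The adaptive partition concentrates its cells around the bump and leaves the flat regions as single leaves; in multiple dimensions the same effect compounds per axis, which is the source of the exponential-factor claim.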

**42** | *Efficient inference in Bayes networks as a combinatorial optimization problem*, Li and D'Ambrosio, 1994

**21** | *Implementation of continuous Bayesian networks using sums of weighted Gaussians*, Driver and Morrel, 1995
> Citation context: …extension of the previous technique is to decompose an arbitrary conditional probability distribution into several CG distributions, and to represent continuous functions as sums of CG functions [Driver and Morrel, 1995, Alag and Agogino, 1996]. The price one pays is fast growth of the number of terms in the sums during probabilistic inference. In a join tree, for example, each time a clique potential is multiplie…
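The excerpt notes that representing potentials as sums of CG functions makes the number of terms grow quickly: by distributivity, multiplying a mixture with m terms by one with n terms yields m·n terms. A one-dimensional Gaussian-mixture sketch of that growth (the helper and the example mixtures are ours):

```python
# Term growth when potentials are sums of Gaussians: the product of two
# mixtures expands, by distributivity, into one term per pair of inputs.
# 1-D illustration only; the paper concerns general CG potentials.
from itertools import product

def multiply_mixtures(a, b):
    """Each mixture is a list of (weight, mean, variance) 1-D Gaussian terms.
    The pointwise product of two Gaussian densities is an unnormalised
    Gaussian, so the product mixture has one term per pair of input terms.
    (The Gaussian scaling constant of each product is omitted here.)"""
    out = []
    for (wa, ma, va), (wb, mb, vb) in product(a, b):
        v = 1.0 / (1.0 / va + 1.0 / vb)   # product variance
        m = v * (ma / va + mb / vb)       # product mean
        out.append((wa * wb, m, v))       # weights multiply
    return out

msg = [(0.5, 0.0, 1.0), (0.5, 3.0, 1.0)]                      # 2 terms
pot = [(0.3, -1.0, 2.0), (0.4, 1.0, 2.0), (0.3, 4.0, 2.0)]    # 3 terms
joint = multiply_mixtures(pot, msg)                           # 3 * 2 = 6 terms
```

Each further message multiplication multiplies the term count again, which is why this representation needs pruning or collapsing of terms to stay tractable.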

**19** | *Causal probabilistic networks with both discrete and continuous variables*, Olesen, 1993
> Citation context: …one of the hybrid Bayesian network classes where exact probabilistic inference is possible is networks with Conditional Gaussian (CG) density functions [Lauritzen and Wermuth, 1989, Lauritzen, 1992, Olesen, 1993]. Probabilistic inference in these networks is polynomial in the number of continuous variables. However, the CG limitations on the dependencies between variables obstruct the application of hybrid n…

**8** | *Inference using message propagation and topology transformation in vector Gaussian continuous networks*, Alag and Agogino, 1996
> Citation context: …technique is to decompose an arbitrary conditional probability distribution into several CG distributions, and to represent continuous functions as sums of CG functions [Driver and Morrel, 1995, Alag and Agogino, 1996]. The price one pays is fast growth of the number of terms in the sums during probabilistic inference. In a join tree, for example, each time a clique potential is multiplied by a message, the numb…