## Automated Refinement of Bayes Networks’ Parameters based on Test Ordering Constraints

Citations: 2 (1 self)

### Citations

767 | A theory and methodology of inductive learning
- Michalski
- 1983

Context: ...o generalize from limited experience. Machine Learning has seen numerous approaches to learning task performance by imitation, going back to some of the approaches to inductive learning from examples [14]. Of particular interest are problem-solving tasks that use a model to infer the source, or cause, of a problem from a sequence of investigatory steps or tests. The specific example we adopt is a diagno...

174 | Information Value Theory
- Howard
- 1966

Context: ... which we determine constraints. The nature of these constraints, as shown herein, is derived from the value of the tests to distinguish causes, a value referred to informally as value of information [10]. It is the effect of these novel constraints on network parameter learning that is elucidated in this paper. [Footnote: J. M. Agosta is no longer affiliated with Intel Corporation.] Conventional statistical l...

151 | Fundamental concepts of qualitative probabilistic networks
- Wellman
- 1990

Context: ...s based on some domain knowledge is a way of pruning this search space and learning the parameters more efficiently, both in terms of data needed and time required. Qualitative probabilistic networks [17] allow qualitative constraints on the parameter space to be specified by experts. For instance, the influence of one variable on another, or the combined influence of multiple variables on another var...

85 | Decision-theoretic troubleshooting
- Heckerman, Breese, et al.
- 1995

Context: ...ble tests at each step, that is, the ability of a test to distinguish among the possible causes. One possible implementation with which to carry out this process, the one we apply, is a Bayes network [9]. As with all model-based approaches, provisioning an adequate model can be daunting, resulting in a “knowledge elicitation bottleneck.” A recent approach for easing the bottleneck grew out of the rea...

73 | Causal independence for probability assessment and inference using Bayesian networks
- Heckerman, Breese
- 1995

Context: ...iply-connected components. The test variable distributions Pr(T|C) incorporate the further modeling assumption of Independence of Causal Influence, the most familiar example being the Noisy-Or model [8]. To keep the exposition simple, we assume that all variables are binary and that conditional distributions are parametrized by the Noisy-Or; however, the algorithms described in the rest of the paper...
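The Noisy-Or model referenced in this snippet can be sketched as follows; the cause-to-test link probabilities `q` and the `leak` term here are hypothetical illustration values, not parameters from the cited paper.

```python
def noisy_or(active_causes, q, leak=0.0):
    """Pr(T = 1 | C) under Noisy-Or: the test fires unless every
    active cause independently fails to trigger it (and no leak fires)."""
    p_all_fail = 1.0 - leak
    for i in active_causes:
        p_all_fail *= 1.0 - q[i]  # cause i fails to trigger T with prob 1 - q[i]
    return 1.0 - p_all_fail

# Two active causes with link probabilities 0.8 and 0.5, no leak:
# Pr(T=1) = 1 - (1 - 0.8)(1 - 0.5) = 0.9
p = noisy_or(active_causes=[0, 1], q=[0.8, 0.5])
```

Note how the model needs only one parameter per cause-test edge rather than a full conditional table, which is what makes it attractive for the binary networks assumed in the snippet.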

60 | Elicitation of probabilities for belief networks: combining qualitative and quantitative information
- Druzdzel, van der Gaag
- 1995

Context: ... qualitative constraints on the parameter space to be specified by experts. For instance, the influence of one variable on another, or the combined influence of multiple variables on another variable [5] leads to linear inequalities on the parameters. Wittig and Jameson [18] explain how to transform the likelihood of violating qualitative constraints into a penalty term to adjust maximum likelihood, ...
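The linear inequalities mentioned in this snippet can be made concrete with a minimal sketch: a positive qualitative influence of X on Y is the constraint Pr(Y=1|X=1) ≥ Pr(Y=1|X=0) on the CPT entries. The numeric values below are hypothetical.

```python
def satisfies_positive_influence(p_y_given_x1, p_y_given_x0):
    """A positive qualitative influence of X on Y is the linear
    constraint Pr(Y=1 | X=1) >= Pr(Y=1 | X=0) on the CPT parameters."""
    return p_y_given_x1 >= p_y_given_x0

ok = satisfies_positive_influence(0.7, 0.3)        # constraint holds
violated = satisfies_positive_influence(0.2, 0.6)  # constraint violated
```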

54 | Learning from measurements in exponential families
- Liang, Jordan, et al.
- 2009

Context: ...ctions. We use Beta priors on the parameters, which can easily be extended to Dirichlet priors as in previous work. We incorporate constraints in an augmented Bayesian network, similar to Liang et al. [11], though their constraints are on model predictions as opposed to ours, which are on the parameters of the network. Finally, we also use the notion of probabilistic constraints to handle potential mist...
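The Beta priors mentioned in this snippet update conjugately with observed counts; a minimal sketch, with pseudo-counts and observations chosen purely for illustration.

```python
def beta_posterior(alpha, beta, successes, failures):
    """Conjugate update: a Beta(alpha, beta) prior plus binomial counts
    yields a Beta(alpha + successes, beta + failures) posterior."""
    return alpha + successes, beta + failures

def beta_mean(alpha, beta):
    """Posterior mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Uniform Beta(1, 1) prior, then 8 successes and 2 failures observed:
a, b = beta_posterior(1.0, 1.0, 8, 2)
mean = beta_mean(a, b)  # 9 / 12 = 0.75
```

The same bookkeeping generalizes to Dirichlet priors over multi-valued variables, which is the extension the snippet alludes to.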

34 | Bayesian network learning with parameter constraints
- Niculescu, Mitchell, et al.

Context: ...Various proposals have been made that exploit such constraints. Altendorf et al. [2] provide an approximate technique based on constrained convex optimization for parameter learning. Niculescu et al. [15] also provide a technique based on constrained optimization with closed-form solutions for different classes of constraints. Feelders [6] provides an alternate method based on isotonic regression whil...

31 | Learning from sparse data by exploiting monotonicity constraints
- Altendorf, Restificar, et al.
- 2005

Context: ...itative constraints include some parameters being larger than others, bounded in a range, within ϵ of each other, etc. Various proposals have been made that exploit such constraints. Altendorf et al. [2] provide an approximate technique based on constrained convex optimization for parameter learning. Niculescu et al. [15] also provide a technique based on constrained optimization with closed-form sol...

18 | Exploiting qualitative knowledge in the learning of conditional probabilities of Bayesian networks
- Wittig, Jameson
- 2000

Context: ...rts. For instance, the influence of one variable on another, or the combined influence of multiple variables on another variable [5] leads to linear inequalities on the parameters. Wittig and Jameson [18] explain how to transform the likelihood of violating qualitative constraints into a penalty term to adjust maximum likelihood, which allows gradient ascent and Expectation Maximization (EM) to take i...
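The penalty-term idea in this snippet can be sketched as a penalized objective: subtract from the log-likelihood a term proportional to how badly a qualitative constraint is violated. The hinge form and the `weight` value below are illustrative assumptions, not the cited authors' exact formulation.

```python
def penalized_log_likelihood(log_lik, p_hi, p_lo, weight=10.0):
    """Penalize violation of the qualitative constraint p_hi >= p_lo.
    Zero penalty when the constraint holds, so the optimum is unchanged
    for parameter settings that already satisfy it."""
    violation = max(0.0, p_lo - p_hi)  # amount by which the constraint fails
    return log_lik - weight * violation

obeys = penalized_log_likelihood(-5.0, p_hi=0.7, p_lo=0.3)   # no penalty: -5.0
breaks = penalized_log_likelihood(-5.0, p_hi=0.2, p_lo=0.6)  # -5.0 - 10 * 0.4 = -9.0
```

Because the penalty is an additive term, a gradient-ascent or EM update can use it directly, which is the point the snippet makes.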

15 | Learning Bayesian network parameters under incomplete data with domain knowledge
- Liao, Ji

Context: ...a technique based on constrained optimization with closed-form solutions for different classes of constraints. Feelders [6] provides an alternate method based on isotonic regression while Liao and Ji [12] combine gradient descent with EM. de Campos and Ji [4] also use constrained convex optimization; however, they use Dirichlet priors on the parameters to incorporate any additional knowledge. Mao and ...

11 | Learning From What You Don’t Observe
- Peot, Shachter
- 1998

Context: ...ous work is on the type of constraints. Our constraints do not need to be explicitly specified by an expert. Instead, we passively observe the expert and learn from what choices are made and not made [16]. Furthermore, as we shall show later, our constraints are non-convex, preventing the direct application of existing techniques that assume linear or convex functions. We use Beta priors on the parame...

4 | Evaluation results for a query-based diagnostics application
- Agosta, Khan, et al.
- 2010

Context: ...for easing the bottleneck grew out of the realization that the best time to gain an expert’s insight into the model structure is during the diagnostic process. Recent work in “Query-Based Diagnostics” [1] demonstrated a way to improve model quality by merging model use and model building into a single process. More precisely, the expert can take steps to modify the network structure to add or remove no...

4 | Fast information value for graphical models
- Anderson, Moore
- 2005

Context: ...ires a model equivalent to a partially observable Markov decision process. Instead, VOI is commonly approximated by a greedy computation of the Mutual Information between a test and the set of causes [3]. In this case, it is easy to show that Mutual Information is in turn well approximated to second order by the Gini impurity [7], as shown in Equation 1: GI(C|T) = ∑_t Pr(T = t) [ ∑_c Pr(C = c|T = t) (1 − Pr(C = c|T = t)) ]...
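The greedy Mutual Information computation referenced in this snippet can be sketched directly from a joint table over one test and the causes; the joint distributions below are hypothetical examples.

```python
from math import log2

def mutual_information(joint):
    """I(C; T) in bits from a joint table joint[c][t] = Pr(C=c, T=t)."""
    p_c = [sum(row) for row in joint]        # marginal over causes
    p_t = [sum(col) for col in zip(*joint)]  # marginal over test outcomes
    mi = 0.0
    for c, row in enumerate(joint):
        for t, p in enumerate(row):
            if p > 0:
                mi += p * log2(p / (p_c[c] * p_t[t]))
    return mi

# An independent test carries no information about the cause:
mi_indep = mutual_information([[0.25, 0.25], [0.25, 0.25]])  # 0.0
# A perfectly informative test identifies the cause exactly:
mi_perfect = mutual_information([[0.5, 0.0], [0.0, 0.5]])    # 1.0 bit
```

Ranking each candidate test by this one-step quantity is the greedy approximation to full VOI that the snippet describes.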

4 | A new parameter learning method for Bayesian networks with qualitative influences
- Feelders
- 2007

Context: ... convex optimization for parameter learning. Niculescu et al. [15] also provide a technique based on constrained optimization with closed-form solutions for different classes of constraints. Feelders [6] provides an alternate method based on isotonic regression while Liao and Ji [12] combine gradient descent with EM. de Campos and Ji [4] also use constrained convex optimization; however, they use Dir...

4 | Domain knowledge uncertainty and probabilistic parameter constraints
- Mao, Lebanon
- 2009

Context: ... gradient descent with EM. de Campos and Ji [4] also use constrained convex optimization; however, they use Dirichlet priors on the parameters to incorporate any additional knowledge. Mao and Lebanon [13] also use Dirichlet priors, but they use probabilistic constraints to allow inaccuracies in the specification of the constraints. A major difference between our technique and previous work is on the t...

2 | Improving Bayesian network parameter learning using constraints
- de Campos, Ji
- 2008

Context: ...d form solutions for different classes of constraints. Feelders [6] provides an alternate method based on isotonic regression while Liao and Ji [12] combine gradient descent with EM. de Campos and Ji [4] also use constrained convex optimization; however, they use Dirichlet priors on the parameters to incorporate any additional knowledge. Mao and Lebanon [13] also use Dirichlet priors, but they use pr...

1 | A procedure to test the suitability of a factor for stratification in estimating diversity
- Gil, Gil
- 1991

Context: ...putation of the Mutual Information between a test and the set of causes [3]. In this case, it is easy to show that Mutual Information is in turn well approximated to second order by the Gini impurity [7], as shown in Equation 1:

GI(C|T) = ∑_t Pr(T = t) [ ∑_c Pr(C = c|T = t) (1 − Pr(C = c|T = t)) ]    (1)

We will use the Gini measure as a surrogate for VOI, as a way to rank the best next test in the diagn...
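The Gini measure in this snippet is straightforward to compute from the test marginal and the cause posteriors; a minimal sketch, with the distributions below chosen purely for illustration.

```python
def gini_impurity(p_t, p_c_given_t):
    """GI(C|T) = sum_t Pr(T=t) * sum_c Pr(C=c|T=t) * (1 - Pr(C=c|T=t)),
    the second-order surrogate for value of information in Equation 1.
    p_t[t] is Pr(T=t); p_c_given_t[t][c] is Pr(C=c | T=t)."""
    gi = 0.0
    for t, pt in enumerate(p_t):
        inner = sum(p * (1.0 - p) for p in p_c_given_t[t])
        gi += pt * inner
    return gi

# A test that leaves two causes equally likely either way is uninformative:
gi_bad = gini_impurity([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]])   # 0.5
# A test that pins down the cause drives the impurity to zero:
gi_good = gini_impurity([0.5, 0.5], [[1.0, 0.0], [0.0, 1.0]])  # 0.0
```

Lower impurity means the test better separates the causes, so ranking candidate tests by ascending GI(C|T) gives the greedy "best next test" ordering the snippet describes.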