Results 1–4 of 4
Constraint relaxation for learning the structure of Bayesian networks, 2009
Abstract

Cited by 1 (1 self)
This paper introduces constraint relaxation, a new strategy for learning the structure of Bayesian networks. Constraint relaxation identifies and “relaxes” possibly inaccurate independence constraints on the structure of the model. We describe a heuristic algorithm for constraint relaxation that combines greedy search in the space of undirected skeletons with edge orientation based on the constraints. This approach produces significant improvements in the structural accuracy of the learned models compared to four well-known structure learning algorithms in an empirical evaluation using data sampled from both real-world and randomly generated networks.
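The greedy search over undirected skeletons that this abstract mentions can be sketched roughly as hill-climbing that toggles one edge at a time. This is a minimal illustration, not the authors' algorithm; the `score` function here is a hypothetical caller-supplied objective over edge sets.

```python
from itertools import combinations

def greedy_skeleton(variables, score):
    """Greedy hill-climbing in the space of undirected skeletons.

    Starting from the empty graph, repeatedly toggle (add or remove)
    the single edge that improves `score(edges)`, stopping when no
    toggle helps. `score` maps a frozenset of undirected edges
    (each a frozenset of two variables) to a number.
    """
    edges = frozenset()
    candidates = [frozenset({a, b}) for a, b in combinations(variables, 2)]
    current = score(edges)
    improved = True
    while improved:
        improved = False
        for e in candidates:
            # Toggle edge e: remove it if present, add it otherwise.
            trial = edges - {e} if e in edges else edges | {e}
            s = score(trial)
            if s > current:
                edges, current, improved = trial, s, True
    return edges
```

In a real structure learner the score would come from data (e.g. a penalized likelihood); a second phase would then orient the skeleton's edges using the independence constraints, as the abstract describes.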
Conservative Independence-Based Causal Structure Learning in Absence of Adjacency Faithfulness
Abstract

Cited by 1 (1 self)
This paper presents an extension to the Conservative PC algorithm which is able to detect violations of adjacency faithfulness under causal sufficiency and triangle faithfulness. Violations can be characterized by pseudo-independent relations and equivalent edges, both generating a pattern of conditional independencies that cannot be modeled faithfully. Both cases lead to uncertainty about specific parts of the skeleton of the causal graph. These ambiguities are modeled by an f-pattern. We prove that our Adjacency Conservative PC algorithm is able to correctly learn the f-pattern. We argue that the solution also applies for the finite sample case if we accept that only strong edges can be identified. Experiments based on simulations and the ALARM benchmark model show that the rate of false edge removals is significantly reduced, at the expense of uncertainty on the skeleton and a higher sensitivity for accidental correlations.
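The conservative orientation rule that the extended algorithm builds on can be sketched as follows. This is a simplified illustration of the Conservative PC idea, not the authors' extension; the inputs are hypothetical (`skeleton` maps each node to its set of neighbours, `sepsets` maps an unordered non-adjacent pair to the list of all conditioning sets that rendered the pair independent).

```python
def classify_triples(skeleton, sepsets):
    """Conservative classification of unshielded triples (CPC-style).

    For each unshielded triple X - Z - Y (X and Y non-adjacent,
    both adjacent to Z): if Z appears in no separating set of (X, Y)
    the triple is oriented as a collider X -> Z <- Y; if Z appears
    in every separating set it is a non-collider; otherwise the
    evidence is contradictory and the triple is marked ambiguous.
    """
    labels = {}
    for z, nbrs in skeleton.items():
        for x in nbrs:
            for y in nbrs:
                if x >= y or y in skeleton[x]:
                    continue  # one order per pair; skip shielded triples
                seps = sepsets.get(frozenset({x, y}), [])
                in_sep = [z in s for s in seps]
                if not any(in_sep):
                    labels[(x, z, y)] = "collider"
                elif all(in_sep):
                    labels[(x, z, y)] = "non-collider"
                else:
                    labels[(x, z, y)] = "ambiguous"
    return labels
```

Marking contradictory triples as ambiguous, rather than forcing an orientation, is what makes the approach "conservative": uncertainty is recorded in the output pattern instead of being resolved by a possibly wrong guess.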
Improving Accuracy of Constraint-Based Structure Learning
Abstract
Hybrid algorithms for learning the structure of Bayesian networks combine techniques from both the constraint-based and search-and-score paradigms of structure learning. One class of hybrid approaches uses a constraint-based algorithm to learn an undirected skeleton identifying edges that should appear in the final network. This skeleton is used to constrain the model space considered by a search-and-score algorithm to orient the edges and produce a final model structure. At small sample sizes, models learned using this hybrid approach do not achieve likelihood as high as models learned by unconstrained search. Low performance is a result of errors made by the skeleton identification algorithm, particularly false negative errors, which lead to an over-constrained search space. These errors are often attributed to “noisy” hypothesis tests that are run during skeleton identification. However, at least three specific sources of error have been identified in the literature: unsuitable hypothesis tests, low-power hypothesis tests, and unexplained d-separation. No previous work has considered these sources of error in combination. We determine the relative importance of each source individually and in combination. We identify that low-power tests are the primary source of false negative errors, and show that these errors can be corrected by a novel application of statistical power analysis. The result is a new hybrid algorithm for learning the structure of Bayesian networks which produces models with equivalent likelihood to models produced by unconstrained greedy search, using only a fraction of the time.
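The kind of statistical power analysis the abstract alludes to can be illustrated on a simple Fisher-z correlation test (this is a generic textbook calculation, not the authors' procedure): given an assumed true correlation `r`, the test statistic sqrt(n-3)·atanh(r̂) is approximately standard normal under independence, so one can compute the power of the test at sample size `n`, or conversely the smallest `n` that reaches a target power.

```python
import math

def fisher_z_power(r, n, alpha=0.05):
    """Power of a two-sided Fisher-z test of zero correlation.

    Under the alternative that the true correlation is r, the statistic
    sqrt(n-3)*atanh(r_hat) is approximately normal with mean
    sqrt(n-3)*atanh(r) and unit variance; power is the probability of
    landing outside the two-sided rejection region.
    """
    def phi(x):  # standard normal CDF via the error function
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def z_quantile(p):  # inverse normal CDF by bisection (stdlib only)
        lo, hi = -10.0, 10.0
        for _ in range(100):
            mid = (lo + hi) / 2.0
            if phi(mid) < p:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2.0

    z_crit = z_quantile(1.0 - alpha / 2.0)
    mean = math.sqrt(n - 3) * math.atanh(r)
    return (1.0 - phi(z_crit - mean)) + phi(-z_crit - mean)

def min_samples_for_power(r, target_power=0.8, alpha=0.05):
    """Smallest sample size whose test reaches `target_power` against r."""
    n = 4
    while fisher_z_power(r, n, alpha) < target_power:
        n += 1
    return n
```

A skeleton-learning algorithm could use a calculation like this to flag independence tests whose power is too low for the available sample, rather than trusting their non-rejections and dropping edges (the false negatives the abstract identifies).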