Results 1–10 of 53
Mean Field Theory for Sigmoid Belief Networks
Journal of Artificial Intelligence Research, 1996
Cited by 147 (13 self)
We develop a mean field theory for sigmoid belief networks based on ideas from statistical mechanics.
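The fixed-point character of such mean-field equations can be sketched as a damped iteration mu <- sigma(W mu + b) on the unit means. This is a simplified illustration only, with arbitrary random weights and a strictly lower-triangular W standing in for a DAG ordering; it omits the correction terms the full theory derives.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field_update(W, b, mu, n_iters=50):
    """Damped fixed-point iteration mu <- sigma(W @ mu + b).

    A simplified mean-field scheme for a network of binary units with
    weight matrix W and biases b; the correction terms of the full
    theory are omitted here.
    """
    for _ in range(n_iters):
        mu = 0.5 * mu + 0.5 * sigmoid(W @ mu + b)  # damping aids convergence
    return mu

rng = np.random.default_rng(0)
n = 8
# Strictly lower-triangular W: edges respect a topological (DAG) ordering.
W = np.tril(rng.normal(scale=0.5, size=(n, n)), k=-1)
b = rng.normal(scale=0.1, size=n)
mu = mean_field_update(W, b, np.full(n, 0.5))
print(np.round(mu, 3))  # approximate unit means, all strictly in (0, 1)
```

Because every update is a convex mix of the previous estimate and a sigmoid output, the iterates stay inside the unit interval, which is what a mean (probability of a binary unit being on) requires.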
Interpreting Bayesian Logic Programs
Proceedings of the Work-in-Progress Track at the 10th International Conference on Inductive Logic Programming, 2001
Cited by 129 (8 self)
Various proposals for combining first-order logic with Bayesian nets exist. We introduce the formalism of Bayesian logic programs, which is basically a simplification and reformulation of Ngo and Haddawy's probabilistic logic programs. However, Bayesian logic programs are sufficiently powerful to represent essentially the same knowledge in a more elegant manner. The elegance is illustrated by the fact that they can represent both Bayesian nets and definite clause programs (as in "pure" Prolog) and that their kernel in Prolog is actually an adaptation of a usual Prolog meta-interpreter.
Modeling relations and their mentions without labeled text
In Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases: Part III, 2010
Cited by 70 (3 self)
Several recent works on relation extraction have been applying the distant supervision paradigm: instead of relying on annotated text to learn how to predict relations, they employ existing knowledge bases (KBs) as a source of supervision. Crucially, these approaches are trained on the assumption that each sentence which mentions the two related entities is an expression of the given relation. Here we argue that this leads to noisy patterns that hurt precision, in particular if the knowledge base is not directly related to the text we are working with. We present a novel approach to distant supervision that can alleviate this problem based on the following two ideas: first, we use a factor graph to explicitly model the decision whether two entities are related, and the decision whether this relation is mentioned in a given sentence; second, we apply constraint-driven semi-supervision to train this model without any knowledge about which sentences express the relations in our training KB. We apply our approach to extract relations from the New York Times corpus and use Freebase as knowledge base. When compared to a state-of-the-art approach for relation extraction under distant supervision, we achieve 31% error reduction.
Graphical Models for Genetic Analyses
Statistical Science, 2003
Cited by 36 (2 self)
This paper introduces graphical models as a natural environment in which to formulate and solve problems in genetics and related areas. Particular emphasis is given to the relationships among various local computation algorithms which have been developed within the hitherto mostly separate areas of graphical models and genetics. The potential of graphical models is explored and illustrated through a number of example applications where the genetic element is substantial or dominating.
Decomposing Bayesian Networks: Triangulation of Moral Graph with Genetic Algorithms
Statistics and Computing, 1997
Cited by 27 (4 self)
In this paper we consider the optimal decomposition of Bayesian networks. More concretely, we examine, empirically, the applicability of genetic algorithms to the problem of the triangulation of moral graphs. This problem constitutes the only difficult step in the evidence propagation algorithm of Lauritzen and Spiegelhalter (1988) and is known to be NP-hard (Wen, 1991). We carry out experiments with distinct crossover and mutation operators and with different population sizes, mutation rates and selection biases. The results are analyzed statistically. They turn out to improve the results obtained with most other known triangulation methods (Kjaerulff, 1990) and are comparable to the ones obtained with simulated annealing (Kjaerulff, 1990; Kjaerulff, 1992). Keywords: Bayesian networks, genetic algorithms, optimal decomposition, graph triangulation, moral graph, NP-hard problems, statistical analysis.
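The setup described in the abstract can be illustrated with a generic genetic algorithm over elimination orderings, scoring each ordering by the number of fill-in edges it induces on the moral graph. The order crossover and swap mutation below are standard permutation operators, not necessarily the ones the paper evaluates, and all population parameters are arbitrary.

```python
import random

def fill_in_count(adj, order):
    """Fill-in edges added when eliminating nodes in `order` from an
    undirected (moral) graph given as {node: set of neighbours}."""
    g = {v: set(nb) for v, nb in adj.items()}
    fills = 0
    for v in order:
        nbrs = list(g[v])
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if b not in g[a]:          # connect v's neighbours pairwise
                    g[a].add(b)
                    g[b].add(a)
                    fills += 1
        for u in nbrs:                     # remove v from the graph
            g[u].discard(v)
        del g[v]
    return fills

def order_crossover(p1, p2):
    """OX-style crossover: copy a slice of p1, fill the rest in p2's order."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = p1[i:j]
    child += [v for v in p2 if v not in child]
    return child

def ga_triangulate(adj, pop_size=30, gens=40, mut_rate=0.2):
    """Minimise fill-ins over elimination orderings with a simple GA."""
    nodes = list(adj)
    pop = [random.sample(nodes, len(nodes)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda o: fill_in_count(adj, o))
        survivors = pop[:pop_size // 2]    # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            c = order_crossover(a, b)
            if random.random() < mut_rate:  # swap mutation
                i, j = random.sample(range(len(c)), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = survivors + children
    best = min(pop, key=lambda o: fill_in_count(adj, o))
    return best, fill_in_count(adj, best)

adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}  # moral graph: a 4-cycle
best_order, cost = ga_triangulate(adj)
print(cost)  # 1: any elimination ordering of a 4-cycle adds one fill-in edge
```

On the 4-cycle the first elimination always joins two non-adjacent neighbours (one fill-in) and leaves a triangle, so the optimum of 1 is reached regardless of the ordering found.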
Probabilistic expert systems for forensic inference from genetic markers
Scandinavian Journal of Statistics, 2002
Cited by 23 (7 self)
We present a number of real and fictitious examples in illustration of a new approach to analysing complex cases of forensic identification inference. This is effected by careful restructuring of the relevant pedigrees as a Probabilistic Expert System. Existing software can then be used to perform the required inferential calculations. Specific complications which are readily handled by this approach include missing data on one or more relevant individuals, and genetic mutation. The method is particularly valuable for disputed paternity cases, but applies also to certain criminal cases.
Addressing the concerns of the Lacks Family: Quantification of kin genomic privacy
In Proceedings of the 20th ACM Conference on Computer and Communications Security (CCS), 2013
Cited by 23 (9 self)
The rapid progress in human-genome sequencing is leading to a high availability of genomic data. This data is notoriously sensitive, stable over time, and highly correlated among relatives. A growing number of genomes are becoming accessible online (e.g., because of leakage, or after their posting on genome-sharing websites). What, then, are the implications for kin genomic privacy? We formalize the problem and detail an efficient reconstruction attack based on graphical models and belief propagation. With this approach, an attacker can infer the genomes of the relatives of an individual whose genome is observed, relying notably on Mendel's laws and on statistical relationships between the nucleotides of the DNA sequence. Then, to quantify the level of genomic privacy under the proposed inference attack, we discuss possible definitions of genomic privacy metrics. Genomic data reveals Mendelian diseases and the likelihood of developing degenerative diseases such as Alzheimer's. We also introduce the quantification of health privacy, specifically a measure of how well the predisposition to a disease is concealed from an attacker. We evaluate our approach on actual genomic data from a pedigree and show the extent of the threat by combining data gathered from a genome-sharing website and from an online social network.
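The Mendelian backbone of such a reconstruction can be illustrated for a single biallelic locus: given (distributions over) the parents' genotypes, the child's genotype marginal follows from the transmission probabilities. This toy sketch covers one parent–child factor only; the paper's attack runs belief propagation over a whole pedigree and also exploits statistical dependence between nucleotides.

```python
def child_genotype_dist(p_mother, p_father):
    """Distribution over a child's genotype at one biallelic locus.

    Genotypes are coded 0/1/2 = count of the minor allele; each parent
    distribution is a dict {0, 1, 2} -> probability. By Mendel's first
    law, a parent with genotype g transmits the minor allele with
    probability g / 2.
    """
    out = {0: 0.0, 1: 0.0, 2: 0.0}
    for gm, pm in p_mother.items():
        for gf, pf in p_father.items():
            tm, tf = gm / 2.0, gf / 2.0   # P(transmit minor allele)
            out[2] += pm * pf * tm * tf
            out[1] += pm * pf * (tm * (1 - tf) + (1 - tm) * tf)
            out[0] += pm * pf * (1 - tm) * (1 - tf)
    return out

# Observed genomes pin the parents down; the child's marginal follows.
mother = {1: 1.0}   # heterozygous, observed
father = {2: 1.0}   # homozygous minor, observed
print(child_genotype_dist(mother, father))  # {0: 0.0, 1: 0.5, 2: 0.5}
```

Chaining such factors through a pedigree (and reversing them with Bayes' rule for parents of observed children) is exactly the kind of message passing that belief propagation automates.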
Cutset sampling for Bayesian networks
 Journal of Artificial Intelligence Research
Cited by 22 (7 self)
The paper presents a new sampling methodology for Bayesian networks that samples only a subset of variables and applies exact inference to the rest. Cutset sampling is a network structure-exploiting application of the Rao-Blackwellisation principle to sampling in Bayesian networks. It improves convergence by exploiting memory-based inference algorithms. It can also be viewed as an anytime approximation of the exact cutset-conditioning algorithm developed by Pearl. Cutset sampling can be implemented efficiently when the sampled variables constitute a loop-cutset of the Bayesian network and, more generally, when the induced width of the network's graph conditioned on the observed sampled variables is bounded by a constant w. We demonstrate empirically the benefit of this scheme on a range of benchmarks.
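The idea can be sketched on a toy diamond network A -> B, A -> C, {B, C} -> D, where {A} is a loop-cutset: sample A, then do exact inference (here, plain enumeration) over the remaining polytree. All CPT values below are arbitrary, and sampling A from its prior is a simplification of the Gibbs-over-the-cutset scheme the paper develops.

```python
import random
import itertools

# Toy diamond network over binary variables: A -> B, A -> C, {B, C} -> D.
pA1 = 0.6                                  # P(A = 1)
pB1 = {1: 0.7, 0: 0.2}                     # P(B = 1 | A = a)
pC1 = {1: 0.3, 0: 0.8}                     # P(C = 1 | A = a)
pD1 = {(1, 1): 0.9, (1, 0): 0.1,
       (0, 1): 0.1, (0, 0): 0.1}           # P(D = 1 | B = b, C = c)

def exact_pD_given_a(a):
    """Conditioned on the cutset {A}, the rest is a polytree; for this
    tiny example we simply enumerate B and C exactly."""
    total = 0.0
    for b, c in itertools.product((0, 1), repeat=2):
        pb = pB1[a] if b else 1 - pB1[a]
        pc = pC1[a] if c else 1 - pC1[a]
        total += pb * pc * pD1[(b, c)]
    return total

def cutset_sample_pD(n_samples=5000, seed=0):
    """Rao-Blackwellised estimate of P(D = 1): sample the cutset
    variable A, average the exact conditional for the rest."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        a = 1 if rng.random() < pA1 else 0  # sample the cutset variable
        acc += exact_pD_given_a(a)          # exact inference given A = a
    return acc / n_samples

exact = pA1 * exact_pD_given_a(1) + (1 - pA1) * exact_pD_given_a(0)
print(round(exact, 4), round(cutset_sample_pD(), 4))
```

Because each sample contributes an exact conditional expectation rather than a single 0/1 outcome, the estimator's variance can only shrink relative to sampling all variables, which is the Rao-Blackwellisation argument in miniature.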
Cycle-Cutset Sampling for Bayesian Networks
In The Canadian AI Conference (CAAI'03), 2003
Cited by 17 (7 self)
The paper presents a new sampling methodology for Bayesian networks called cutset sampling that samples only a subset of the variables and applies exact inference for the others. We show that this approach can be implemented efficiently when the sampled variables constitute a cycle-cutset for the Bayesian network; otherwise it is exponential in the induced width of the network's graph with the sampled variables removed. Cutset sampling is an instance of the well-known Rao-Blackwellisation technique for variance reduction investigated in [5, 2, 16]. Moreover, the proposed scheme extends standard sampling methods to non-ergodic networks with ergodic subspaces. Our empirical results confirm these expectations and show that cycle-cutset sampling is superior to Gibbs sampling for a variety of benchmarks, yielding a simple, yet powerful sampling scheme.