Results 1 - 7 of 7
Learning Module Networks, 2003
Cited by 44 (4 self)
Methods for learning Bayesian networks can discover dependency structure between observed variables. Although these methods are useful in many applications, they run into computational and statistical problems in domains that involve a large number of variables. In this paper, we ...
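The structure-discovery problem this abstract refers to is often attacked with score-based search: for each variable, candidate parent sets are scored (e.g. by log-likelihood with a BIC complexity penalty) and the best-scoring set is kept. The following is a minimal, generic sketch of that local step, not code from the paper; all function names and the toy data are invented for illustration.

```python
# Sketch of score-based parent selection for one variable in a discrete
# Bayesian network. Rows of `data` are dicts: variable name -> value.
from collections import Counter
from itertools import combinations
from math import log

def bic_score(data, child, parents):
    """Log-likelihood of `child` given `parents`, minus a BIC penalty."""
    n = len(data)
    # Joint counts of (parent configuration, child value) and marginal
    # counts of each parent configuration.
    joint = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    marg = Counter(tuple(row[p] for p in parents) for row in data)
    ll = sum(c * log(c / marg[pa]) for (pa, _), c in joint.items())
    child_states = len({row[child] for row in data})
    params = (child_states - 1) * max(len(marg), 1)
    return ll - 0.5 * params * log(n)

def best_parents(data, child, candidates, max_parents=2):
    """Exhaustively score all small parent sets and return the best one."""
    options = [()] + [ps for k in range(1, max_parents + 1)
                      for ps in combinations(candidates, k)]
    return max(options, key=lambda ps: bic_score(data, child, ps))
```

The exhaustive enumeration here is exactly what becomes infeasible with many variables, which motivates the module-based decomposition the paper proposes.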
MEBN: A Logic for Open-World Probabilistic Reasoning, 2004
Cited by 20 (8 self)
Uncertainty is a fundamental and irreducible aspect of our knowledge about the world. Probability is the most well-understood and widely applied logic for computational scientific reasoning under uncertainty. As theory and practice advance, general-purpose languages are beginning to emerge for which the fundamental logical basis is probability. However, such languages have lacked a logical foundation that fully integrates classical first-order logic with probability theory. This paper presents such an integrated logical foundation. A formal specification is presented for multi-entity Bayesian networks (MEBN), a knowledge representation language based on directed graphical probability models. A proof is given that a probability distribution over interpretations of any consistent, finitely axiomatizable first-order theory can be defined using MEBN. A semantics based on random variables provides a logically coherent foundation for open-world reasoning and a means of analyzing trade-offs between accuracy and computation cost. Furthermore, the underlying Bayesian logic is inherently open, having the ability to absorb new facts about the world, incorporate them into existing theories, and/or modify theories in the light of evidence. Bayesian inference provides both a proof theory for combining prior knowledge with observations, and a learning theory for refining a representation as evidence accrues. The results of this paper provide a logical foundation for the rapidly evolving literature on first-order Bayesian knowledge representation, and point the way toward Bayesian languages suitable for general-purpose knowledge representation and computing. Because first-order Bayesian logic contains classical first-order logic as a deterministic subset, it is a natural candidate as a universal representation for integrating domain ontologies expressed in languages based on classical first-order logic or subsets thereof.
Multi-Entity Bayesian Networks Without Multi-Tears
Cited by 8 (6 self)
An introduction is provided to Multi-Entity Bayesian Networks (MEBN), a logic system that integrates First Order Logic (FOL) with Bayesian probability theory. MEBN extends ordinary Bayesian networks to allow representation of graphical models with repeated substructures. Knowledge is encoded as a collection of Bayesian network fragments (MFrags) that can be instantiated and combined to form highly complex situation-specific Bayesian networks. A MEBN theory (MTheory) implicitly represents a joint probability distribution over possibly unbounded numbers of hypotheses, and uses Bayesian learning to refine a knowledge base as observations accrue. MEBN provides a logical foundation for the emerging collection of highly expressive probability-based languages. A running example illustrates the representation and reasoning power of the MEBN formalism.
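The core mechanism described here, instantiating parameterized fragments for concrete entities and merging them into a situation-specific network, can be sketched very simply. The node labels below echo the starship running example used in the MEBN literature, but the exact labels, placeholder syntax, and helper names are invented for illustration:

```python
# Sketch: instantiate an MFrag-style edge template for concrete entities
# and union the results into one situation-specific graph.
def instantiate(fragment_edges, binding):
    """Substitute entity names for placeholders like '$s' in node labels."""
    def sub(label):
        for var, ent in binding.items():
            label = label.replace(var, ent)
        return label
    return [(sub(u), sub(v)) for u, v in fragment_edges]

# Toy fragment: a starship's HarmPotential depends on whether it is hostile.
frag = [("IsHostile($s)", "HarmPotential($s)")]

# Build the situation-specific edge set for two entities of interest.
ssbn = set()
for ship in ["Enterprise", "Romulan1"]:
    ssbn.update(instantiate(frag, {"$s": ship}))
```

A real MEBN implementation would also carry local distributions and context constraints with each fragment; this sketch shows only the graph-assembly step.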
First-Order Bayesian Logic, 2005
Cited by 8 (3 self)
Uncertainty is a fundamental and irreducible aspect of our knowledge about the world. Until recently, classical first-order logic has reigned as the de facto standard logical foundation for artificial intelligence. The lack of a built-in, semantically grounded capability for reasoning under uncertainty renders classical first-order logic inadequate for many important classes of problems. General-purpose languages are beginning to emerge for which the fundamental logical basis is probability. Increasingly expressive probabilistic languages demand a theoretical foundation that fully integrates classical first-order logic and probability. In first-order Bayesian logic (FOBL), probability distributions are defined over interpretations of classical first-order axiom systems. Predicates and functions of a classical first-order theory correspond to random variables in the corresponding first-order Bayesian theory. This is a natural correspondence, given that random variables are formalized in mathematical statistics as measurable functions on a probability space. A formal system called Multi-Entity Bayesian Networks (MEBN) is presented for composing distributions on interpretations by instantiating and combining parameterized fragments of directed graphical models. A construction is given of a MEBN theory that assigns a nonzero ...
Knowledge and Data Fusion in Probabilistic Networks, 2003
Cited by 3 (0 self)
Intelligent systems use internal representations to mediate the transformation from percepts to goal-directed actions. Intelligent learning agents use environmental feedback to modify their internal representations to improve performance over time and adapt to changing circumstances. All learning involves knowledge-data fusion to some degree. Bayesian learning, the focus of this paper, is specifically designed to incorporate both expert knowledge and observations. We use the term "data" to refer both to collections of cases and to statements about the domain provided by experts and knowledge engineers and used to construct internal representations. The term "knowledge" refers to the internal representation itself, which we take to be a collection of Bayesian network fragments. We describe a prequential learning agent architecture for bounded rational action and learning under uncertainty. We describe recent extensions to Bayesian networks that provide sufficient representation power for expressing general prequential learning agent models. We describe tools and techniques to support a process in which models are constructed and refined using a combination of inputs from experts and environmental feedback. KEY WORDS: Bayesian Networks, Bayesian Learning, Graphical Probabilistic Models, Knowledge Elicitation, Object Oriented Bayesian Networks, Prequential Probability, Machine Learning, MCMC.
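The prequential knowledge-data fusion idea the abstract describes can be illustrated in miniature with conjugate Bayesian updating: expert knowledge enters as a prior, each observation is first predicted and then absorbed into the posterior. This Beta-Bernoulli sketch is a generic illustration, not the paper's architecture; the function name is invented.

```python
# Sketch of prequential updating: predict each observation before seeing
# it, then fold it into a Beta(alpha, beta) posterior over a binary event.
def prequential_updates(alpha, beta, observations):
    """Return the sequence of predictive P(next = 1) values, one emitted
    before each observation, plus the final (alpha, beta) posterior."""
    preds = []
    for x in observations:
        preds.append(alpha / (alpha + beta))  # predictive probability
        if x:
            alpha += 1  # success observed
        else:
            beta += 1   # failure observed
    return preds, (alpha, beta)
```

The sequence of predictive probabilities is exactly what prequential (predictive-sequential) evaluation scores a model on, which is how environmental feedback drives refinement in this setting.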
Quantum Physical Symbol Systems, 2003
Cited by 1 (0 self)
Today's theories of computing and machine learning developed within a nineteenth-century mechanistic mindset. Although digital computers would be impossible without quantum physics, their physical and logical architecture is based on the view of a computer as an automaton executing pre-programmed sequences of operations exactly as instructed. Recent innovations in representations and algorithms suggest that a shift in viewpoint may be occurring. In the newly emerging view, a computer program executes a stochastic process that transforms inputs and internal state into a sequence of trial solutions, as it evolves toward an improved world model and better task performance. A full realization of this vision requires a new logic for computing that incorporates learning from experience as an intrinsic part of the logic, and that permits full exploitation of the quantum nature of the physical world. Knowledge representation languages based on graphical probability and decision models have now attained sufficient expressive power to support general computing applications. At the same time, research is progressing rapidly on hardware and software architectures for quantum computing. It is hypothesized that a sufficiently expressive probabilistic logic executing on quantum hardware could perform Bayesian learning and decision-theoretic reasoning with efficiency far surpassing that of classical computers. Moreover, a ...
Ontology-based generation of Object Oriented Bayesian Networks
Probabilistic Graphical Models (PGMs) are powerful tools for representing and reasoning under uncertainty. Although useful in several domains, PGMs suffer from their building phase, known to be mostly an NP-hard problem, which can limit their application to some extent, especially in real-world settings. Ontologies, for their part, provide a body of structured knowledge characterized by its semantic richness. This paper proposes to harness the representation capabilities of ontologies in order to enrich the process of building PGMs. We are in particular interested in object-oriented Bayesian networks (OOBNs), which are an extension of standard Bayesian networks (BNs) using the object paradigm. We show how the semantic richness of ontologies might be a potential solution to the challenging problem of structural learning of OOBNs while minimizing expert involvement, which is not always easy to obtain. More precisely, we propose a set of mapping rules allowing us to generate a prior OOBN structure by morphing an ontology related to the problem under study, to be used as a starting point for the global OOBN building algorithm.
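The morphing step described here amounts to a set of mapping rules from ontology constructs to OOBN constructs. A common convention, used purely for illustration (the rule set, data layout, and function name below are invented, not the paper's actual rules), maps classes to network classes, datatype properties to internal nodes, and object properties to instance links between network classes:

```python
# Hypothetical mapping-rule sketch: morph a toy ontology into a prior
# OOBN skeleton. The ontology is a dict: class name -> its properties.
def morph(ontology):
    """Rule 1: each class becomes an OOBN network class.
    Rule 2: each datatype property becomes an internal node.
    Rule 3: each object property becomes an instance link to the
    network class of its range."""
    oobn = {}
    for cls, props in ontology.items():
        oobn[cls] = {
            "nodes": list(props.get("datatype", [])),
            "instances": [(p, rng) for p, rng in props.get("object", [])],
        }
    return oobn
```

Such a skeleton fixes only structure; local probability tables would still come from data or experts, which is exactly why the generated structure serves as a starting point rather than a finished model.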