Results 1–10 of 31
MEBN: A Language for First-Order Bayesian Knowledge Bases
Abstract

Cited by 47 (19 self)
Although classical first-order logic is the de facto standard logical foundation for artificial intelligence, the lack of a built-in, semantically grounded capability for reasoning under uncertainty renders it inadequate for many important classes of problems. Probability is the best-understood and most widely applied formalism for computational scientific reasoning under uncertainty. Increasingly expressive languages are emerging for which the fundamental logical basis is probability. This paper presents Multi-Entity Bayesian Networks (MEBN), a first-order language for specifying probabilistic knowledge bases as parameterized fragments of Bayesian networks. MEBN fragments (MFrags) can be instantiated and combined to form arbitrarily complex graphical probability models. An MFrag represents probabilistic relationships among a conceptually meaningful group of uncertain hypotheses. Thus, MEBN facilitates representation of knowledge at a natural level of granularity. The semantics of MEBN assigns a probability distribution over interpretations of an associated classical first-order theory on a finite or countably infinite domain. Bayesian inference provides both a proof theory for combining prior knowledge with observations, and a learning theory for refining a representation as evidence accrues. A proof is given that MEBN can represent a probability distribution on interpretations of any finitely axiomatizable first-order theory.
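The core MEBN idea in this abstract, parameterized fragments that are grounded over concrete entities and merged into one graphical model, can be sketched in a few lines. This is a hypothetical illustration only: the class, the `Smokes`/`Cancer` templates, and the entity names are invented for the example and do not come from the paper.

```python
from itertools import product

# Illustrative sketch (not the paper's formalism): an MFrag is a BN
# fragment whose node names are parameterized by entity variables;
# instantiating it substitutes concrete entities, and the grounded
# edges from all instances form one combined DAG.

class MFrag:
    def __init__(self, variables, edges):
        self.variables = variables          # entity variables, e.g. ["e"]
        self.edges = edges                  # template edges, e.g. [("Smokes(e)", "Cancer(e)")]

    def instantiate(self, entities):
        """Ground the fragment for every binding of its entity variables."""
        grounded = []
        for binding in product(entities, repeat=len(self.variables)):
            subst = dict(zip(self.variables, binding))
            for parent, child in self.edges:
                grounded.append((self._ground(parent, subst),
                                 self._ground(child, subst)))
        return grounded

    @staticmethod
    def _ground(template, subst):
        # Replace each "(var)" placeholder with the bound entity.
        out = template
        for var, ent in subst.items():
            out = out.replace(f"({var})", f"({ent})")
        return out

# Hypothetical fragment and entities, grounded into a small network.
smoking = MFrag(["e"], [("Smokes(e)", "Cancer(e)")])
network = smoking.instantiate(["alice", "bob"])
```

Instantiating the single-edge fragment for two entities yields one grounded edge per entity, which is the sense in which MFrag instances "combine to form arbitrarily complex graphical probability models".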
The Origin of Relation Algebras in the Development and Axiomatization of the Calculus of Relations
, 1991
MEBN: A Logic for Open-World Probabilistic Reasoning
 Research Paper
, 2004
Abstract

Cited by 19 (8 self)
Uncertainty is a fundamental and irreducible aspect of our knowledge about the world. Probability is the most well-understood and widely applied logic for computational scientific reasoning under uncertainty. As theory and practice advance, general-purpose languages are beginning to emerge for which the fundamental logical basis is probability. However, such languages have lacked a logical foundation that fully integrates classical first-order logic with probability theory. This paper presents such an integrated logical foundation. A formal specification is presented for multi-entity Bayesian networks (MEBN), a knowledge representation language based on directed graphical probability models. A proof is given that a probability distribution over interpretations of any consistent, finitely axiomatizable first-order theory can be defined using MEBN. A semantics based on random variables provides a logically coherent foundation for open-world reasoning and a means of analyzing trade-offs between accuracy and computation cost. Furthermore, the underlying Bayesian logic is inherently open, having the ability to absorb new facts about the world, incorporate them into existing theories, and/or modify theories in the light of evidence. Bayesian inference provides both a proof theory for combining prior knowledge with observations, and a learning theory for refining a representation as evidence accrues. The results of this paper provide a logical foundation for the rapidly evolving literature on first-order Bayesian knowledge representation, and point the way toward Bayesian languages suitable for general-purpose knowledge representation and computing. Because first-order Bayesian logic contains classical first-order logic as a deterministic subset, it is a natural candidate as a universal representation for integrating domain ontologies expressed in languages based on classical first-order logic or subsets thereof.
Research in Machine Learning: Recent Progress, Classification of Methods and Future Directions
, 1990
Abstract

Cited by 14 (3 self)
The last few years have witnessed a remarkable expansion of research in machine learning. The field has gained an unprecedented popularity, several new areas have developed, and some previously established areas have gained new momentum. While symbolic methods, both empirical and knowledge-intensive, in particular, inductive concept learning and explanation-based methods, continued to be exceedingly active (Parts 2 and 3 of the book, respectively), subsymbolic approaches, especially neural networks, have experienced tremendous growth (Part 5). Unlike past efforts that concentrated on single learning strategies, the new trend has been to integrate different strategies, and to develop cognitive learning architectures (Part 4). There has been an increasing interest in experimental comparisons of various methods, and in theoretical analyses of learning algorithms. Researchers have been sharing the same data sets, and have applied their techniques to the same problems in order to understand relative merits of different methods. Theoretical investigations have brought new insights into the complexity of learning processes (Part 6).
Impact of semantic heterogeneity on federating databases
 The Computer Journal
, 1997
Abstract

Cited by 13 (4 self)
The difficult problems in the design of systems which facilitate interoperation and mediation among information sources and their consumers arise from the presence of semantic heterogeneity among the schemas and ontologies supporting the different services. The purpose of this paper is to develop a taxonomy of semantic heterogeneity, and to describe, taking the perspective of text databases, the conditions under which autonomy-respecting interoperation of different kinds is likely to be feasible. The main conclusion is that interoperation can be based on structured database technology only if the participating organizations communicate among themselves; otherwise the considerations underlying text databases dominate the technology used.
Preference modelling
 State of the Art in Multiple Criteria Decision Analysis
, 2005
Abstract

Cited by 13 (0 self)
This paper provides the reader with a presentation of the fundamental notions of preference modelling as well as some recent results in this field. Preference modelling is an inevitable step in a variety of fields: economics, sociology, psychology, mathematical programming, even medicine, archaeology, and obviously decision analysis. Our notation and some basic definitions, such as those of binary relation, properties and ordered sets, are presented at the beginning of the paper. We start by discussing different reasons for constructing a model of preferences. We then go through a number of issues that influence the construction of preference models. Different formalisations besides classical logic, such as fuzzy sets and non-classical logics, become necessary. We then present different types of preference structures reflecting the behavior of a decision-maker: classical, extended and valued ones. It is often useful to have a numerical representation of preferences: functional representations, value functions. The concepts of thresholds and minimal representation are also introduced in this section. In section 7, we briefly explore the concept of deontic logic (logic of preference) and other formalisms associated with "compact representation of preferences" introduced for special purposes. We end the paper with some concluding remarks.
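The preliminaries this abstract mentions, binary relations and ordered sets, can be made concrete with a small check of standard order properties. The relation below ("at least as good as" over three hypothetical alternatives) is an invented example, not taken from the paper.

```python
# Illustrative checks of basic order properties on a finite binary
# relation, in the spirit of the paper's preliminaries on binary
# relations and ordered sets. The relation itself is hypothetical.

def is_reflexive(rel, universe):
    return all((x, x) in rel for x in universe)

def is_antisymmetric(rel):
    # No distinct pair may be related in both directions.
    return all(not ((y, x) in rel) for (x, y) in rel if x != y)

def is_transitive(rel):
    return all((x, z) in rel
               for (x, y) in rel
               for (y2, z) in rel if y == y2)

universe = {"a", "b", "c"}
# Hypothetical weak preference "at least as good as": a >= b >= c.
pref = {("a", "a"), ("b", "b"), ("c", "c"),
        ("a", "b"), ("b", "c"), ("a", "c")}

is_partial_order = (is_reflexive(pref, universe)
                    and is_antisymmetric(pref)
                    and is_transitive(pref))
```

A relation passing all three checks is a partial order; dropping the pair `("a", "c")` would break transitivity, which is exactly the kind of structural distinction (classical vs. extended vs. valued preference structures) the paper develops.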
Unpacking meaning from words: A contextcentered approach to computational lexicon design
 Proc CONTEXT 2003
, 2003
Abstract

Cited by 11 (3 self)
The knowledge representation tradition in computational lexicon design represents words as static encapsulations of purely lexical knowledge. We suggest that this view poses certain limitations on the ability of the lexicon to generate nuance-laden and context-sensitive meanings, because word boundaries are obstructive, and the impact of non-lexical knowledge on meaning is unaccounted for. To address these problems, we explore a context-centered approach to lexicon design called a Bubble Lexicon. Inspired by Ross Quillian's Semantic Memory System, we represent word-concepts as nodes on a symbolic-connectionist network. In a Bubble Lexicon, a word's meaning is defined by a dynamically grown, context-sensitive bubble, thus giving a more natural account of systematic polysemy. Linguistic assembly tasks such as attribute attachment are made context-sensitive, and the incorporation of general world knowledge improves generative capability. Indicative trials over an implementation of the Bubble Lexicon lend support to our hypothesis that unpacking meaning from predefined word structures is a step toward a more natural handling of context in language.
Metaphor in Diagrams
 Darwin College, Univ. of Cambridge
, 1998
Abstract

Cited by 11 (0 self)
Modern computer systems routinely present information to the user as a combination of text and diagrammatic images, described as "graphical user interfaces". Practitioners and researchers in Human-Computer Interaction (HCI) generally believe that the value of these diagrammatic representations is derived from metaphorical reasoning; they communicate abstract information by depicting a physical situation from which the abstractions can be inferred. This assumption has been prevalent in HCI research for over 20 years, but has seldom been tested experimentally. This thesis analyses the reasons why diagrams are believed to assist with abstract reasoning. It then presents the results of a series of experiments testing the contribution of metaphor to comprehension, problem solving, explanation and memory tasks carried out using a range of different diagrams. The results indicate that explicit metaphors provide surprisingly little benefit for cognitive tasks using diagrams as an external re...
Analogical prediction
 Proceedings of the 9th International Workshop on Inductive Logic Programming, volume 1634 of Lecture Notes in Artificial Intelligence
, 1999
Abstract

Cited by 9 (0 self)
Inductive Logic Programming (ILP) involves constructing a hypothesis H on the basis of background knowledge B and training examples E. An independent test set is used to evaluate the accuracy of H. This paper concerns an alternative approach called Analogical Prediction (AP). AP takes B and E, and then for each test example 〈x, y〉 forms a hypothesis Hx from B, E and x. Evaluation of AP is based on estimating the probability that Hx(x) = y for a randomly chosen 〈x, y〉. AP has been implemented within CProgol4.4. Experiments in the paper show that on English past-tense data AP has significantly higher predictive accuracy than both previously reported results and CProgol in inductive mode. However, on KRK illegal, AP does not outperform CProgol in inductive mode. We conjecture that AP has advantages for domains in which a large proportion of the examples must be treated as exceptions with respect to the hypothesis vocabulary. The relationship of AP to analogy and instance-based learning is discussed. Limitations of the given implementation of AP are discussed and improvements suggested.
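The AP protocol described in this abstract, learn a fresh hypothesis Hx per test query and score whether Hx(x) = y, is just an evaluation loop and can be sketched directly. The learner below is a trivial majority-label stand-in for CProgol, and the data are invented; only the shape of the loop follows the abstract.

```python
# Minimal sketch of the Analogical Prediction evaluation protocol.
# `fit` stands in for the ILP learner (CProgol in the paper); the
# majority-label learner and the toy data are hypothetical.

def analogical_prediction(fit, background, examples, test_set):
    """For each test pair (x, y), learn H_x from B, E and x, then
    check whether H_x predicts y; return the estimated accuracy."""
    hits = 0
    for x, y in test_set:
        hypothesis = fit(background, examples, x)   # H_x built per query
        if hypothesis(x) == y:
            hits += 1
    return hits / len(test_set)

def majority_fit(background, examples, x):
    # Toy stand-in learner: predict the majority training label,
    # ignoring background knowledge and the query itself.
    labels = [y for _, y in examples]
    majority = max(set(labels), key=labels.count)
    return lambda _: majority

train = [(1, "pos"), (2, "pos"), (3, "neg")]
test = [(4, "pos"), (5, "neg")]
acc = analogical_prediction(majority_fit, None, train, test)
```

The contrast with ordinary induction is visible in the loop: a standard ILP run would call `fit` once and reuse H on every test example, whereas AP rebuilds the hypothesis with the query x available each time.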