Results 11–20 of 100
FLORA-2: A Rule-Based Knowledge Representation and Inference Infrastructure for the Semantic Web
 In Second International Conference on Ontologies, Databases and Applications of Semantics (ODBASE)
, 2003
Abstract

Cited by 55 (5 self)
Abstract. Flora-2 is a rule-based object-oriented knowledge base system designed for a variety of automated tasks on the Semantic Web, ranging from metadata management to information integration to intelligent agents. The Flora-2 system integrates F-logic, HiLog, and Transaction Logic into a coherent knowledge representation and inference language. The result is a flexible and natural framework that combines rule-based and object-oriented paradigms. This paper discusses the principles underlying the design of the Flora-2 system and describes its salient features, including meta-programming, reification, logical database updates, encapsulation, and support for dynamic modules.
Current Approaches to Handling Imperfect Information in Data and Knowledge Bases
, 1996
Abstract

Cited by 52 (1 self)
This paper surveys methods for representing and reasoning with imperfect information. It opens with an attempt to classify the different types of imperfection that may pervade data, and a discussion of the sources of such imperfections. The classification is then used as a framework for considering work that explicitly concerns the representation of imperfect information, and related work on how imperfect information may be used as a basis for reasoning. The work that is surveyed is drawn from both the field of databases and the field of artificial intelligence. Both of these areas have long been concerned with the problems caused by imperfect information, and this paper stresses the relationships between the approaches developed in each.
A Parametric Approach to Deductive Databases with Uncertainty
, 1997
Abstract

Cited by 44 (6 self)
Numerous frameworks have been proposed in recent years for deductive databases with uncertainty. These frameworks differ in (i) their underlying notion of uncertainty, (ii) the way in which uncertainties are manipulated, and (iii) the way in which uncertainty is associated with the facts and rules of a program. On the basis of (iii), these frameworks can be classified into implication-based (IB) and annotation-based (AB) frameworks. In this paper, we develop a generic framework, called the parametric framework, as a unifying umbrella for IB frameworks. We develop the declarative, fixpoint, and proof-theoretic semantics of programs in the parametric framework and show their equivalence. Using this framework as a basis, we study the query optimization problem of containment of conjunctive queries in this framework, and establish necessary and sufficient conditions for containment for several classes of parametric conjunctive queries. Our results yield tools for use in query optimization for large classes of query programs in IB deductive databases with uncertainty.
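The IB style this abstract describes can be illustrated with a small sketch. In the parametric framework the functions used to combine, propagate, and merge certainties are left as parameters; the min/product/max choices and all rule and fact certainties below are purely hypothetical.

```python
# A toy illustration of implication-based (IB) certainty propagation.
# The conjunction, propagation, and disjunction functions are parameters
# of the framework; min, product, and max are illustrative choices here.

def conjunction(certainties):
    """Combine the certainties of the atoms in a rule body (Goedel t-norm)."""
    return min(certainties)

def propagate(rule_certainty, body_certainty):
    """Propagate the combined body certainty through the rule (product)."""
    return rule_certainty * body_certainty

def disjunction(a, b):
    """Combine certainties from alternative derivations of the same atom."""
    return max(a, b)

# Hypothetical fact certainties.
facts = {
    ("edge", "a", "b"): 0.9,
    ("edge", "b", "c"): 0.8,
    ("edge", "a", "d"): 0.7,
    ("edge", "d", "c"): 0.95,
}

# Rule (certainty 0.95): path(X, Z) <- edge(X, Y), edge(Y, Z).
via_b = propagate(0.95, conjunction([facts[("edge", "a", "b")],
                                     facts[("edge", "b", "c")]]))
via_d = propagate(0.95, conjunction([facts[("edge", "a", "d")],
                                     facts[("edge", "d", "c")]]))
path_ac = disjunction(via_b, via_d)
print(path_ac)  # 0.95 * min(0.9, 0.8) = 0.76
```

Swapping in other t-norms or propagation functions changes only the three parameter functions, which is the point of treating them as parameters.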
Log-linear Models for First-Order Probabilistic Reasoning
 In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence
, 1999
Abstract

Cited by 35 (5 self)
Recent work on log-linear models in probabilistic constraint logic programming is applied to first-order probabilistic reasoning. Probabilities are defined directly on the proofs of atomic formulae, and by marginalisation on the atomic formulae themselves. We use Stochastic Logic Programs (SLPs) composed of labelled and unlabelled definite clauses to define the proof probabilities. We have a conservative extension of first-order reasoning, so that, for example, there is a one-to-one mapping between logical and random variables. We show how, in this framework, Inductive Logic Programming (ILP) can be used to induce the features of a log-linear model from data. We also compare the presented framework with other approaches to first-order probabilistic reasoning. Keywords: log-linear models, constraint logic programming, inductive logic programming
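The core idea — probabilities on proofs, marginalised onto atoms — can be sketched in a few lines. In this toy, each proof weighs in as the product of the labels of the clauses it uses, and atom probabilities come from normalizing the summed proof weights; the clause labels and proof sets are hypothetical, not taken from the paper.

```python
# Toy SLP-style computation: proof weight = product of clause labels used;
# atom probability = normalized sum of that atom's proof weights.
# Labels and proofs below are hypothetical.

labels = {"c1": 0.4, "c2": 0.6, "c3": 0.5}

# Alternative proofs of each atom, as lists of the clauses each proof uses.
proofs = {
    "p(a)": [["c1", "c3"], ["c2"]],
    "p(b)": [["c2", "c3"]],
}

def proof_weight(clauses):
    w = 1.0
    for c in clauses:
        w *= labels[c]
    return w

atom_weight = {atom: sum(proof_weight(p) for p in prs)
               for atom, prs in proofs.items()}
Z = sum(atom_weight.values())                        # normalizing constant
atom_prob = {atom: w / Z for atom, w in atom_weight.items()}
print(atom_prob)  # p(a) has weight 0.4*0.5 + 0.6 = 0.8, p(b) has 0.3
```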
Probabilistic Logic Learning
 ACMSIGKDD Explorations: Special issue on MultiRelational Data Mining
, 2004
Abstract

Cited by 34 (8 self)
The past few years have witnessed significant interest in probabilistic logic learning, i.e. in research lying at the intersection of probabilistic reasoning, logical representations, and machine learning. A rich variety of different formalisms and learning techniques have been developed. This paper provides an introductory survey and overview of the state-of-the-art in probabilistic logic learning through the identification of a number of important probabilistic, logical, and learning concepts.
Query evaluation in probabilistic relational databases
 Theoretical Computer Science
, 1997
Abstract

Cited by 32 (0 self)
This paper describes a generalization of the relational model in order to capture and manipulate a type of probabilistic information. Probabilistic databases are formalized by means of logic theories based on a probabilistic first-order language proposed by Halpern. A sound and complete method is described for evaluating queries in probabilistic theories. The generalization proposed can be incorporated into existing relational systems with the addition of a component for manipulating propositional formulas.
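The role of the propositional-formula component mentioned above can be sketched as follows: each answer tuple carries a formula over independent base events, and its probability is the total weight of the event worlds satisfying that formula. The event names, probabilities, and relations below are hypothetical, and brute-force world enumeration is used only for illustration.

```python
from itertools import product

# Toy possible-worlds evaluation over a probabilistic relation.
# Event probabilities below are hypothetical and assumed independent.
event_prob = {"e1": 0.7, "e2": 0.5}

# Suppose r(a, b) holds iff e1, and s(b, c) holds iff e2; then the join
# tuple (a, b, c) carries the propositional formula e1 AND e2.
join_formula = lambda world: world["e1"] and world["e2"]

def prob(formula):
    """Sum the probabilities of the event worlds in which formula holds."""
    total = 0.0
    for values in product([True, False], repeat=len(event_prob)):
        world = dict(zip(event_prob, values))
        weight = 1.0
        for event, holds in world.items():
            weight *= event_prob[event] if holds else 1 - event_prob[event]
        if formula(world):
            total += weight
    return total

print(prob(join_formula))  # 0.7 * 0.5 = 0.35
```

Enumeration is exponential in the number of events; it stands in here for whatever formula-manipulation component a real system would add.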
Antitonic Logic Programs
, 2001
Abstract

Cited by 30 (10 self)
In a previous work we have defined Monotonic Logic Programs, which extend definite logic programming to arbitrary complete lattices of truth-values with an appropriate notion of implication. We have shown elsewhere that this framework is general enough to capture Generalized Annotated Logic Programs, Probabilistic Deductive Databases, Possibilistic Logic Programming, Hybrid Probabilistic Logic Programs, and Fuzzy Logic Programming [3, 4]. However, none of these semantics defines a form of non-monotonic negation, which is fundamental for several knowledge representation applications. In the spirit of our previous work, we generalise our framework of Monotonic Logic Programs to allow for rules with arbitrary antitonic bodies over general complete lattices, of which normal programs are a special case. We then show that all the standard logic programming theoretical results carry over to Antitonic Logic Programs, defining Stable Model-like and Well-Founded Model-like semantics.
Probabilistic Logic Programming and Bayesian Networks
 In Asian Computing Science Conference
, 1995
Abstract

Cited by 30 (0 self)
We present a probabilistic logic programming framework that allows the representation of conditional probabilities. While conditional probabilities are the most commonly used method for representing uncertainty in probabilistic expert systems, they have been largely neglected by work in quantitative logic programming. We define a fixpoint theory, declarative semantics, and proof procedure for the new class of probabilistic logic programs. Compared to other approaches to quantitative logic programming, we provide a true probabilistic framework with potential applications in probabilistic expert systems and decision support systems. We also discuss the relationship between such programs and Bayesian networks, thus moving toward a unification of two major approaches to automated reasoning. To appear in Proceedings of the 1995 Asian Computing Science Conference, Pathumthani, Thailand, December 1995. This work was partially supported by NSF grant IRI-9509165.
Probabilistic Agent Programs
, 2000
Abstract

Cited by 26 (9 self)
Agents are small programs that autonomously take actions based on changes... In this paper, we propose the concept of a probabilistic agent program and show how, given an arbitrary program written in any imperative language, we may build a declarative "probabilistic" agent program on top of it which supports decision making in the presence of uncertainty. We provide two alternative semantics for probabilistic agent programs. We show that the second semantics, though more epistemically appealing, is more complex to compute. We provide sound and complete algorithms to compute the semantics of positive agent programs.
On A Theory of Probabilistic Deductive Databases
 THEORY AND PRACTICE OF LOGIC PROGRAMMING
, 2001
Abstract

Cited by 26 (0 self)
We propose a framework for modeling uncertainty where both belief and doubt can be given independent, first-class status. We adopt probability theory as the mathematical formalism for manipulating uncertainty. An agent can express the uncertainty in her knowledge about a piece of information in the form of a confidence level, consisting of a pair of intervals of probability, one for each of her belief and doubt. The space of confidence levels naturally leads to the notion of a trilattice, similar in spirit to Fitting's bilattices. Intuitively, the points in such a trilattice can be ordered according to truth, information, or precision. We develop a framework for probabilistic deductive databases by associating confidence levels with the facts and rules of a classical deductive database. While the trilattice structure offers a variety of choices for defining the semantics of probabilistic deductive databases, our choice of semantics is based on the truth-ordering, which we find to be closest to the classical framework for deductive databases. In addition to proposing a declarative semantics based on valuations and an equivalent semantics based on fixpoint theory, we also propose a proof procedure and prove it sound and complete. We show that while classical Datalog query programs have a polynomial time data complexity, certain query programs in the probabilistic deductive database framework do not even terminate on some input databases. We identify a large natural class of query programs of practical interest in our framework, and show that programs in this class possess polynomial time data complexity, i.e. not only do they terminate on every input database, they are guaranteed to do so in a number of steps polynomial in the input database size.
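A minimal sketch of the confidence levels described above: a pair of probability intervals, one for belief and one for doubt, compared under a simplified truth-ordering in which more belief and less doubt is "truer". The class name, comparison, and interval values are illustrative readings, not the paper's own definitions.

```python
from dataclasses import dataclass

# Toy model of a confidence level: belief and doubt each carry a
# (lower, upper) probability interval. Values below are hypothetical.

@dataclass(frozen=True)
class Confidence:
    belief: tuple  # (lower, upper) probability bounds on belief
    doubt: tuple   # (lower, upper) probability bounds on doubt

def truth_leq(c1, c2):
    """c1 precedes c2 in the truth-ordering: lower belief, higher doubt."""
    return (c1.belief[0] <= c2.belief[0] and c1.belief[1] <= c2.belief[1]
            and c1.doubt[0] >= c2.doubt[0] and c1.doubt[1] >= c2.doubt[1])

weak = Confidence(belief=(0.3, 0.5), doubt=(0.4, 0.6))
strong = Confidence(belief=(0.7, 0.9), doubt=(0.0, 0.2))
print(truth_leq(weak, strong))  # True
```

The other two orderings the abstract mentions (information and precision) would compare the same intervals along different axes; only the truth-ordering is sketched here.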