Results 1–10 of 41
Probabilistic Logic Programming
, 1992
Abstract

Cited by 131 (7 self)
Of all scientific investigations into reasoning with uncertainty and chance, probability theory is perhaps the best understood paradigm. Nevertheless, all studies conducted thus far into the semantics of quantitative logic programming (cf. van Emden [51], Fitting [18, 19, 20], Blair and Subrahmanian [5, 6, 49, 50], Kifer et al. [29, 30, 31]) have restricted themselves to nonprobabilistic semantical characterizations. In this paper, we take a few steps towards rectifying this situation. We define a logic programming language that is syntactically similar to the annotated logics of [5, 6], but in which the truth values are interpreted probabilistically. A probabilistic model theory and fixpoint theory are developed for such programs. This probabilistic model theory satisfies the requirements proposed by Fenstad [16] for a function to be called probabilistic. The logical treatment of probabilities is complicated by two facts: first, that the connectives cannot be interpreted truth function...
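The interval-annotated fixpoint semantics this abstract describes can be pictured with a small sketch. This is a hypothetical miniature under our own encoding, not the paper's formulation: every atom starts at the maximally uncertain interval [0, 1], and a rule tightens its head's interval by intersection whenever each body atom's current interval lies inside the annotation the rule requires.

```python
# Hypothetical miniature of a fixpoint (T_P-style) computation for a
# ground program with probability-interval annotations. The program
# encoding and the containment test are illustrative assumptions.

def intersect(a, b):
    """Intersect two intervals [lo, hi]; narrower means more information."""
    return (max(a[0], b[0]), min(a[1], b[1]))

def contained(val, req):
    """Is the current interval `val` inside the required annotation `req`?"""
    return req[0] <= val[0] and val[1] <= req[1]

def fixpoint(atoms, rules):
    """rules: list of (head, head_annotation, [(body_atom, required), ...])."""
    val = {a: (0.0, 1.0) for a in atoms}          # start fully uncertain
    changed = True
    while changed:
        changed = False
        for head, ann, body in rules:
            if all(contained(val[b], req) for b, req in body):
                new = intersect(val[head], ann)   # tighten the head
                if new != val[head]:
                    val[head], changed = new, True
    return val

# A fact is a rule with an empty body; the annotations are made up.
program = [("edge", (0.8, 1.0), []),
           ("path", (0.7, 1.0), [("edge", (0.5, 1.0))])]
result = fixpoint(["edge", "path"], program)
```

Iteration stops when no rule can tighten any interval further, mirroring the least-fixpoint construction the abstract refers to.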
Hybrid Probabilistic Programs
 Journal of Logic Programming
, 1997
Abstract

Cited by 70 (1 self)
The precise probability of a compound event (e.g. e1 ∧ e2, e1 ∨ e2) depends upon the known relationships (e.g. independence, mutual exclusion, ignorance of any relationship, etc.) between the primitive events that constitute the compound event. To date, most research on probabilistic logic programming [20, 19, 22, 23, 24] has assumed that we are ignorant of the relationship between primitive events. Likewise, most research in AI (e.g. Bayesian approaches) has assumed that primitive events are independent. In this paper, we propose a hybrid probabilistic logic programming language in which the user can explicitly associate, with any given probabilistic strategy, a conjunction and a disjunction operator, and then write programs using these operators. We describe the syntax of hybrid probabilistic programs, and develop a model theory and fixpoint theory for such programs. Last, but not least, we develop three alternative procedures to answer queries, each of which is guaranteed to be sound ...
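The strategy-specific connectives can be illustrated concretely. In this sketch (the names and the interval encoding are our own, not the paper's syntax), each strategy supplies a conjunction/disjunction pair over probability intervals: independence gives products, while ignorance of any relationship gives only the Fréchet bounds.

```python
# Sketch: (conjunction, disjunction) pairs over probability intervals
# [l, u], one pair per assumed relationship between primitive events.
# Function names and the STRATEGIES table are illustrative.

def ind_and(a, b):                      # events assumed independent
    return (a[0] * b[0], a[1] * b[1])

def ind_or(a, b):
    return (a[0] + b[0] - a[0] * b[0], a[1] + b[1] - a[1] * b[1])

def ign_and(a, b):                      # no known relationship: Frechet bounds
    return (max(0.0, a[0] + b[0] - 1.0), min(a[1], b[1]))

def ign_or(a, b):
    return (max(a[0], b[0]), min(1.0, a[1] + b[1]))

STRATEGIES = {"independence": (ind_and, ind_or),
              "ignorance":    (ign_and, ign_or)}

e1, e2 = (0.7, 0.7), (0.6, 0.6)         # point probabilities as intervals
conj, disj = STRATEGIES["ignorance"]
lo, hi = conj(e1, e2)                   # P(e1 and e2) is only bounded
```

Under ignorance the conjunction of two point probabilities widens into a genuine interval, which is exactly why a single fixed connective cannot serve all programs.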
A Semantical Framework for Supporting Subjective and Conditional Probabilities in Deductive Databases
 Journal of Automated Reasoning
, 1991
Abstract

Cited by 60 (4 self)
We present a theoretical basis for supporting subjective and conditional probabilities in deductive databases. We design a language that allows a user greater expressive power than classical logic programming. In particular, a user can express the fact that A is possible (i.e. A has nonzero probability), B is possible, but (A ∧ B) as a whole is impossible. A user can also freely specify probability annotations that may contain variables. The focus of this paper is to study the semantics of programs written in such a language in relation to probability theory. Our model theory, which is founded on the classical one, captures the uncertainty described in a probabilistic program at the level of Herbrand interpretations. Furthermore, we develop a fixpoint theory and a proof procedure for such programs and present soundness and completeness results. Finally, we characterize the relationships between probability theory and the fixpoint, model, and proof theory of our programs.
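The motivating situation, where A and B are each possible while A ∧ B is impossible, is easy to verify on a two-point sample space. The following check is ours, not the paper's, and it shows why a semantics cannot simply multiply the two probabilities.

```python
# Sanity check: A and B each have nonzero probability, yet their
# conjunction is impossible because they are mutually exclusive.
space = {0: 0.5, 1: 0.5}                 # a two-outcome sample space
A, B = {0}, {1}                          # disjoint events

def P(event):
    return sum(space[w] for w in event)

possible_a = P(A) > 0                    # True
possible_b = P(B) > 0                    # True
joint = P(A & B)                         # 0: the conjunction is impossible
```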
Probabilistic Deductive Databases
, 1994
Abstract

Cited by 57 (2 self)
Knowledge-base (KB) systems must typically deal with imperfection in knowledge, e.g. in the form of incompleteness, inconsistency, and uncertainty, to name a few. Currently KB system development is mainly based on expert system technology. Expert systems, through their support for rule-based programming, uncertainty, etc., offer a convenient framework for KB system development. But they require the user to be well versed in the low-level details of system implementation. The manner in which uncertainty is handled has little mathematical basis. There is no decent notion of query optimization, forcing the user to take responsibility for an efficient implementation of the KB system. We contend KB system development can and should take advantage of deductive database technology, which overcomes most of the above limitations. An important problem here is to extend deductive databases to provide a systematic basis for rule-based programming with imperfect knowledge. In this paper, we are interested in an extension handling probabilistic knowledge.
Current Approaches to Handling Imperfect Information in Data and Knowledge Bases
, 1996
Abstract

Cited by 52 (1 self)
This paper surveys methods for representing and reasoning with imperfect information. It opens with an attempt to classify the different types of imperfection that may pervade data, and a discussion of the sources of such imperfections. The classification is then used as a framework for considering work that explicitly concerns the representation of imperfect information, and related work on how imperfect information may be used as a basis for reasoning. The work that is surveyed is drawn from both the field of databases and the field of artificial intelligence. Both of these areas have long been concerned with the problems caused by imperfect information, and this paper stresses the relationships between the approaches developed in each.
Discovery of Interesting Usage Patterns from Web Data
 Advances in Web Usage Analysis and User Profiling. LNAI 1836
, 1999
Abstract

Cited by 51 (1 self)
Web Usage Mining is the application of data mining techniques to large Web data repositories in order to extract usage patterns. As with many data mining application domains, the identification of patterns that are considered interesting is a problem that must be solved in addition to simply generating them. A necessary step in identifying interesting results is quantifying what is considered uninteresting in order to form a basis for comparison. Several research efforts have relied on manually generated sets of uninteresting rules. However, manual generation of a comprehensive set of evidence about beliefs for a particular domain is impractical in many cases. Generally, domain knowledge can be used to automatically create evidence for or against a set of beliefs. This paper develops a quantitative model based on support logic for determining the interestingness of discovered patterns. For Web Usage Mining, there are three types of domain information available: usage, co...
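One way to picture the belief-based interestingness idea is with support pairs in the spirit of support logic. The scoring below is purely our illustrative assumption, not the paper's model: a mined pattern is flagged interesting when its support pair is disjoint from the pair derived from domain beliefs.

```python
# Illustrative sketch only: beliefs and mined rules carry support
# pairs (necessary_support, possible_support); a mined pattern is
# flagged interesting when its pair conflicts with the belief pair
# derived from domain knowledge. The thresholding is an assumption.

def conflicts(mined, belief):
    """Disjoint support pairs signal an unexpected (interesting) pattern."""
    return mined[1] < belief[0] or mined[0] > belief[1]

belief = (0.8, 0.9)        # domain knowledge: rule strongly expected
mined  = (0.1, 0.2)        # observed support for the same rule
interesting = conflicts(mined, belief)   # observation contradicts belief
```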
Artificial Reasoning with Subjective Logic
, 1997
Abstract

Cited by 39 (8 self)
This paper defines a framework for artificial reasoning called Subjective Logic, which consists of a belief model called opinion and a set of operations for combining opinions. Subjective Logic is an extension of standard logic that uses continuous uncertainty and belief parameters instead of only discrete truth values. It can also be seen as an extension of classical probability calculus, using a second-order probability representation instead of the standard first-order representation. In addition to the standard logical operations, Subjective Logic contains some operations specific to belief theory, such as consensus and recommendation. In particular, we show that Dempster's consensus rule is inconsistent with Bayes' rule and therefore is wrong, and provide an alternative rule with a solid mathematical basis. Subjective Logic is directly compatible with traditional mathematical frameworks, but is also suitable for handling ignorance and uncertainty which is required in artificial...
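The consensus operation can be illustrated with opinion triples (belief, disbelief, uncertainty) that sum to one. The formula below is the commonly published form of Jøsang's consensus rule, included as a sketch rather than a transcription of the paper, and it assumes the two agents are not both fully certain (otherwise the normaliser is zero).

```python
# Sketch of the consensus operator over opinions (b, d, u) with
# b + d + u = 1. Assumes u_a and u_b are not both zero.

def consensus(wa, wb):
    ba, da, ua = wa
    bb, db, ub = wb
    k = ua + ub - ua * ub                 # normaliser
    return ((ba * ub + bb * ua) / k,
            (da * ub + db * ua) / k,
            (ua * ub) / k)

# Two moderately uncertain opinions fuse into a less uncertain one.
fused = consensus((0.6, 0.2, 0.2), (0.4, 0.4, 0.2))
```

Note that the fused uncertainty (ua·ub)/k is smaller than either input uncertainty, matching the intuition that independent observers reinforce each other.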
Trust-Based Decision Making for Electronic Transactions
 Proceedings of the Fourth Nordic Workshop on Secure Computer Systems (NORDSEC'99)
, 1999
Abstract

Cited by 35 (7 self)
Financial transactions that are made in an environment of imperfect knowledge will always contain a degree of risk. When dealing with humans or human organisations, the relative knowledge about the cooperative behaviour of others can be perceived as trust, and trust is therefore a crucial factor in the decision-making process. However, assessing trust becomes a problem in electronic transactions due to the impersonal aspects of computer networks. This paper proposes a scheme for propagating trust through computer networks based on public key certificates and trust relationships, and demonstrates how the resulting measures of trust can be used for making decisions about electronic transactions.
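One simple way to realise such propagation, offered here as a heuristic sketch under our own assumptions rather than the paper's scheme, is to discount trust multiplicatively along a certificate chain and to combine independent parallel chains noisy-OR style.

```python
from functools import reduce

# Heuristic sketch: trust values in [0, 1] attached to certificate
# links; both combination rules below are common simplifications,
# not necessarily the paper's operators.

def chain_trust(links):
    """Discount trust multiplicatively along a single chain."""
    return reduce(lambda acc, t: acc * t, links, 1.0)

def combined_trust(paths):
    """Combine independent parallel chains (noisy-OR)."""
    return 1.0 - reduce(lambda acc, t: acc * (1.0 - t), paths, 1.0)

one_chain = chain_trust([0.9, 0.8, 0.5])   # long chains weaken trust
two_paths = combined_trust([0.5, 0.5])     # parallel chains strengthen it
```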
Probabilistic Temporal Databases, I: Algebra
Abstract

Cited by 26 (6 self)
... In this paper, we first introduce the syntax of Temporal-Probabilistic (TP) relations and then show how they can be converted to an explicit, significantly more space-consuming form called Annotated Relations. We then present a Theoretical Annotated Temporal Algebra (TATA). Being explicit, TATA is convenient for specifying how the algebraic operations should behave, but is impractical to use because annotated relations are overwhelmingly large. Next, we ...
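The conversion from a TP tuple to annotated form can be pictured as follows. The uniform distribution and the tuple layout are illustrative assumptions, but they make the space blow-up obvious: one annotated row per time point.

```python
# Illustrative sketch: expand a temporal-probabilistic tuple, valid
# over the time points start..end with (here) a uniform distribution,
# into explicit annotated tuples (data, time_point, probability).

def annotate(data, start, end, total_p):
    n = end - start + 1                    # number of time points covered
    return [(data, t, total_p / n) for t in range(start, end + 1)]

# One compact TP tuple becomes four explicit annotated rows.
rows = annotate(("shipment", "dock3"), 1, 4, 0.8)
```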
Resolving Attribute Incompatibility in Database Integration: An Evidential Reasoning Approach
 Proc. of 10th IEEE Data Eng. Conf.
, 1994
Abstract

Cited by 24 (3 self)
Resolving domain incompatibility among independently developed databases often involves uncertain information. DeMichiel [5] showed that uncertain information can be generated by the mapping of conflicting attributes to a common domain, based on some domain knowledge. In this paper, we show that uncertain information can also arise when the database integration process requires information that is not directly represented in the component databases but can be obtained through some summary of the data. We therefore propose an extended relational model based on the Dempster-Shafer theory of evidence [14] to incorporate such uncertain knowledge about the source databases. We also develop a full set of extended relational operations over the extended relations. In particular, an extended union operation has been formalized to combine two extended relations using Dempster's rule of combination. The closure and boundedness properties of our proposed extended operations are formulated.
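Dempster's rule of combination, which the extended union relies on, can be sketched directly over mass functions. The frozenset encoding and the example masses below are our own illustration.

```python
from itertools import product

# Sketch of Dempster's rule of combination. A mass function maps
# focal elements (frozensets of hypotheses) to masses summing to 1.
# Assumes the two sources are not totally conflicting (norm > 0).

def dempster(m1, m2):
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        c = a & b
        if c:                                   # compatible evidence
            combined[c] = combined.get(c, 0.0) + pa * pb
        else:                                   # contradictory evidence
            conflict += pa * pb
    norm = 1.0 - conflict
    return {s: m / norm for s, m in combined.items()}

# Two heavily conflicting sources: most of the product mass (0.81)
# is discarded and the remainder is renormalised.
m1 = {frozenset({"x"}): 0.9, frozenset({"x", "y"}): 0.1}
m2 = {frozenset({"y"}): 0.9, frozenset({"x", "y"}): 0.1}
fused = dempster(m1, m2)
```

The heavy renormalisation in this example is exactly the behaviour of Dempster's rule under strong conflict that the Subjective Logic paper above criticises.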