Results 1–10 of 23
Supporting Valid-Time Indeterminacy
ACM Transactions on Database Systems, 1998
Abstract

Cited by 86 (17 self)
In valid-time indeterminacy it is known that an event stored in a database did in fact occur, but it is not known exactly when. In this paper we extend the SQL data model and query language to support valid-time indeterminacy. We represent the occurrence time of an event with a set of possible instants, delimiting when the event might have occurred, and a probability distribution over that set. We also describe query language constructs to retrieve information in the presence of indeterminacy. These constructs enable users to specify their credibility in the underlying data and their plausibility in the relationships among that data. A denotational semantics for SQL's select statement with optional credibility and plausibility constructs is given. We show that this semantics is reliable, in that it never produces incorrect information; is maximal, in that if it were extended to be more informative, the results may not be reliable; and reduces to the previous semantics when there is no indeterminacy. Although the extended data model and query language provide needed modeling capabilities, these extensions appear initially to carry a significant execution cost. A contribution of this paper is to demonstrate that our approach is useful and practical. An efficient representation of valid-time indeterminacy and efficient query processing algorithms are provided. The cost of
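The representation the abstract describes, a set of possible instants with a probability distribution over them, can be sketched in a few lines of Python. Everything below (the class and method names, the uniform example distribution, the plausibility threshold) is illustrative, not the paper's actual formalism:

```python
from dataclasses import dataclass

@dataclass
class IndeterminateInstant:
    # Hypothetical model: maps each possible occurrence instant to its
    # probability; the probabilities are assumed to sum to 1.
    distribution: dict

    def prob_before(self, t):
        """Probability that the event occurred strictly before instant t."""
        return sum(p for instant, p in self.distribution.items() if instant < t)

# An event known to have occurred on day 10, 11, or 12, uniformly likely:
shipment = IndeterminateInstant({10: 1/3, 11: 1/3, 12: 1/3})

# A query with a plausibility threshold: "did the shipment plausibly
# (with probability >= 0.5) occur before day 12?"  P = 2/3, so yes.
print(shipment.prob_before(12) >= 0.5)  # True
```

A determinate event is just the special case where the distribution assigns probability 1 to a single instant, which is one way to read the paper's claim that the semantics reduces to the previous one when there is no indeterminacy.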
Models for Incomplete and Probabilistic Information
IEEE Data Engineering Bulletin, 2006
Abstract

Cited by 63 (9 self)
We discuss, compare and relate some old and some new models for incomplete and probabilistic databases. We characterize the expressive power of c-tables over infinite domains and we introduce a new kind of result, algebraic completion, for studying less expressive models. By viewing probabilistic models as incompleteness models with additional probability information, we define completeness and closure under query languages of general probabilistic database models, and we introduce a new such model, probabilistic c-tables, that is shown to be complete and closed under the relational algebra.
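The abstract's view of a probabilistic model as an incompleteness model plus probability information can be sketched concretely: a c-table tuple carries a condition over named variables, and attaching a distribution to each variable lets us compute the tuple's marginal probability by summing over the satisfying assignments. The variable names, the distributions, and the independence assumption below are invented for illustration and are not the paper's notation:

```python
from itertools import product

# Assumed variable distributions (independent), as a probabilistic c-table
# would attach to its condition variables:
variables = {"x": {1: 0.7, 2: 0.3}, "y": {1: 0.5, 2: 0.5}}

def tuple_probability(condition):
    """Sum the probability of every variable assignment under which the
    tuple's condition holds (brute force over all assignments)."""
    names = list(variables)
    total = 0.0
    for values in product(*(variables[n] for n in names)):
        assignment = dict(zip(names, values))
        p = 1.0
        for n, v in assignment.items():
            p *= variables[n][v]
        if condition(assignment):
            total += p
    return total

# A tuple present whenever x = 1 or y = 2:
# P = 0.35 + 0.35 + 0.15 = 0.85
print(tuple_probability(lambda a: a["x"] == 1 or a["y"] == 2))
```

Enumerating assignments like this is exponential in the number of variables; it only illustrates the semantics, not a practical evaluation strategy.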
A probabilistic framework for vague queries and imprecise information in databases
Proceedings of the 16th International Conference on Very Large Databases, 1990
Abstract

Cited by 58 (13 self)
A probabilistic learning model for vague queries and missing or imprecise information in databases is described. Instead of retrieving only a set of answers, our approach yields a ranking of objects from the database in response to a query. By using relevance judgements from the user about the objects retrieved, the ranking for the actual query as well as the overall retrieval quality of the system can be further improved. For specifying different kinds of conditions in vague queries, the notion of vague predicates is introduced. Based on the underlying probabilistic model, imprecise or missing attribute values can also be treated easily. In addition, the corresponding formulas can be applied in combination with standard predicates (from two-valued logic), thus extending standard database systems to cope with missing or imprecise data.
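The notion of a vague predicate, returning a match probability instead of a boolean so that answers can be ranked, might look like the following sketch. The linear decay shape and all names here are assumptions for illustration, not the paper's learned model:

```python
def vague_about(target, width):
    """Vague predicate 'approximately equal to target': the match
    probability decays linearly from 1 at the target to 0 at distance
    `width`. (An assumed shape; the paper learns such functions from
    relevance judgements.)"""
    def pred(value):
        return max(0.0, 1.0 - abs(value - target) / width)
    return pred

cheap = vague_about(target=100, width=50)  # "price around 100"
items = {"A": 95, "B": 130, "C": 200}

# Rank objects by match probability instead of filtering them out:
ranking = sorted(items, key=lambda k: cheap(items[k]), reverse=True)
print(ranking)  # ['A', 'B', 'C']  (scores 0.9, 0.4, 0.0)
```

A missing attribute value would simply contribute a prior match probability rather than failing the predicate outright, which is the sense in which the probabilistic model handles imprecise data uniformly.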
Probabilistic Deductive Databases
1994
Abstract

Cited by 57 (2 self)
Knowledge-base (KB) systems must typically deal with imperfection in knowledge, e.g., in the form of incompleteness, inconsistency, and uncertainty, to name a few. Currently, KB system development is mainly based on expert system technology. Expert systems, through their support for rule-based programming, uncertainty, etc., offer a convenient framework for KB system development. But they require the user to be well versed in the low-level details of system implementation. The manner in which uncertainty is handled has little mathematical basis. There is no decent notion of query optimization, forcing the user to take responsibility for an efficient implementation of the KB system. We contend KB system development can and should take advantage of deductive database technology, which overcomes most of the above limitations. An important problem here is to extend deductive databases to provide a systematic basis for rule-based programming with imperfect knowledge. In this paper, we are interested in an extension handling probabilistic knowledge.
Current Approaches to Handling Imperfect Information in Data and Knowledge Bases
1996
Abstract

Cited by 52 (1 self)
This paper surveys methods for representing and reasoning with imperfect information. It opens with an attempt to classify the different types of imperfection that may pervade data, and a discussion of the sources of such imperfections. The classification is then used as a framework for considering work that explicitly concerns the representation of imperfect information, and related work on how imperfect information may be used as a basis for reasoning. The work that is surveyed is drawn from both the field of databases and the field of artificial intelligence. Both of these areas have long been concerned with the problems caused by imperfect information, and this paper stresses the relationships between the approaches developed in each.
Query evaluation in probabilistic relational databases
Theoretical Computer Science, 1997
Abstract

Cited by 32 (0 self)
This paper describes a generalization of the relational model in order to capture and manipulate a type of probabilistic information. Probabilistic databases are formalized by means of logic theories based on a probabilistic first-order language proposed by Halpern. A sound and complete method is described for evaluating queries in probabilistic theories. The generalization proposed can be incorporated into existing relational systems with the addition of a component for manipulating propositional formulas.
On A Theory of Probabilistic Deductive Databases
Theory and Practice of Logic Programming, 2001
Abstract

Cited by 26 (0 self)
We propose a framework for modeling uncertainty where both belief and doubt can be given independent, first-class status. We adopt probability theory as the mathematical formalism for manipulating uncertainty. An agent can express the uncertainty in her knowledge about a piece of information in the form of a confidence level, consisting of a pair of intervals of probability, one for each of her belief and doubt. The space of confidence levels naturally leads to the notion of a trilattice, similar in spirit to Fitting's bilattices. Intuitively, the points in such a trilattice can be ordered according to truth, information, or precision. We develop a framework for probabilistic deductive databases by associating confidence levels with the facts and rules of a classical deductive database. While the trilattice structure offers a variety of choices for defining the semantics of probabilistic deductive databases, our choice of semantics is based on the truth-ordering, which we find to be closest to the classical framework for deductive databases. In addition to proposing a declarative semantics based on valuations and an equivalent semantics based on fixpoint theory, we also propose a proof procedure and prove it sound and complete. We show that while classical Datalog query programs have polynomial time data complexity, certain query programs in the probabilistic deductive database framework do not even terminate on some input databases. We identify a large natural class of query programs of practical interest in our framework, and show that programs in this class possess polynomial time data complexity, i.e., not only do they terminate on every input database, they are guaranteed to do so in a number of steps polynomial in the input database size.
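A confidence level as the abstract describes it, a pair of probability intervals with one for belief and one for doubt, together with a truth-ordering, can be sketched as follows. The representation and the componentwise comparison are illustrative guesses at the flavor of the paper's formal definitions, not the definitions themselves:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Confidence:
    belief: tuple  # (low, high) probability interval for belief
    doubt: tuple   # (low, high) probability interval for doubt

def truth_leq(a, b):
    """a <= b in a truth-ordering: b carries at least as much belief and
    at most as much doubt as a, compared componentwise on the intervals.
    (An assumed reading of 'ordered according to truth'.)"""
    return (a.belief[0] <= b.belief[0] and a.belief[1] <= b.belief[1]
            and a.doubt[0] >= b.doubt[0] and a.doubt[1] >= b.doubt[1])

weak = Confidence(belief=(0.2, 0.4), doubt=(0.3, 0.6))
strong = Confidence(belief=(0.6, 0.9), doubt=(0.0, 0.2))
print(truth_leq(weak, strong))  # True
```

An information-ordering or precision-ordering over the same points would compare interval widths instead, which is why the space forms a trilattice rather than a single lattice.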
Probabilistic Knowledge Bases
1992
Abstract

Cited by 22 (9 self)
We define a new fixpoint semantics for rule-based reasoning in the presence of imprecise information. We first demonstrate the need for such a rule-based semantics by showing a real-world application requiring such reasoning. We then define this semantics. Optimizations and approximations of the semantics are shown so as to make the semantics amenable to very large scale real-world applications. We finally prove that the semantics is probabilistic and reduces to the usual fixpoint semantics of stratified Datalog if all information is certain. Index terms: axiomatic probability theory, incomplete information, knowledge discovery in databases, logic programming, query optimization and approximation, stratified Datalog. 1 Introduction. Many real-world problems cannot be described or solved by deterministic information because of inherent vagueness (e.g., see [4, 10]). We demonstrate the truth of this statement in a real-world application to which we have successfully applied th...
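A minimal illustration of a fixpoint semantics over rules with certainties: each fact carries a certainty, a derived fact receives the product of its body certainties, and iteration runs to a fixpoint, collapsing to the ordinary stratified-Datalog fixpoint when every certainty is 1. The transitive-closure program, the product combination rule, and all names below are assumptions for illustration, not the paper's semantics:

```python
# Certain facts with certainties attached (assumed example):
facts = {("edge", "a", "b"): 0.9, ("edge", "b", "c"): 0.8}

def step(known):
    """One immediate-consequence step for the program:
       path(X,Y) :- edge(X,Y).
       path(X,Z) :- path(X,Y), edge(Y,Z).
    A derived fact's certainty is the product of its body certainties;
    alternative derivations keep the maximum."""
    derived = dict(known)
    for (rel, x, y), p in known.items():
        if rel == "edge":
            derived[("path", x, y)] = max(derived.get(("path", x, y), 0.0), p)
    for (r1, x, y), p in known.items():
        if r1 != "path":
            continue
        for (r2, y2, z), q in known.items():
            if r2 == "edge" and y2 == y:
                key = ("path", x, z)
                derived[key] = max(derived.get(key, 0.0), p * q)
    return derived

# Iterate to the fixpoint: stop when a step adds or changes nothing.
state = dict(facts)
while True:
    nxt = step(state)
    if nxt == state:
        break
    state = nxt

print(state[("path", "a", "c")])  # certainty of path(a, c): 0.9 * 0.8
```

With all certainties set to 1, every derived fact also has certainty 1 and the loop computes exactly the classical least fixpoint of the program.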
Knowledge Discovery in Databases
1994
Abstract

Cited by 8 (4 self)
This is a draft of a manuscript of a postgraduate course taught at the Hong Kong University of Science and Technology in Spring 1994. The course gives an introduction to the young and fascinating field of knowledge discovery in databases. The manuscript is suited for beginners, who can leave out the more advanced sections, as well as for people who would like to do research in this area. This manuscript is partly incomplete; in particular, the last section, discussing approaches to learning knowledge involving time, is missing. Contents: 1 Introduction (1.1 Course Outline; 1.2 Basic Notions; 1.3 A Case Study; 1.4 Outlook); 2 Rule Languages (2.1 Proposit...
Historical Indeterminacy
1992
Abstract

Cited by 7 (3 self)
In historical indeterminacy, it is known that an event stored in a temporal database did in fact occur, but it is not known exactly when the event occurred. We present the possible tuples data model, in which each indeterminate event is represented with a set of possible events that delimits when the event might have occurred, and a probability distribution over that set. We extend the TQuel query language with constructs that specify the user's credibility in the underlying historical data and the user's plausibility in the relationships among that data. We provide a formal tuple calculus semantics, and show that this semantics reduces to the determinate semantics. We outline an efficient representation of historical indeterminacy, and efficient query processing algorithms, demonstrating the practicality of our proposed approach. Department of Computer Science, University of Arizona, Tucson, AZ 85721. {curtis,rts}@cs.arizona.edu