Results 1–10 of 12
Probabilistic Default Reasoning with Conditional Constraints
 Annals of Mathematics and Artificial Intelligence
, 2000
Cited by 35 (20 self)
We present an approach to reasoning from statistical and subjective knowledge, which is based on a combination of probabilistic reasoning from conditional constraints with approaches to default reasoning from conditional knowledge bases. More precisely, we introduce the notions of z-, lexicographic, and conditional entailment for conditional constraints, which are probabilistic generalizations of Pearl's entailment in system Z, Lehmann's lexicographic entailment, and Geffner's conditional entailment, respectively. We show that the new formalisms have nice properties. In particular, they show a similar behavior as reference-class reasoning in a number of uncontroversial examples. The new formalisms, however, also avoid many drawbacks of reference-class reasoning. More precisely, they can handle complex scenarios and even purely probabilistic subjective knowledge as input. Moreover, conclusions are drawn in a global way from all the available knowledge as a whole. We then show that the new formalisms also have nice general nonmonotonic properties. In detail, the new notions of z-, lexicographic, and conditional entailment have similar properties as their classical counterparts. In particular, they all satisfy the rationality postulates proposed by Kraus, Lehmann, and Magidor, and they have some general irrelevance and direct inference properties. Moreover, the new notions of z- and lexicographic entailment satisfy the property of rational monotonicity. Furthermore, the new notions of z-, lexicographic, and conditional entailment are proper generalizations of both their classical counterparts and the classical notion of logical entailment for conditional constraints. Finally, we provide algorithms for reasoning under the new formalisms, and we analyze their computational com...
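The conditional constraints mentioned in this abstract are bounds of the form l ≤ Pr(ψ | φ) ≤ u. A minimal sketch of their satisfaction semantics over possible worlds (the event names, numbers, and dict encoding below are illustrative assumptions, not taken from the paper):

```python
# Sketch: check whether a probability distribution over possible worlds
# satisfies a conditional constraint l <= Pr(psi | phi) <= u.
# Worlds are frozensets of true atoms; psi and phi are sets of atoms that
# must all hold in a world for it to count toward the event.

def satisfies(dist, constraint):
    """dist: dict mapping world (frozenset of atoms) -> probability.
    constraint: (psi, phi, l, u)."""
    psi, phi, l, u = constraint
    p_phi = sum(p for w, p in dist.items() if phi <= w)
    if p_phi == 0:
        return True  # vacuously satisfied when Pr(phi) = 0
    p_psi_and_phi = sum(p for w, p in dist.items() if (phi | psi) <= w)
    return l <= p_psi_and_phi / p_phi <= u

# Worlds over the atoms {bird, fly}:
dist = {
    frozenset({"bird", "fly"}): 0.85,
    frozenset({"bird"}): 0.05,
    frozenset({"fly"}): 0.02,
    frozenset(): 0.08,
}
# "Birds fly with probability at least 0.9":
print(satisfies(dist, ({"fly"}, {"bird"}, 0.9, 1.0)))  # True: 0.85/0.9 ≈ 0.944
```

Entailment in the paper's sense then quantifies over all distributions satisfying a knowledge base, rather than checking one distribution as here.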
Decision Making in a Context where Uncertainty is Represented by Belief Functions.
, 2000
Cited by 26 (2 self)
A quantified model to represent uncertainty is incomplete if its use in a decision environment is not explained. When belief functions were first introduced to represent quantified uncertainty, no associated decision model was proposed. Since then, it has become clear that the meaning of belief functions is multiple. Models based on belief functions can be understood as an upper and lower probabilities model, as the hint model, as the transferable belief model, or as a probability model extended to modal propositions. These models are mathematically identical at the static level, but their behaviors diverge at the dynamic level (under conditioning and/or revision). For decision making, some authors argue that decisions must be based on expected utilities, in which case a probability function must be determined. When uncertainty is represented by belief functions, the choice of the appropriate probability function must be explained and justified. This probability function doe...
Learning Default Concepts
 In Proceedings of the Tenth Canadian Conference on Artificial Intelligence (CSCSI-94)
, 1994
Cited by 20 (7 self)
Classical concepts, based on necessary and sufficient defining conditions, cannot classify logically insufficient object descriptions. Many reasoning systems avoid this limitation by using "default concepts" to classify incompletely described objects. This paper addresses the task of learning such default concepts from observational data. We first model the underlying performance task, classifying incomplete examples, as a probabilistic process that passes random test examples through a "blocker" that can hide object attributes from the classifier. We then address the task of learning accurate default concepts from random training examples. After surveying the learning techniques that have been proposed for this task in the machine learning and knowledge representation literatures, and investigating their relative merits, we present a more data-efficient learning technique, developed from well-known statistical principles. Finally, we extend Valiant's PAC learning framework to ...
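The "blocker" in this model can be pictured as a process that independently hides each attribute of a test example with some probability. A toy sketch (attribute names and the hiding probability are assumptions for illustration, not the paper's parameters):

```python
# Illustrative blocker: each attribute of an example is hidden (replaced
# by None) independently with probability hide_prob, producing the
# incomplete descriptions that a default concept must classify.

import random

def block(example, hide_prob, rng):
    """example: dict attribute -> value. Returns a copy in which each
    attribute is hidden with probability hide_prob."""
    return {a: (None if rng.random() < hide_prob else v)
            for a, v in example.items()}

rng = random.Random(0)
example = {"has_wings": True, "lays_eggs": True, "flies": True}
print(block(example, 0.5, rng))
```

The learner never sees which attributes were hidden by chance versus genuinely unknown, which is what makes classifying the blocked examples a distinct performance task.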
A Data Model and Algebra for Probabilistic Complex Values
 Annals of Mathematics and Artificial Intelligence
, 2000
Cited by 17 (4 self)
We present a probabilistic data model for complex values. More precisely, we introduce probabilistic complex value relations, which combine the concept of probabilistic relations with the idea of complex values in a uniform framework. We elaborate a model-theoretic definition of probabilistic combination strategies, which has a rigorous foundation on probability theory. We then define an algebra for querying database instances, which comprises the operations of selection, projection, renaming, join, Cartesian product, union, intersection, and difference. We prove that our data model and algebra for probabilistic complex values generalize the classical relational data model and algebra. Moreover, we show that under certain assumptions, all our algebraic operations are tractable. We finally show that most of the query equivalences of classical relational algebra carry over to our algebra on probabilistic complex value relations. Hence, query optimization techniques for classical relational algebra can easily be applied to optimize queries on probabilistic complex value relations.
Keywords: Complex value databases, probabilistic databases, data model, relational algebra, query languages.
AMS Subject classification: Primary 68P15, 68P20; Secondary 68T30, 68T37
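Combination strategies of the kind mentioned here specify how interval probabilities of two events combine under different dependence assumptions. A sketch of three standard conjunction strategies from the interval-probability literature (the formulas are the usual ones, not quoted from this paper; the numbers are illustrative):

```python
# Conjunction strategies for events with probabilities in [l1, u1], [l2, u2].

def conj_independence(l1, u1, l2, u2):
    # Events assumed independent: interval endpoints multiply.
    return (l1 * l2, u1 * u2)

def conj_ignorance(l1, u1, l2, u2):
    # Nothing known about dependence: Frechet bounds.
    return (max(0.0, l1 + l2 - 1.0), min(u1, u2))

def conj_positive_correlation(l1, u1, l2, u2):
    # One event assumed to imply the other: the conjunction behaves
    # like the "smaller" event.
    return (min(l1, l2), min(u1, u2))

print(conj_independence(0.6, 0.8, 0.5, 0.7))
print(conj_ignorance(0.6, 0.8, 0.5, 0.7))
print(conj_positive_correlation(0.6, 0.8, 0.5, 0.7))
```

A query algebra parameterized by such strategies lets the user state the dependence assumption per join or product rather than baking one assumption into the semantics.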
Knowledge Representation in the TRAINS93 Conversation System
Cited by 12 (7 self)
We describe the goals, architecture, and functioning of the TRAINS93 system, with emphasis on the representational issues involved in putting together a complex language processing and reasoning agent. The system is intended as an experimental prototype of an intelligent, conversationally proficient planning advisor in a dynamic domain of cargo trains and factories. For this team effort, our strategy at the outset was to let the designers of the various language processing, discourse processing, plan reasoning, execution and monitoring modules choose whatever representations seemed best suited for their tasks, but with the constraint that all should strive for principled, general approaches. Disparities between modules were bridged by careful design of the interfaces, based on regular in-depth discussion of issues encountered by the participants. Because of the goal of generality and principled representation, the multiple representations ended up with a good deal in common ...
The Value of Using Imprecise Probabilities in Engineering Design
 ASME 2005 DETC DTM
, 2005
Cited by 12 (6 self)
Imprecision, imprecise probabilities, epistemic uncertainty, aleatory uncertainty, engineering
Learning to Classify Incomplete Examples
 In Computational Learning Theory and Natural Learning Systems: Addressing Real World Tasks
, 1993
Cited by 10 (2 self)
Most research on supervised learning assumes the attributes of training and test examples are completely specified. Real-world data, however, is often incomplete. This paper studies the task of learning to classify incomplete test examples, given incomplete (resp., complete) training data. We first show that the performance task of classifying incomplete examples requires the use of default classification functions, which demonstrate nonmonotonic classification behavior. We then extend the standard PAC-learning model to allow attribute values to be hidden from the classifier, investigate the robustness of various learning strategies, and study the sample complexity of learning classes of default classification functions from examples.

1 Introduction
The central task of most expert systems is classifying objects from some domain of application; i.e., determining whether a particular object belongs to a specified class, given a description of that object (Clancey, 1985). For example, a ...
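The nonmonotonicity claim can be seen in a toy default classification function (the bird/penguin rules below are a standard illustration assumed here, not an example from the paper): adding information to a partial description can reverse the predicted class.

```python
# A default classification function over incomplete descriptions:
# missing attributes simply do not appear in the dict.

def classify_flies(desc):
    """desc: dict of known attribute -> value.
    Default rules: birds fly by default, but penguins do not."""
    if desc.get("is_penguin"):
        return False
    if desc.get("is_bird"):
        return True   # default conclusion from the incomplete description
    return False

partial = {"is_bird": True}
fuller = {"is_bird": True, "is_penguin": True}
print(classify_flies(partial))  # True
print(classify_flies(fuller))   # False: more information retracts the default
```

A classical (monotonic) concept defined by necessary and sufficient conditions could never flip from positive to negative as the description grows, which is why defaults are needed for this performance task.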
Databases for Interval Probabilities
, 2004
Cited by 4 (0 self)
We present a database framework for the efficient storage and manipulation of interval probability distributions and their associated information. While work on interval probabilities and on probabilistic databases has appeared before, ours is the first to combine these into a coherent and mathematically sound framework including both standard relational queries and queries based on probability theory. In particular, our query algebra allows users not only to query existing interval probability distributions, but also to construct new ones by means of conditionalization and marginalization, as well as other more common database ...
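Marginalization over interval distributions can be sketched very simply: summing the lower and upper bounds over the eliminated variable gives conservative interval bounds for the marginal. This is a simplified assumption, not the paper's exact semantics; the possible-worlds constraint that some point distribution within the intervals sums to 1 can tighten these bounds.

```python
# Conservative marginalization of an interval joint distribution.

def marginalize(joint, keep_index):
    """joint: dict mapping a tuple of outcomes -> (low, high) interval.
    keep_index: which position of the tuple to keep."""
    marginal = {}
    for outcomes, (lo, hi) in joint.items():
        key = outcomes[keep_index]
        l, h = marginal.get(key, (0.0, 0.0))
        marginal[key] = (min(1.0, l + lo), min(1.0, h + hi))
    return marginal

# Illustrative joint over (weather, arrival):
joint = {("rain", "late"): (0.2, 0.3),
         ("rain", "on_time"): (0.1, 0.2),
         ("dry", "late"): (0.1, 0.2),
         ("dry", "on_time"): (0.3, 0.5)}
print(marginalize(joint, 0))
```

Storing the marginal as a first-class interval relation is what lets such constructed distributions be queried again by the same algebra.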
Conditionalization for Interval Probabilities
 In Proc. Workshop on Conditionals, Information, and Inference
, 2002
Cited by 2 (2 self)
Conditionalization, i.e., the computation of a conditional probability distribution given a joint probability distribution of two or more random variables, is an important operation in some probabilistic database models. While the computation of the conditional probability distribution is straightforward when exact point probabilities are involved, it is often the case that such exact point probability distributions of random variables are not known; instead, each probability is known only to lie in a particular interval. This paper investigates the conditionalization operation for interval probability distribution functions under a possible-worlds semantics. In particular, given a joint probability distribution of two or more random variables, where the probability of each outcome is represented as an interval, we (i) provide formal model-theoretic semantics; (ii) define the operation of conditionalization; and (iii) provide a closed-form solution/efficient algorithm to compute the conditional probability distribution.
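A simplified sketch of conditioning under interval probabilities (these are the standard per-outcome bounds; they ignore the global constraint that outcomes outside the conditioning event must also fit their intervals, which can tighten the result, so this is not the paper's exact closed form):

```python
# For an outcome x in event B with per-outcome intervals [l_y, u_y],
# P(x | B) is bounded by
#   lower = l_x / (l_x + sum of u_y over other y in B)
#   upper = u_x / (u_x + sum of l_y over other y in B)

def conditional_interval(intervals, b_outcomes, x):
    """intervals: dict outcome -> (low, high); b_outcomes: outcomes of event B."""
    lx, ux = intervals[x]
    others = [intervals[y] for y in b_outcomes if y != x]
    sum_u = sum(u for _, u in others)
    sum_l = sum(l for l, _ in others)
    lower = lx / (lx + sum_u) if lx + sum_u > 0 else 0.0
    upper = ux / (ux + sum_l) if ux + sum_l > 0 else 1.0
    return (lower, upper)

# Illustrative intervals; condition on B = {a, b}:
intervals = {"a": (0.2, 0.4), "b": (0.1, 0.3), "c": (0.3, 0.5)}
print(conditional_interval(intervals, ["a", "b"], "a"))  # (0.4, 0.8)
```

The intuition: the conditional lower bound takes x at its least probable while the rest of B is at its most probable, and symmetrically for the upper bound.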
Combinatorial Semantics: Semantics for Frequent Validity
 Computational Intelligence
, 1993
INTRODUCTION
Uncertain reasoning and uncertain argument, as we are concerned with them here, are reasoning and argument in which the object is to establish the credibility or acceptability of a conclusion on the basis of an argument from premises that do not entail that conclusion. Other terms for the process: inductive reasoning, scientific reasoning, nonmonotonic reasoning, probabilistic reasoning. What we seek to characterize is that general form of argument that will lead to conclusions that are worth accepting, but that may, on the basis of new evidence, be withdrawn. What is explicitly excluded from probabilistic reasoning, in the sense under discussion, is reasoning from one probability statement to another. Genesereth and Nilsson [GN87] and Nilsson [Nil86], for example, offer as an example of their "probabilistic logic" the way in which constraints on the probability of Q can be established on the basis of probabilities for P and for ...
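The probabilistic-logic example alluded to here has a well-known closed form: given P(P) = a and P(P → Q) = b for the material conditional, the tightest derivable bounds are max(0, a + b − 1) ≤ P(Q) ≤ b. A worked sketch (the numeric inputs are illustrative assumptions):

```python
# Bounds on P(Q) derivable from P(P) = a and P(P -> Q) = b, where "->" is
# the material conditional (not-P or Q). Lower bound: P(Q) >= P(P and Q)
# >= a + b - 1 (clipped at 0). Upper bound: Q implies (not-P or Q), so
# P(Q) <= b. Both bounds are attained by some distribution over the four
# truth assignments to P and Q.

def q_bounds(a, b):
    lower = max(0.0, a + b - 1.0)
    upper = b
    return (lower, upper)

print(q_bounds(0.7, 0.9))  # P(Q) lies in roughly [0.6, 0.9]
```

This is exactly the probability-to-probability inference that combinatorial semantics, as described above, sets aside in favor of arguments that confer acceptability on non-probabilistic conclusions.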