Results 1–10 of 155
An Analysis of First-Order Logics of Probability
 Artificial Intelligence
, 1990
Cited by 271 (18 self)
We consider two approaches to giving semantics to first-order logics of probability. The first approach puts a probability on the domain, and is appropriate for giving semantics to formulas involving statistical information such as "The probability that a randomly chosen bird flies is greater than .9." The second approach puts a probability on possible worlds, and is appropriate for giving semantics to formulas describing degrees of belief, such as "The probability that Tweety (a particular bird) flies is greater than .9." We show that the two approaches can be easily combined, allowing us to reason in a straightforward way about statistical information and degrees of belief. We then consider axiomatizing these logics. In general, it can be shown that no complete axiomatization is possible. We provide axiom systems that are sound and complete in cases where a complete axiomatization is possible, showing that they do allow us to capture a great deal of interesting reasoning about prob...
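The two semantics the abstract contrasts can be sketched in a few lines of code. This is a toy illustration with invented numbers, not anything from the paper itself:

```python
# Type-1 semantics: probability on the domain (statistical information).
# "The probability that a randomly chosen bird flies" is the fraction of
# domain elements satisfying Flies.
birds = {"b1": True, "b2": True, "b3": True, "b4": False}  # bird -> flies?
p_random_bird_flies = sum(birds.values()) / len(birds)

# Type-2 semantics: probability on possible worlds (degree of belief).
# "The probability that Tweety flies" sums the weights of the worlds in
# which Tweety flies.
worlds = [(0.6, True), (0.3, True), (0.1, False)]  # (world weight, Tweety flies?)
p_tweety_flies = sum(weight for weight, flies in worlds if flies)

print(p_random_bird_flies)  # 0.75
print(p_tweety_flies)       # 0.9, up to float rounding
```

The combined logic the paper develops lets both kinds of statement appear in one formula; the sketch only shows why the two probability spaces are genuinely different objects.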
Fundamental Concepts of Qualitative Probabilistic Networks
 ARTIFICIAL INTELLIGENCE
, 1990
Cited by 119 (6 self)
Graphical representations for probabilistic relationships have recently received considerable attention in AI. Qualitative probabilistic networks abstract from the usual numeric representations by encoding only qualitative relationships, which are inequality constraints on the joint probability distribution over the variables. Although these constraints are insufficient to determine probabilities uniquely, they are designed to justify the deduction of a class of relative likelihood conclusions that imply useful decision-making properties. Two types of qualitative relationship are defined, each a probabilistic form of monotonicity constraint over a group of variables. Qualitative influences describe the direction of the relationship between two variables. Qualitative synergies describe interactions among influences. The probabilistic definitions chosen justify sound and efficient inference procedures based on graphical manipulations of the network. These procedures answer queries about qualitative relationships among variables separated in the network and determine structural properties of optimal assignments to decision variables.
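A minimal sketch of how such qualitative influences compose along a chain of arcs. The sign-product table below is the standard one for qualitative networks; it is an illustration of the idea, not code from the paper:

```python
# Composing qualitative influences along a chain: like signs yield a
# positive net influence, unlike signs a negative one; a zero influence
# anywhere kills the chain, and an ambiguous sign stays ambiguous.
def compose(s1: str, s2: str) -> str:
    if "0" in (s1, s2):
        return "0"
    if "?" in (s1, s2):
        return "?"
    return "+" if s1 == s2 else "-"

# Smoking (+)-> tar deposits (+)-> cancer: a positive net influence;
# exercise (-)-> obesity (+)-> heart disease: a negative one.
print(compose("+", "+"))  # +
print(compose("-", "+"))  # -
```

Purely graphical propagation of these signs is what makes the paper's inference procedures efficient: no numeric probabilities are ever manipulated.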
PROBABILISTIC PREDICATE TRANSFORMERS
, 1995
Cited by 107 (32 self)
Predicate transformers facilitate reasoning about imperative programs, including those exhibiting demonic nondeterministic choice. Probabilistic predicate transformers extend that facility to programs containing probabilistic choice, so that one can in principle determine not only whether a program is guaranteed to establish a certain result, but also its probability of doing so. We bring together independent work of Claire Jones and Jifeng He, showing how their constructions can be made to correspond; from that link between a predicate-based and a relation-based view of probabilistic execution we are able to propose `probabilistic healthiness conditions', generalising those of Dijkstra for ordinary predicate transformers. The associated calculus seems suitable for exploring further the rigorous derivation of imperative probabilistic programs.
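The flavor of the calculus can be sketched as follows (a hypothetical illustration, not the paper's definitions): postconditions become [0,1]-valued functions on states, probabilistic choice mixes the results, and demonic choice keeps only the guaranteed minimum.

```python
# Hypothetical sketch of probabilistic predicate transformers. A
# "probabilistic predicate" maps each state to the probability (in [0,1])
# of establishing the postcondition when started from that state.

def prob_choice(p, t1, t2):
    """Program 't1 [p] t2': run t1 with probability p, else t2."""
    return lambda s: p * t1(s) + (1 - p) * t2(s)

def demonic_choice(t1, t2):
    """Demonic nondeterminism: only the worse guarantee survives."""
    return lambda s: min(t1(s), t2(s))

# From any state, prog1 establishes the goal with certainty, prog2 never.
prog1 = lambda s: 1.0
prog2 = lambda s: 0.0

coin = prob_choice(0.5, prog1, prog2)   # fair coin between the two
worst = demonic_choice(prog1, coin)     # an adversary may pick the coin
print(coin("s0"))   # 0.5
print(worst("s0"))  # 0.5
```

The `min` in demonic choice is exactly where the classical "guaranteed to establish" reading survives: the adversary drives the probability down to the worst case.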
Knowledge, probability, and adversaries
 Journal of the ACM
, 1993
Cited by 71 (23 self)
Abstract: What should it mean for an agent to know or believe an assertion is true with probability .99? Different papers [FH88, FZ88a, HMT88] give different answers, choosing to use quite different probability spaces when computing the probability that an agent assigns to an event. We show that each choice can be understood in terms of a betting game. This betting game itself can be understood in terms of three types of adversaries influencing three different aspects of the game. The first selects the outcome of all nondeterministic choices in the system; the second represents the knowledge of the agent's opponent in the betting game (this is the key place the papers mentioned above differ); the third is needed in asynchronous systems to choose the time the bet is placed. We illustrate the need for considering all three types of adversaries with a number of examples. Given a class of adversaries, we show how to assign probability spaces to agents in a way most appropriate for that class, where "most appropriate" is made precise in terms of this betting game. We conclude by showing how different assignments of probability spaces (corresponding to different opponents) yield different levels of guarantees in probabilistic coordinated attack.
Anonymity and Information Hiding in Multiagent Systems
, 2003
Cited by 67 (2 self)
We provide a framework for reasoning about information-hiding requirements in multiagent systems and for reasoning about anonymity in particular. Our framework employs the modal logic of knowledge within the context of the runs and systems framework, much in the spirit of our earlier work on secrecy [9]. We give several definitions of anonymity with respect to agents, actions, and observers in multiagent systems, and we relate our definitions of anonymity to other definitions of information hiding, such as secrecy. We also give probabilistic definitions of anonymity that are able to quantify an observer's uncertainty about the state of the system. Finally, we relate our definitions of anonymity to other formalizations of anonymity and information hiding, including definitions of anonymity in the process algebra CSP and definitions of information hiding using function views.
Anytime Deduction for Probabilistic Logic
 Artificial Intelligence
, 1994
Cited by 62 (1 self)
This paper proposes and investigates an approach to deduction in probabilistic logic, using as its medium a language that generalizes the propositional version of Nilsson's probabilistic logic by incorporating conditional probabilities. Unlike many other approaches to deduction in probabilistic logic, this approach is based on inference rules and therefore can produce proofs to explain how conclusions are drawn. We show how these rules can be incorporated into an anytime deduction procedure that proceeds by computing increasingly narrow probability intervals that contain the tightest entailed probability interval. Since the procedure can be stopped at any time to yield partial information concerning the probability range of any entailed sentence, one can make a tradeoff between precision and computation time. The deduction method presented here contrasts with other methods whose ability to perform logical reasoning is either limited or requires finding all truth assignments consistent ...
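One inference-rule step of this interval-narrowing kind can be illustrated with the classical Fréchet bounds for a conjunction. This shows the style of rule an anytime procedure applies; it is not a rule quoted from the paper:

```python
def conjunction_bounds(p_a: float, p_b: float):
    """Tightest interval entailed for P(A and B) when all that is known is
    P(A) = p_a and P(B) = p_b: the classical Frechet bounds."""
    return max(0.0, p_a + p_b - 1.0), min(p_a, p_b)

lo, hi = conjunction_bounds(0.9, 0.8)
print((lo, hi))  # roughly (0.7, 0.8)
```

Each applicable rule can only shrink the current interval, so stopping early yields a sound (if loose) bound on the entailed probability, which is what makes the procedure "anytime".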
Managing Uncertainty and Vagueness in Description Logics for the Semantic Web
, 2007
Cited by 58 (7 self)
Ontologies play a crucial role in the development of the Semantic Web as a means for defining shared terms in web resources. They are formulated in web ontology languages, which are based on expressive description logics. Significant research efforts in the semantic web community have recently been directed towards representing and reasoning with uncertainty and vagueness in ontologies for the Semantic Web. In this paper, we give an overview of approaches in this context to managing probabilistic uncertainty, possibilistic uncertainty, and vagueness in expressive description logics for the Semantic Web.
Probabilistic Deductive Databases
, 1994
Cited by 57 (2 self)
Knowledge-base (KB) systems must typically deal with imperfection in knowledge, e.g. in the form of incompleteness, inconsistency, and uncertainty, to name a few. Currently KB system development is mainly based on the expert system technology. Expert systems, through their support for rule-based programming, uncertainty, etc., offer a convenient framework for KB system development. But they require the user to be well versed with the low-level details of system implementation. The manner in which uncertainty is handled has little mathematical basis. There is no decent notion of query optimization, forcing the user to take the responsibility for an efficient implementation of the KB system. We contend KB system development can and should take advantage of the deductive database technology, which overcomes most of the above limitations. An important problem here is to extend deductive databases into providing a systematic basis for rule-based programming with imperfect knowledge. In this paper, we are interested in an extension handling probabilistic knowledge.
Random Worlds and Maximum Entropy
 In Proc. 7th IEEE Symp. on Logic in Computer Science
, 1994
Cited by 49 (12 self)
Given a knowledge base KB containing first-order and statistical facts, we consider a principled method, called the random-worlds method, for computing a degree of belief that some formula φ holds given KB. If we are reasoning about a world or system consisting of N individuals, then we can consider all possible worlds, or first-order models, with domain {1, ..., N} that satisfy KB, and compute the fraction of them in which φ is true. We define the degree of belief to be the asymptotic value of this fraction as N grows large. We show that when the vocabulary underlying φ and KB uses constants and unary predicates only, we can naturally associate an entropy with each world. As N grows larger, there are many more worlds with higher entropy. Therefore, we can use a maximum-entropy computation to compute the degree of belief. This result is in a similar spirit to previous work in physics and artificial intelligence, but is far more general. Of equal interest to the result itself are...
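For a single unary predicate and a small fixed N, the method can be sketched by brute-force enumeration. This is a toy illustration; the KB and query φ below are invented, and the paper's actual contribution is the asymptotic/maximum-entropy analysis that avoids such enumeration:

```python
from itertools import product

def degree_of_belief(n, kb, phi):
    """Random-worlds sketch: enumerate every interpretation of one unary
    predicate over domain {1, ..., n}, keep the worlds satisfying kb, and
    return the fraction of those models in which phi holds."""
    worlds = [dict(zip(range(1, n + 1), bits))
              for bits in product([False, True], repeat=n)]
    models = [w for w in worlds if kb(w)]
    return sum(phi(w) for w in models) / len(models)

# Invented KB: at least one individual satisfies P.
# Invented query phi: individual 1 satisfies P.
belief = degree_of_belief(4, kb=lambda w: any(w.values()), phi=lambda w: w[1])
print(belief)  # 8/15, roughly 0.533
```

With 4 individuals there are 16 worlds, 15 of which satisfy the KB, and individual 1 has P in 8 of those, giving 8/15; the paper studies what happens to such fractions as N grows.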
Probabilistic Logic Programming
 In Proc. of the 13th European Conf. on Artificial Intelligence (ECAI-98)
, 1998
Cited by 45 (11 self)
We present a new approach to probabilistic logic programs with a possible worlds semantics. Classical program clauses are extended by a subinterval of [0, 1] that describes the range for the conditional probability of the head of a clause given its body. We show that deduction in the defined probabilistic logic programs is computationally more complex than deduction in classical logic programs. More precisely, restricted deduction problems that are P-complete for classical logic programs are already NP-hard for probabilistic logic programs. We then elaborate a linear programming approach to probabilistic deduction that is efficient in interesting special cases. In the best case, the generated linear programs have a number of variables that is linear in the number of ground instances of purely probabilistic clauses in a probabilistic logic program. There is already a quite extensive literature on probabilistic propositional logics and their various dialects. The most fa...
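The clause semantics described above can be sketched as a satisfaction check. This is an invented illustration of the reading "a distribution over possible worlds satisfies clause (H ← B)[l, u] when the conditional probability of H given B falls in [l, u]", not code from the paper:

```python
def satisfies_clause(dist, head, body, lo, hi):
    """dist: list of (probability, world) pairs summing to 1; head/body:
    predicates on worlds. The clause (head <- body)[lo, hi] holds iff
    P(head and body) / P(body) lies in [lo, hi] (vacuously if P(body)=0)."""
    p_body = sum(p for p, w in dist if body(w))
    p_both = sum(p for p, w in dist if body(w) and head(w))
    if p_body == 0:
        return True
    return lo <= p_both / p_body <= hi

# Invented two-atom example: each world assigns truth to 'bird' and 'flies'.
dist = [(0.5, {"bird": True, "flies": True}),
        (0.2, {"bird": True, "flies": False}),
        (0.3, {"bird": False, "flies": False})]
ok = satisfies_clause(dist, head=lambda w: w["flies"],
                      body=lambda w: w["bird"], lo=0.6, hi=0.9)
print(ok)  # True: P(flies | bird) = 0.5 / 0.7, roughly 0.714
```

The paper's linear programming approach works in the other direction: given a set of such interval clauses, it bounds the tightest interval entailed for a query over all distributions satisfying every clause.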