Results 1 – 10 of 70
A Logic of Argumentation for Reasoning under Uncertainty.
 Computational Intelligence
, 1995
Abstract

Cited by 107 (3 self)
We present the syntax and proof theory of a logic of argumentation, LA. We also outline the development of a category theoretic semantics for LA. LA is the core of a proof theoretic model for reasoning under uncertainty. In this logic, propositions are labelled with a representation of the arguments which support their validity. Arguments may then be aggregated to collect more information about the potential validity of the propositions of interest. We make the notion of aggregation primitive to the logic, and then define strength mappings from sets of arguments to one of a number of possible dictionaries. This provides a uniform framework which incorporates a number of numerical and symbolic techniques for assigning subjective confidences to propositions on the basis of their supporting arguments. These aggregation techniques are also described, with examples. Key words: Uncertain reasoning, epistemic probability, argumentation, nonclassical logics, nonmonotonic reasoning 1. Introd...
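A minimal sketch of the labelling-and-aggregation idea the abstract describes. The data layout and the noisy-or style strength mapping below are illustrative assumptions, not LA's actual proof theory or one of its dictionaries.

```python
# Illustrative only: a proposition labelled with its supporting arguments,
# each argument a (grounds, subjective confidence) pair. The aggregation
# rule below (noisy-or) is one possible numeric strength mapping, assumed
# here for illustration; it is not taken from the LA paper.

def aggregate(arguments):
    """Combine independent argument confidences into one strength in [0, 1]."""
    residual_doubt = 1.0
    for _grounds, confidence in arguments:
        residual_doubt *= (1.0 - confidence)
    return 1.0 - residual_doubt

support_for_p = [("sensor reading", 0.5), ("expert opinion", 0.6)]
print(aggregate(support_for_p))  # two moderate arguments aggregate to 0.8
```

The point of making aggregation primitive is visible even in this toy: the label (the argument set) is kept, and different strength mappings can be applied to the same label.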
Plausibility Measures and Default Reasoning
 Journal of the ACM
, 1996
Abstract

Cited by 79 (12 self)
this paper: default reasoning. In recent years, a number of different semantics for defaults have been proposed, such as preferential structures, ε-semantics, possibilistic structures, and rankings, that have been shown to be characterized by the same set of axioms, known as the KLM properties. While this was viewed as a surprise, we show here that it is almost inevitable. In the framework of plausibility measures, we can give a necessary condition for the KLM axioms to be sound, and an additional condition necessary and sufficient to ensure that the KLM axioms are complete. This additional condition is so weak that it is almost always met whenever the axioms are sound. In particular, it is easily seen to hold for all the proposals made in the literature. Categories and Subject Descriptors: F.4.1 [Mathematical Logic and Formal Languages]:
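For reference, the KLM properties (system P) mentioned in the abstract are standardly stated as the following rules for a default conditional |~:

```latex
\begin{align*}
&\text{REF:} && \varphi \mathrel{|\sim} \varphi\\
&\text{LLE:} && \text{if } \models \varphi \leftrightarrow \varphi' \text{ and } \varphi \mathrel{|\sim} \psi, \text{ then } \varphi' \mathrel{|\sim} \psi\\
&\text{RW:}  && \text{if } \models \psi \rightarrow \psi' \text{ and } \varphi \mathrel{|\sim} \psi, \text{ then } \varphi \mathrel{|\sim} \psi'\\
&\text{AND:} && \text{if } \varphi \mathrel{|\sim} \psi_1 \text{ and } \varphi \mathrel{|\sim} \psi_2, \text{ then } \varphi \mathrel{|\sim} \psi_1 \wedge \psi_2\\
&\text{OR:}  && \text{if } \varphi_1 \mathrel{|\sim} \psi \text{ and } \varphi_2 \mathrel{|\sim} \psi, \text{ then } \varphi_1 \vee \varphi_2 \mathrel{|\sim} \psi\\
&\text{CM:}  && \text{if } \varphi \mathrel{|\sim} \psi_1 \text{ and } \varphi \mathrel{|\sim} \psi_2, \text{ then } \varphi \wedge \psi_1 \mathrel{|\sim} \psi_2
\end{align*}
```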
The Paradoxical Success of Fuzzy Logic
 IEEE Expert
, 1993
Abstract

Cited by 69 (1 self)
Applications of fuzzy logic in heuristic control have been highly successful, but which aspects of fuzzy logic are essential to its practical usefulness? This paper shows that an apparently reasonable version of fuzzy logic collapses mathematically to two-valued logic. Moreover, there are few if any published reports of expert systems in real-world use that reason about uncertainty using fuzzy logic. It appears that the limitations of fuzzy logic have not been detrimental in control applications because current fuzzy controllers are far simpler than other knowledge-based systems. In the future, the technical limitations of fuzzy logic can be expected to become important in practice, and work on fuzzy controllers will also encounter several problems of scale already known for other knowledge-based systems.
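The tension behind the collapse result can be checked numerically. Under the standard fuzzy operators (min for conjunction, max for disjunction, 1−x for negation), the two sides of the classically valid equivalence ¬(A ∧ ¬B) ≡ B ∨ (¬A ∧ ¬B) — the equivalence used in the paper's theorem — generally take different truth values, so insisting that logically equivalent formulas get equal values forces a constraint:

```python
# Evaluate both sides of the classical equivalence
#   not(A and not B)  ==  B or (not A and not B)
# under min/max/complement fuzzy semantics.
def lhs(a, b):
    return 1.0 - min(a, 1.0 - b)          # t(not(A and not B))

def rhs(a, b):
    return max(b, min(1.0 - a, 1.0 - b))  # t(B or (not A and not B))

a, b = 0.3, 0.4
print(lhs(a, b), rhs(a, b))  # approx. 0.7 vs 0.6: the sides disagree
```

Requiring lhs = rhs for all A, B (as equal treatment of equivalents demands) is what drives the degrees of truth down to two values.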
Probabilistic argumentation systems
 Handbook of Defeasible Reasoning and Uncertainty Management Systems, Volume 5: Algorithms for Uncertainty and Defeasible Reasoning
, 2000
Abstract

Cited by 54 (33 self)
Different formalisms for solving problems of inference under uncertainty have been developed so far. The most popular numerical approach is the theory of Bayesian inference [42]. More general approaches are the Dempster-Shafer theory of evidence [51], and possibility theory [16], which is closely related to fuzzy systems.
Updating Probabilities
, 2002
Abstract

Cited by 52 (6 self)
As examples such as the Monty Hall puzzle show, applying conditioning to update a probability distribution on a "naive space", which does not take into account the protocol used, can often lead to counterintuitive results. Here we examine why. A criterion known as CAR ("coarsening at random") in the statistical literature characterizes when "naive" conditioning in a naive space works. We show that the CAR condition holds rather infrequently, and we provide a procedural characterization of it, by giving a randomized algorithm that generates all and only distributions for which CAR holds. This substantially extends previous characterizations of CAR. We also consider more generalized notions of update such as Jeffrey conditioning and minimizing relative entropy (MRE). We give a generalization of the CAR condition that characterizes when Jeffrey conditioning leads to appropriate answers, and show that there exist some very simple settings in which MRE essentially never gives the right results. This generalizes and interconnects previous results obtained in the literature on CAR and MRE.
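The Monty Hall discrepancy the abstract alludes to can be reproduced by exact enumeration. The protocol modelled below is the standard one (the host always opens an unchosen door hiding a goat, breaking ties uniformly), stated here as an assumption:

```python
from fractions import Fraction

# Player picks door 1. Enumerate (car location, door the host opens)
# with exact probabilities under the standard protocol.
joint = {}
for car in (1, 2, 3):
    p_car = Fraction(1, 3)
    options = [d for d in (2, 3) if d != car]   # goat doors the host may open
    for host in options:
        joint[(car, host)] = joint.get((car, host), 0) + p_car / len(options)

# Protocol-aware conditioning on the observation "host opened door 3":
p_host3 = sum(p for (car, host), p in joint.items() if host == 3)
p_car1_given_host3 = joint[(1, 3)] / p_host3          # 1/3, so switching wins 2/3

# Naive conditioning in the naive space (car location only) on "car != 3":
p_car1_naive = Fraction(1, 3) / (Fraction(1, 3) * 2)  # 1/2

print(p_car1_given_host3, p_car1_naive)  # prints 1/3 and 1/2
```

The naive space forgets *how* the information "the car is not behind door 3" was obtained, which is exactly what the CAR condition is about.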
Dempster-Shafer Theory for Sensor Fusion in Autonomous Mobile Robots
 IEEE Transactions on Robotics and Automation
Abstract

Cited by 44 (5 self)
This article presents the uncertainty management system used for the execution activity of the Sensor Fusion Effects (SFX) architecture. The SFX architecture is a generic sensor fusion system for autonomous mobile robots, suitable for a wide variety of sensors and environments. The execution activity uses the belief generated for a percept to either proceed with a task safely (e.g., navigate to a specific location), terminate the task (e.g., can't recognize the location), or investigate the situation further in the hopes of obtaining sufficient belief (e.g., what has changed?). Dempster-Shafer (DS) theory serves as the foundation for uncertainty management. The SFX implementation of DS theory incorporates evidence from sensor observations and domain knowledge into three levels of perceptual abstraction. It also makes use of the DS weight of conflict metric to prevent the robot from acting on faulty observations. Experiments with four types of sensor data collected by a mobil...
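A minimal sketch of generic DS combination and the conflict mass K that the weight-of-conflict metric is built from. The frame {A, B} and the numbers are invented for illustration; this is textbook Dempster-Shafer, not the SFX implementation:

```python
from math import log

def combine(m1, m2):
    """Dempster's rule: combine two mass functions keyed by frozenset focal elements."""
    unnormalized, conflict = {}, 0.0
    for s1, p1 in m1.items():
        for s2, p2 in m2.items():
            inter = s1 & s2
            if inter:
                unnormalized[inter] = unnormalized.get(inter, 0.0) + p1 * p2
            else:
                conflict += p1 * p2          # mass falling on the empty set
    combined = {s: p / (1.0 - conflict) for s, p in unnormalized.items()}
    return combined, conflict

# Two sensor reports over the frame {A, B} (illustrative numbers).
m1 = {frozenset("A"): 0.6, frozenset("AB"): 0.4}
m2 = {frozenset("B"): 0.3, frozenset("AB"): 0.7}
m, k = combine(m1, m2)
weight_of_conflict = log(1.0 / (1.0 - k))    # Shafer's Con = -log(1 - K)
print(k)                                     # conflict mass K, approx. 0.18
```

A large K (and hence a large weight of conflict) is precisely the signal a system like SFX can use to refuse to act on mutually contradictory observations.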
Localized Partial Evaluation of Belief Networks
, 1995
Abstract

Cited by 43 (1 self)
Most algorithms for propagating evidence through belief networks have been exact and exhaustive: they produce an exact (point-valued) marginal probability for every node in the network. Often, however, an application will not need information about every node in the network nor will it need exact probabilities. We present the localized partial evaluation (LPE) propagation algorithm, which computes interval bounds on the marginal probability of a specified query node by examining a subset of the nodes in the entire network. Conceptually, LPE ignores parts of the network that are "too far away" from the queried node to have much impact on its value. LPE has the "anytime" property of being able to produce better solutions (tighter intervals) given more time to consider more of the network.

1 Introduction

Belief networks provide a way of encoding knowledge about the probabilistic dependencies and independencies of a set of variables in some domain. Variables are encoded as nodes in the ne...
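The interval idea can be illustrated on a two-node network: if the distribution of an unexplored parent is left as the vacuous interval [0, 1], the query node's marginal is bounded by the extremes of its CPT rows, and tightening the parent interval tightens the bound. The toy propagation below is an illustration of that idea only, not the LPE algorithm; the CPT numbers are invented:

```python
# Bound P(Q=true) = sum_p P(Q=true|p) * P(p) when P(parent=true) is only
# known to lie in [lo, hi]. Since the marginal is linear in P(parent=true),
# the extremes occur at the interval endpoints.
def marginal_bounds(p_q_given_parent, parent_interval):
    p_if_true, p_if_false = p_q_given_parent[True], p_q_given_parent[False]
    lo, hi = parent_interval
    candidates = [p_if_true * t + p_if_false * (1.0 - t) for t in (lo, hi)]
    return min(candidates), max(candidates)

cpt = {True: 0.9, False: 0.2}            # P(Q=true | parent), illustrative
print(marginal_bounds(cpt, (0.0, 1.0)))  # vacuous parent: approx. (0.2, 0.9)
print(marginal_bounds(cpt, (0.4, 0.6)))  # tighter parent: approx. (0.48, 0.62)
```

This mirrors LPE's anytime behaviour in miniature: examining more of the network corresponds to shrinking the unknown intervals, which can only tighten the answer.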
Generic Fuzzy Reasoning Nets as a Basis for Reverse Engineering Relational Database Applications
 In Proc. of European Software Engineering Conference (ESEC/FSE), number 1302 in LNCS
, 1997
Abstract

Cited by 34 (15 self)
Object-oriented technology has become mature enough to satisfy many new requirements coming from areas like computer-aided design (CAD), computer-integrated manufacturing (CIM), and software engineering (SE). However, a competitive information management infrastructure often demands merging data from CAD, CIM, or SE systems with business data stored in a relational system. One approach to the seamless integration of object-oriented and relational systems is to migrate from a relational to an object-oriented system. The first step in this migration process is reverse engineering of the legacy database. In this paper we propose a new graphical and executable language called Generic Fuzzy Reasoning Nets for modelling and applying reverse engineering knowledge. In particular, this language makes it possible to define and analyse fuzzy knowledge, which is usually all that is available when an existing database schema has to be reverse engineered into an object-oriented one. The analysis process is base...
Defining Relative Likelihood in Partially-Ordered Preferential Structures
 Journal of Artificial Intelligence Research
, 1997
Abstract

Cited by 30 (1 self)
Starting with a likelihood or preference order on worlds, we extend it to a likelihood ordering on sets of worlds in a natural way, and examine the resulting logic. Lewis earlier considered such a notion of relative likelihood in the context of studying counterfactuals, but he assumed a total preference order on worlds. Complications arise when examining partial orders that are not present for total orders. There are subtleties involving the exact approach to lifting the order on worlds to an order on sets of worlds. In addition, the axiomatization of the logic of relative likelihood in the case of partial orders gives insight into the connection between relative likelihood and default reasoning.

1. Introduction

A preference order on a set W of worlds is a reflexive, transitive relation on W. Various readings have been given to the relation in the literature; u ⪰ v has been interpreted as "u at least as preferred or desirable as v" (Kraus, Lehmann, & Magidor, 1990; Doyle, Shoham, & ...
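One natural way to lift an order on worlds to an order on sets is to call U at least as likely as V when every world in V is dominated by some world in U. This is an illustrative choice only; the paper's point is precisely that several such lifts exist and behave differently on partial orders:

```python
# A partial preference order given as a set of pairs (u, v), read
# "u is at least as preferred as v". The lift below is one candidate
# definition, chosen for illustration, not necessarily the paper's.
def at_least_as_preferred(order, u, v):
    return u == v or (u, v) in order

def set_dominates(order, U, V):
    """U >= V iff every world in V is dominated by some world in U."""
    return all(any(at_least_as_preferred(order, u, v) for u in U) for v in V)

# Partial order: a above b, a above c; b and c incomparable.
order = {("a", "b"), ("a", "c")}
print(set_dominates(order, {"a"}, {"b", "c"}))  # True: a dominates both
print(set_dominates(order, {"b"}, {"c"}))       # False: b, c incomparable
```

On a total order, reasonable lifts tend to coincide; the incomparable pair b, c is where the subtleties the abstract mentions begin.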
What is a Forest? On the vagueness of certain geographic concepts
 Topoi
, 2002
Abstract

Cited by 27 (2 self)
The paper examines ways in which the meanings of geographical concepts are affected by the phenomenon of vagueness. A logical analysis based on the theory of supervaluation semantics is developed and employed to describe differences and logical dependencies between different senses of vague concepts. Particular attention is given to analysing the concept of 'forest', which exhibits many kinds of vagueness.
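Supervaluation semantics can be sketched as quantification over admissible precisifications: a statement is supertrue if every precisification makes it true, superfalse if none does, and otherwise falls into a truth-value gap. The density thresholds below are invented for illustration; real precisifications of 'forest' involve far more than tree density:

```python
# Sketch of supervaluation over admissible precisifications of "is a forest".
# Each precisification sharpens the vague predicate into a crisp threshold
# on tree density (trees per hectare); the thresholds are illustrative.
def supervaluate(density, thresholds):
    verdicts = [density >= t for t in thresholds]
    if all(verdicts):
        return "supertrue"
    if not any(verdicts):
        return "superfalse"
    return "indeterminate"       # truth-value gap: the vagueness survives

admissible = [100, 200, 300]     # candidate sharpenings of "forest"
print(supervaluate(350, admissible))  # supertrue
print(supervaluate(150, admissible))  # indeterminate: a borderline case
print(supervaluate(10, admissible))   # superfalse
```

Classical tautologies (e.g. "this is a forest or it is not") come out supertrue on every precisification, which is the feature that distinguishes supervaluationism from many-valued approaches.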