Results 1–10 of 13
Foundations for Bayesian networks, 2001
Abstract

Cited by 11 (7 self)
Bayesian networks are normally given one of two types of foundations: they are either treated purely formally as an abstract way of representing probability functions, or they are interpreted, with some causal interpretation given to the graph in a network and some standard interpretation of probability given to the probabilities specified in the network. In this chapter I argue that current foundations are problematic, and put forward new foundations which involve aspects of both the interpreted and the formal approaches. One standard approach is to interpret a Bayesian network objectively: the graph in a Bayesian network represents causality in the world and the specified probabilities are objective, empirical probabilities. Such an interpretation founders when the Bayesian network independence assumption (often called the causal Markov condition) fails to hold. In §2 I catalogue the occasions when the independence assumption fails, and show that such failures are pervasive. Next, in §3, I show that even where the independence assumption does hold objectively, an agent’s causal knowledge is unlikely to satisfy the assumption with respect to her subjective probabilities, and that slight differences between an agent’s subjective Bayesian network and an objective Bayesian network can lead to large differences between probability distributions determined by these networks. To overcome these difficulties I put forward logical Bayesian foundations in §5. I show that if the graph and probability specification in a Bayesian network are thought of as an agent’s background knowledge, then the agent is most rational if she adopts the probability distribution determined by the
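The independence assumption at issue can be made concrete with a toy network. The sketch below is my own illustration (the network and all numbers are hypothetical, not from the chapter): it shows the factorisation that the causal Markov condition licenses, in which the joint distribution is the product of each variable's probability given its parents.

```python
# Illustrative sketch (not from the chapter): under the Bayesian network
# independence assumption, the joint distribution factorises as a product
# of each variable's probability given its parents.
# Network and all numbers below are hypothetical.

# Binary network: Rain -> WetGrass <- Sprinkler
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet = {  # P(WetGrass=True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.05,
}

def joint(rain, sprinkler, wet):
    """P(rain, sprinkler, wet) under the Markov factorisation."""
    p_w = P_wet[(rain, sprinkler)]
    return P_rain[rain] * P_sprinkler[sprinkler] * (p_w if wet else 1 - p_w)

# The factorised joint sums to 1 over all eight assignments.
total = sum(joint(r, s, w) for r in (True, False)
            for s in (True, False) for w in (True, False))
```

When the condition fails, as the chapter argues it often does, no such product decomposition reproduces the true joint distribution.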
Maximum entropy probabilistic logic, 2002
Abstract

Cited by 8 (0 self)
Recent research has shown there are two types of uncertainty that can be expressed in first-order logic—propositional and statistical uncertainty—and that both types can be represented in terms of probability spaces. However, these efforts have fallen short of providing a general account of how to design probability measures for these spaces; as a result, we lack a crucial component of any system that reasons under these types of uncertainty. In this paper, we describe an automatic procedure for defining such measures in terms of a probabilistic knowledge base. In particular, we employ the principle of maximum entropy to select measures that are consistent with our knowledge and that make the fewest assumptions in doing so. This approach yields models of first-order uncertainty that are principled, intuitive, and economical in their representation.
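The maximum entropy selection this abstract appeals to can be illustrated in miniature. The following sketch is my own (the die example and the constraint value are not from the paper): among all distributions over six outcomes with a fixed mean, the entropy maximiser lies in the exponential family p_i ∝ exp(λ·i), and λ can be found by bisection.

```python
import math

# Hedged sketch of the maximum entropy principle: over outcomes 1..6
# (a die), choose the distribution of maximum entropy subject to a mean
# constraint E[X] = m.  The maxent solution is p_i proportional to
# exp(lam * i); we solve for lam by bisection on the induced mean,
# which is strictly increasing in lam.

def maxent_die(mean, lo=-10.0, hi=10.0, iters=200):
    xs = range(1, 7)
    def mean_for(lam):
        w = [math.exp(lam * x) for x in xs]
        z = sum(w)
        return sum(x * wi for x, wi in zip(xs, w)) / z
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean_for(mid) < mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * x) for x in xs]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)  # mean pushed above the unconstrained value 3.5
```

With the mean constrained to 3.5 the solution is uniform, which matches the intuition that maximum entropy makes the fewest assumptions beyond the stated constraints.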
Default Reasoning Using Maximum Entropy and Variable Strength Defaults, 1999
Abstract

Cited by 4 (1 self)
The thesis presents a computational model for reasoning with partial information which uses default rules, or information about what normally happens. The idea is to provide a means of filling the gaps in an incomplete world view with the most plausible assumptions, while allowing for the retraction of conclusions should they subsequently turn out to be incorrect. The model can be used both to reason from a given knowledge base of default rules, and to aid in the construction of such knowledge bases by allowing their designer to compare the consequences of his design with his own default assumptions. The conclusions supported by the proposed model are justified by the use of a probabilistic semantics for default rules in conjunction with the application of a rational means of inference from incomplete knowledge: the principle of maximum entropy (ME). The thesis develops both the theory and algorithms for the ME approach and argues that it should be considered as a general theory of default reasoning. The argument supporting the thesis has two main threads. Firstly, the ME approach is tested on the benchmark examples required of non-monotonic behaviour, and it is found to handle them appropriately. Moreover, these patterns of commonsense reasoning emerge as consequences of the chosen semantics rather than being design features. It is argued that this makes the ME approach more objective, and its conclusions more justifiable, than other default systems. Secondly, the ME approach is compared with two existing systems: the lexicographic approach (LEX) and System Z+. It is shown that the former can be equated with ME under suitable conditions, making it strictly less expressive, while the latter is too crude to perform the subtle resolution of default conflict which the ME...
A representation theorem and applications to measure selection and noninformative priors
Abstract

Cited by 2 (1 self)
We introduce a set of transformations on the set of all probability distributions over a finite state space, and show that these transformations are the only ones that preserve certain elementary probabilistic relationships. This result provides a new perspective on a variety of probabilistic inference problems in which invariance considerations play a role. Two particular applications we consider in this paper are the development of an equivariance-based approach to the problem of measure selection, and a new justification for Haldane’s prior as the distribution that encodes prior ignorance about the parameter of a multinomial distribution.
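The second application, Haldane's prior, can be illustrated for the binomial special case of the multinomial. The sketch below is my own (the counts are made up, and this is only the standard conjugate-update arithmetic, not the paper's justification): the improper Beta(0, 0) prior adds no pseudo-counts, so the posterior mean after k successes in n trials is exactly k/n.

```python
# Hedged illustration (binomial special case; counts are hypothetical):
# under Haldane's prior Beta(0, 0), the posterior after k successes in
# n trials is Beta(k, n - k), so the posterior mean is k / n: the prior
# contributes no pseudo-counts, encoding prior ignorance in this sense.

def posterior_mean(k, n, a=0.0, b=0.0):
    """Posterior mean of theta under a Beta(a, b) prior."""
    return (k + a) / (n + a + b)

haldane = posterior_mean(7, 10)            # Beta(0,0) prior: 7/10
laplace = posterior_mean(7, 10, a=1, b=1)  # Beta(1,1) prior: 8/12
```

The contrast with the uniform (Laplace) prior shows how even a "flat" prior shifts the estimate, which is what a prior encoding ignorance is meant to avoid.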
Merging Intelligent Agency and the Semantic Web
Abstract

Cited by 2 (0 self)
Abstract. The semantic web makes unique demands on agency. Such agents should: be built around an ontology and should take advantage of the relations in it, be based on a grounded approach to uncertainty, be able to deal naturally with the issue of semantic alignment, and deal with interaction in a way that is suited to the coordination of services. A new breed of ‘information-based’ intelligent agents [1] meets these demands. This form of agency is founded on ideas from information theory, and was inspired by the insight that interaction is an information revelation and discovery process. Ontologies are fundamental to these agents’ reasoning, which relies on semantic distance measures. They employ entropy-based inference, a form of Bayesian inference, to manage the uncertainty that they represent using probability distributions. Semantic alignment is managed through a negotiation process during which the agent’s uncertain beliefs are continually revised. The coordination of services is achieved by modelling interaction as time-constrained, resource-constrained processes, a proven application of agent technology. In addition, measures of trust, reputation, and reliability are unified in a single model.
Objective Bayesianism and the Maximum Entropy Principle, 2013
Abstract

Cited by 1 (1 self)
For Maximum Entropy and Bayes Theorem, a special issue of Entropy journal. Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities, they should be calibrated to our evidence of physical probabilities, and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function that, from all those calibrated to evidence, has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.
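The link between worst-case expected loss and entropy maximisation can be seen in a toy case. The example below is my own miniature, not the paper's formal result: over a finite set of states with no evidential constraints, the uniform distribution both minimises worst-case log loss and maximises entropy.

```python
import math

# My own toy illustration of the paper's core idea (not its formal
# result): with no constraints, an adversary picks the true state and
# the agent suffers loss -log(belief in that state).  The uniform
# distribution minimises this worst case, and it is also the entropy
# maximiser.

def worst_case_log_loss(belief):
    # Adversary chooses the state the agent believes in least.
    return max(-math.log(b) for b in belief)

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

uniform = [0.25] * 4
skewed = [0.7, 0.1, 0.1, 0.1]  # arbitrary non-uniform alternative
```

Any deviation from uniformity lowers the minimum belief assigned to some state, which raises the worst-case loss while lowering the entropy, exhibiting the equivalence in this unconstrained case.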
Maximum Entropy Probabilistic Logic
Abstract
Recent research has shown there are two types of uncertainty that can be expressed in first-order logic—propositional and statistical uncertainty—and that both types can be represented in terms of probability spaces.
Information-Based Reputation
Abstract
Abstract. Information-based agents use tools from information theory to evaluate their utterances and to build their world model. When embedded in a social network these agents measure the strength of information flow in this sense. This leads to a model of information-based reputation in which agents share opinions, and observe the way in which their opinions affect the opinions of others. A method is proposed that supports the deliberative process of combining opinions into a group’s reputation. The reliability of agents as opinion givers is measured in terms of the extent to which their opinions differ from that of the group reputation. These reliability measures are used to form an a priori reputation estimate given the individual opinions of a set of independent agents.
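The aggregation the abstract describes, where reliability falls as an agent's opinion diverges from the group view, can be sketched with a simple fixed-point scheme. This weighting rule is my own simplification for illustration, not the paper's model.

```python
# Toy sketch of reliability-weighted opinion pooling, loosely inspired
# by the abstract.  The 1/distance weighting and the iteration are my
# own choices, not the paper's method.  Opinions are scores in [0, 1].

def group_reputation(opinions, rounds=20):
    rep = sum(opinions) / len(opinions)  # start from the plain mean
    for _ in range(rounds):
        # Reliability falls with distance from the current group view
        # (small constant guards against division by zero).
        weights = [1.0 / (1e-6 + abs(o - rep)) for o in opinions]
        rep = sum(w * o for w, o in zip(weights, opinions)) / sum(weights)
    return rep

# An outlier opinion is down-weighted relative to the plain mean.
scores = [0.8, 0.82, 0.78, 0.2]
rep = group_reputation(scores)
```

The iteration pulls the group reputation toward the cluster of mutually consistent opinions and away from the outlier, which is the qualitative behaviour the abstract's reliability measure is meant to produce.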
Negotiating Intelligently
Abstract
Abstract. The predominant approaches to automating competitive interaction appeal to the central notion of a utility function that represents an agent’s preferences. Agents are then endowed with machinery that enables them to perform actions that are intended to optimise their expected utility. Despite the extent of this work, the deployment of automatic negotiating agents in real-world scenarios is rare. We propose that utility functions, or preference orderings, are often not known with certainty; further, the uncertainty that underpins them is typically in a state of flux. We propose that the key to building intelligent negotiating agents is to take an agent’s historic observations as primitive, to model that agent’s changing uncertainty in that information, and to use that model as the foundation for the agent’s reasoning. We describe an agent architecture, with an attendant theory, that is based on that model. In this approach, the utility of contracts, and the trust and reliability of a trading partner, are intermediate concepts that an agent may estimate from its information model. This enables us to describe intelligent agents that are not necessarily utility optimisers, that value information as a commodity, and that build relationships with other agents through the trusted exchange of information as well as contracts.
A Map of Trust Between Trading Partners
Abstract
Abstract. A pair of ‘trust maps’ gives a fine-grained view of an agent’s accumulated, time-discounted belief that the enactment of commitments by another agent will be in line with what was promised, and that the observed agent will act in a way that respects the confidentiality of previously passed information. The structure of these maps is defined in terms of a categorisation of utterances and the ontology. Various summary measures are then applied to these maps to give a succinct view of trust.
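The accumulated, time-discounted belief at the heart of a trust map can be sketched with a simple exponential-smoothing update. The discount factor, update rule, and observation sequence below are my own illustration, not the paper's model.

```python
# Hedged sketch of a time-discounted belief update (discount factor and
# update rule are illustrative choices, not the paper's model): each new
# observation of whether an enactment matched its promise is blended
# with the discounted running estimate, so older evidence fades.

def update(belief, observation, discount=0.9):
    """Blend a new 0/1 observation into a time-discounted belief."""
    return discount * belief + (1 - discount) * observation

belief = 0.5  # uninformed starting point
for obs in [1, 1, 1, 0, 1]:  # hypothetical enactment outcomes
    belief = update(belief, obs)
```

Recent observations carry the most weight, so a recent broken commitment depresses the belief more than an old one, which is the behaviour a time-discounted trust measure is meant to capture.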