Results 1–10 of 47
A Counterexample to Theorems of Cox and Fine
Journal of Artificial Intelligence Research, 1999
Cited by 34 (2 self)
Abstract:
Cox's well-known theorem justifying the use of probability is shown not to hold in finite domains. The counterexample also suggests that Cox's assumptions are insufficient to prove the result even in infinite domains. The same counterexample is used to disprove a result of Fine on comparative conditional probability.
Representation Dependence in Probabilistic Inference
Journal of Artificial Intelligence Research, 2004
Cited by 20 (1 self)
Abstract:
Nondeductive reasoning systems are often representation dependent: representing the same situation in two different ways may cause such a system to return two different answers. Some have viewed ...
Probabilistic logic and probabilistic networks
2008
Cited by 19 (15 self)
Abstract:
While in principle probabilistic logics might be applied to solve a range of problems, in practice they are rarely applied at present. This is perhaps because they seem disparate, complicated, and computationally intractable. However, we shall argue in this programmatic paper that several approaches to probabilistic logic fit into a simple unifying framework: logically complex evidence can be used to associate probability intervals or probabilities with sentences. Specifically, we show in Part I that there is a natural way to present a question posed in probabilistic logic, and that various inferential procedures provide semantics for that question: the standard probabilistic semantics (which takes probability functions as models), probabilistic argumentation (which considers the probability of a hypothesis being a logical consequence of the available evidence), evidential probability (which handles reference classes and frequency data), classical statistical inference ...
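The probability intervals this abstract mentions can be made concrete with a toy computation (my own sketch, not taken from the paper): given constraints on a probability function over the truth assignments to two atoms, the interval attached to a query sentence is its minimum and maximum probability over all functions satisfying the constraints.

```python
# Toy constraint set (invented for illustration): P(A) = 0.6 and
# P(B|A) >= 0.9.  The four worlds over {A, B} carry masses
# p1 = P(A&B), p2 = P(A&~B), p3 = P(~A&B), p4 = P(~A&~B);
# p2 and p4 are fixed by p1, p3 and the constraint P(A) = 0.6.
def bounds_on_B(steps=100):
    lo, hi = 1.0, 0.0
    for i in range(steps + 1):
        p1 = 0.6 * i / steps                  # p1 ranges over [0, 0.6]
        if p1 < 0.9 * 0.6 - 1e-12:            # enforce P(B|A) = p1/0.6 >= 0.9
            continue
        for j in range(steps + 1):
            p3 = 0.4 * j / steps              # p3 ranges over [0, 0.4]
            pB = p1 + p3                      # P(B) = P(A&B) + P(~A&B)
            lo, hi = min(lo, pB), max(hi, pB)
    return lo, hi

lo, hi = bounds_on_B()
print(lo, hi)   # the entailed interval for P(B), roughly [0.54, 1.0]
```

A grid search is used only to keep the sketch self-contained; exact bounds of this kind are linear programs.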
Objective Bayesian nets
We Will Show Them! Essays in Honour of Dov Gabbay, 2005
Cited by 13 (11 self)
Abstract:
I present a formalism that combines two methodologies: objective Bayesianism and Bayesian nets. According to objective Bayesianism, an agent’s degrees of belief (i) ought to satisfy the axioms of probability, (ii) ought to satisfy constraints imposed by background knowledge, and (iii) should otherwise be as noncommittal as possible (i.e. have maximum entropy). Bayesian nets offer an efficient way of representing and updating probability functions. An objective Bayesian net is a Bayesian net representation of the maximum entropy probability function. I show how objective Bayesian nets can be constructed, updated and combined, and how they can deal with cases in which the agent’s background knowledge includes knowledge of qualitative influence relationships, e.g. causal influences. I then sketch a number of applications of the resulting formalism, showing how it can shed light on probability logic, causal modelling, logical reasoning, semantic reasoning, argumentation ...
Cox's Theorem Revisited
Journal of Artificial Intelligence Research, 1999
Cited by 12 (0 self)
Abstract:
The assumptions needed to prove Cox's Theorem are discussed and examined. Various sets of assumptions under which a Cox-style theorem can be proved are provided, although all are rather strong and, arguably, not natural. I recently wrote a paper (Halpern, 1999) casting doubt on how compelling a justification for probability is provided by Cox's celebrated theorem (Cox, 1946). I have received (what seems to me, at least) a surprising amount of response to that article. Here I attempt to clarify the degree to which I think Cox's theorem can be salvaged and respond to a glaring inaccuracy on my part pointed out by Snow (1998). (Fortunately, it is an inaccuracy that has no effect on either the correctness or the interpretation of the results of my paper.) I have tried to write this note with enough detail so that it can be read independently of (Halpern, 1999), but I encourage the reader to consult (Halpern, 1999), as well as the two major sources it is based on (Cox, 1946; Paris, 1994), ...
Foundations for Bayesian networks
2001
Cited by 11 (7 self)
Abstract:
Bayesian networks are normally given one of two types of foundations: they are either treated purely formally as an abstract way of representing probability functions, or they are interpreted, with some causal interpretation given to the graph in a network and some standard interpretation of probability given to the probabilities specified in the network. In this chapter I argue that current foundations are problematic, and put forward new foundations which involve aspects of both the interpreted and the formal approaches. One standard approach is to interpret a Bayesian network objectively: the graph in a Bayesian network represents causality in the world and the specified probabilities are objective, empirical probabilities. Such an interpretation founders when the Bayesian network independence assumption (often called the causal Markov condition) fails to hold. In §2 I catalogue the occasions when the independence assumption fails, and show that such failures are pervasive. Next, in §3, I show that even where the independence assumption does hold objectively, an agent’s causal knowledge is unlikely to satisfy the assumption with respect to her subjective probabilities, and that slight differences between an agent’s subjective Bayesian network and an objective Bayesian network can lead to large differences between probability distributions determined by these networks. To overcome these difficulties I put forward logical Bayesian foundations in §5. I show that if the graph and probability specification in a Bayesian network are thought of as an agent’s background knowledge, then the agent is most rational if she adopts the probability distribution determined by the ...
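The independence assumption (causal Markov condition) discussed here can be checked concretely for a toy net (the CPT numbers are invented for illustration): in the graph A → B, A → C, the factorized joint renders B and C independent given A.

```python
# Invented CPTs for the toy net A -> B, A -> C (illustration only).
pA = {True: 0.3, False: 0.7}     # P(A = a)
pB = {True: 0.9, False: 0.2}     # P(B = True | A = a)
pC = {True: 0.6, False: 0.1}     # P(C = True | A = a)

def joint(a, b, c):
    # The net's factorization: P(a, b, c) = P(a) * P(b|a) * P(c|a)
    return (pA[a]
            * (pB[a] if b else 1 - pB[a])
            * (pC[a] if c else 1 - pC[a]))

# Markov condition for this graph: B and C are independent given A.
for a in (True, False):
    p_b = sum(joint(a, True, c) for c in (True, False)) / pA[a]
    p_c = sum(joint(a, b, True) for b in (True, False)) / pA[a]
    assert abs(joint(a, True, True) / pA[a] - p_b * p_c) < 1e-12
print("conditional independence verified")
```

The condition holds here by construction, because the joint is defined via the factorization; the chapter's point is that empirical joints need not factorize this way.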
Inductive influence
 British Journal for the Philosophy of Science
Cited by 9 (7 self)
Abstract:
Objective Bayesianism has been criticised for not allowing learning from experience: it is claimed that an agent must give degree of belief 1/2 to the next raven being black, however many other black ravens have been observed. I argue that this objection can be overcome by appealing to objective Bayesian nets, a formalism for representing objective Bayesian degrees of belief. Under this account, previous observations exert an inductive influence on the next observation. I show how this approach can be used to capture the Johnson-Carnap continuum of inductive methods, as well as the Nix-Paris continuum, and show how inductive influence can ...
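The Johnson-Carnap continuum mentioned at the end has a simple closed form, sketched here for two outcome categories (my gloss, not the paper's formalism): the degree of belief that the next raven is black, after observing n black ravens out of N, is (n + λ/2)/(N + λ), which reduces to the fixed value 1/2 only in the limit λ → ∞.

```python
# Carnap's lambda-continuum for kappa = 2 outcome categories
# (black / not black); lam parameterises the continuum and the
# inputs below are illustrative.
def next_black(n, N, lam):
    return (n + lam / 2) / (N + lam)

# lam = 2 recovers Laplace's rule of succession, (n + 1) / (N + 2):
print(next_black(9, 10, 2))   # past observations raise the degree of belief
print(next_black(0, 0, 2))    # with no evidence, the value is 1/2
```

For any finite λ the value responds to the observed frequency, which is exactly the learning-from-experience behaviour the abstract says objective Bayesian nets can capture.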
On the computational complexity of the numerically definite syllogistic and related logics
Bulletin of Symbolic Logic, 2008
Cited by 8 (6 self)
Abstract:
In this paper, we determine the complexity of the satisfiability problem for various logics obtained by adding numerical quantifiers, and other constructions, to the traditional syllogistic. In addition, we demonstrate the incompleteness of some recently proposed proof systems for these logics.
Philosophies of probability: objective Bayesianism and its challenges
Handbook of the Philosophy of Mathematics (Handbook of the Philosophy of Science), Elsevier, Amsterdam, 2004
Cited by 8 (5 self)
Abstract:
This chapter presents an overview of the major interpretations of probability, followed by an outline of the objective Bayesian interpretation and a discussion of the key challenges it faces.
A note on binary inductive logic
Journal of Philosophical Logic, 2007
Cited by 6 (1 self)
Abstract:
We consider the problem of induction over languages containing binary relations and outline a way of interpreting and constructing a class of probability functions on the sentences of such a language. Some principles of inductive reasoning satisfied by these probability functions are discussed, leading in turn to a representation theorem for a more general class of probability functions satisfying these principles.