Results 1–10 of 50
A Treatise on Probability ([1921])
, 2004
Abstract

Cited by 364 (0 self)
The economic paradigms of Ludwig von Mises on the one hand, and of John Maynard Keynes on the other, have rightly been recognised as contradictory at the theoretical level and as antagonistic with respect to their practical and public policy implications. Characteristically, they have also been claimed by opposing sectors of the political spectrum. Even so, these authors' respective views on the meaning and interpretation of probability show a closer conceptual affinity than has been acknowledged in the literature. In particular, it has been argued that in some important respects Ludwig von Mises's interpretation of the concept of probability shows a closer affinity with the interpretation of probability developed by his opponent John Maynard Keynes than with the views of probability endorsed by his brother Richard von Mises. Nevertheless, there are also major differences between Ludwig von Mises's views on probability and those of John Maynard Keynes. One stands out above all: whereas John Maynard Keynes advocates a monist view of probability, Ludwig von Mises defends a dualist view, according to which the concept of probability receives two different meanings, each of which is valid in a particular domain or context. It is concluded that John Maynard Keynes and Ludwig von Mises present clearly distinct views on the meaning and interpretation of probability.
Betting on Theories
, 1993
Abstract

Cited by 70 (4 self)
Predictions about the future and unrestricted universal generalizations are never logically implied by our observational evidence, which is limited to particular facts in the present and past. Nevertheless, propositions of these and other kinds are often said to be confirmed by observational evidence. A natural place to begin the study of confirmation theory is to consider what it means to say that some evidence E confirms a hypothesis H. Incremental and absolute confirmation. Let us say that E raises the probability of H if the probability of H given E is higher than the unconditional probability of H. According to many confirmation theorists, “E confirms H” means that E raises the probability of H. This conception of confirmation will be called incremental confirmation. Let us say that H is probable given E if the probability of H given E is above some threshold. (This threshold remains to be specified but is assumed to be at least one half.) According to some confirmation theorists, “E confirms H” means that H is probable given E. This conception of confirmation will be called absolute confirmation. Confirmation theorists have sometimes failed to distinguish these two concepts. For example, Carl Hempel in his classic “Studies in the Logic of Confirmation” endorsed the following principles: (1) a generalization of the form “All F are G” is confirmed by the evidence that there is an individual that is both F and G; (2) a generalization of that form is also confirmed by the evidence that there is an individual that is neither F nor G; (3) the hypotheses confirmed by a piece of evidence are consistent with one another; (4) if E confirms H then E confirms every logical consequence of H. Principles (1) and (2) are not true of absolute confirmation: observation of a single thing that is F and G cannot in general make it probable that all F are G; likewise for an individual that is neither F nor G.
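The distinction between incremental and absolute confirmation drawn in this abstract can be made concrete with Bayes' theorem. The following sketch uses illustrative numbers (not taken from the book) to show that evidence can raise the probability of a hypothesis without making it probable:

```python
# Toy illustration of incremental vs. absolute confirmation.
# All numbers are illustrative assumptions, not from the source.

p_h = 0.10           # prior probability P(H)
p_e_given_h = 0.90   # likelihood P(E | H)
p_e_given_not_h = 0.20

# Law of total probability, then Bayes' theorem.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e

incremental = p_h_given_e > p_h  # E raises the probability of H
absolute = p_h_given_e > 0.5     # H is probable given E (threshold 1/2)

print(round(p_h_given_e, 3))  # 0.333
print(incremental, absolute)  # True False
```

Here E incrementally confirms H (the posterior 1/3 exceeds the prior 0.1), yet E does not absolutely confirm H, since 1/3 falls below the one-half threshold.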
Severe Testing as a Basic Concept in a Neyman–Pearson Philosophy of Induction
 BRITISH JOURNAL FOR THE PHILOSOPHY OF SCIENCE
, 2006
Abstract

Cited by 36 (14 self)
Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and longstanding problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test’s (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities is to ensure that only statistical hypotheses that have passed severe or probative tests are inferred from the data. The severity criterion supplies a meta-statistical principle for evaluating proposed statistical inferences, avoiding classic fallacies from tests that are overly sensitive, as well as those not sensitive enough to particular errors and discrepancies.
Inductive influence
 British Journal for the Philosophy of Science
Abstract

Cited by 9 (7 self)
Objective Bayesianism has been criticised for not allowing learning from experience: it is claimed that an agent must give degree of belief 1/2 to the next raven being black, however many other black ravens have been observed. I argue that this objection can be overcome by appealing to objective Bayesian nets, a formalism for representing objective Bayesian degrees of belief. Under this account, previous observations exert an inductive influence on the next observation. I show how this approach can be used to capture the Johnson–Carnap continuum of inductive methods, as well as the Nix–Paris continuum, and show how inductive influence can
Objective Bayesianism, Bayesian Conditionalisation
, 2008
Abstract

Cited by 9 (7 self)
Objective Bayesianism has been criticised on the grounds that objective Bayesian updating, which on a finite outcome space appeals to the maximum entropy principle, differs from Bayesian conditionalisation. The main task of this paper is to show that this objection backfires: the difference between the two forms of updating reflects negatively on Bayesian conditionalisation rather than on objective Bayesian updating. The paper also reviews some existing criticisms and justifications of conditionalisation, arguing in particular that the diachronic Dutch book justification fails because diachronic Dutch book arguments are subject to a reductio: in certain circumstances one can Dutch book an agent however she changes her degrees of belief. One may also criticise objective Bayesianism on the grounds that its norms are not compulsory but voluntary, the result of a stance. It is argued that this second objection also misses the mark, since objective
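The difference between the two updating rules discussed in this abstract can be seen on a small finite outcome space. The sketch below is a deliberately simplified illustration (the prior and evidence are invented, and the maximum-entropy step here imposes only the new evidence constraint, which is cruder than the paper's objective Bayesian updating); it merely shows that the two rules can disagree:

```python
# Contrast Bayesian conditionalisation with a naive maximum-entropy
# update on the finite outcome space {a, b, c}. Numbers are illustrative.
from math import log

prior = {"a": 0.5, "b": 0.3, "c": 0.2}
evidence = {"a", "b"}  # we learn that P({a, b}) = 1

# Bayesian conditionalisation: renormalise the prior over the evidence set.
z = sum(prior[w] for w in evidence)
conditionalised = {w: prior[w] / z for w in evidence}

def entropy(ps):
    """Shannon entropy of a probability vector (in nats)."""
    return -sum(p * log(p) for p in ps if p > 0)

# Maximum entropy subject ONLY to the constraint P({a, b}) = 1:
# grid-search distributions (q, 1 - q) on {a, b} for the entropy maximiser.
best_q = max((i / 1000 for i in range(1, 1000)),
             key=lambda q: entropy([q, 1 - q]))
maxent = {"a": best_q, "b": 1 - best_q}

print(conditionalised)  # {'a': 0.625, 'b': 0.375}
print(maxent)           # {'a': 0.5, 'b': 0.5}
```

Conditionalisation preserves the prior's ratio between a and b, while the entropy maximiser ignores it and lands on the uniform distribution over the evidence set, so the two updates genuinely come apart.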
Philosophies of probability: objective Bayesianism and its challenges
 Handbook of the Philosophy of Mathematics (Handbook of the Philosophy of Science), Elsevier, Amsterdam
, 2004
Abstract

Cited by 8 (5 self)
This chapter presents an overview of the major interpretations of probability followed by an outline of the objective Bayesian interpretation and a discussion of the key challenges it faces.
Causal pluralism versus epistemic causality
, 2007
Abstract

Cited by 7 (5 self)
It is tempting to analyse causality in terms of just one of the indicators of causal relationships, e.g., mechanisms, probabilistic dependencies or independencies, counterfactual conditionals or agency considerations. While such an analysis will surely shed light on some aspect of our concept of cause, it will fail to capture the whole, rather multifarious, notion. So one might instead plump for pluralism: a different analysis for a different occasion. But we do not seem to have lots of different kinds of cause—just one eclectic notion. The resolution of this conundrum, I think, requires us to accept that our causal beliefs are generated by a wide variety of indicators, but to deny that this variety of indicators yields a variety of concepts of cause. This focus on the relation between evidence and causal beliefs leads to what I call epistemic causality. Under this view, certain causal beliefs are appropriate or rational on the basis of observed evidence; our notion of cause can be understood purely in terms of these rational
Basic Elements and Problems of Probability Theory
, 1999
Abstract

Cited by 7 (0 self)
After a brief review of ontic and epistemic descriptions, and of subjective, logical and statistical interpretations of probability, we summarize the traditional axiomatization of the calculus of probability in terms of Boolean algebras and its set-theoretical realization in terms of Kolmogorov probability spaces. Since the axioms of mathematical probability theory say nothing about the conceptual meaning of “randomness”, one considers probability as a property of the generating conditions of a process, so that one can relate randomness with predictability (or retrodictability). In the measure-theoretical codification of stochastic processes, genuine chance processes can be defined rigorously as so-called regular processes which do not allow a long-term prediction. We stress that stochastic processes are equivalence classes of individual point functions, so that they do not refer to individual processes but only to an ensemble of statistically equivalent individual processes. Less popular but conceptually more important than statistical descriptions are individual descriptions which refer to individual chaotic processes. First, we review the individual description based on the generalized harmonic analysis of Norbert Wiener. It allows the definition of individual purely chaotic processes which can be interpreted as trajectories of regular statistical stochastic processes. Another individual description refers to algorithmic procedures which connect the intrinsic randomness of a finite sequence with the complexity of the shortest program necessary to produce the sequence. Finally, we ask why there can be laws of chance. We argue that random events fulfill the laws of chance if and only if they can be reduced to (possibly hidden) deterministic events. This mathematical result may elucidate the fact that not all non-predictable events can be grasped by the methods of mathematical probability theory.
Objective Bayesianism with predicate languages. Synthese
, 2008
Abstract

Cited by 5 (5 self)
Objective Bayesian probability is often defined over rather simple domains, e.g., finite event spaces or propositional languages. This paper investigates the extension of objective Bayesianism to first-order logical languages. It is argued that the objective Bayesian should choose a probability function, from all those that satisfy constraints imposed by background knowledge, that is closest to a particular frequency-induced probability function which generalises the λ = 0 function of Carnap’s continuum of inductive methods.
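The λ = 0 function mentioned in this abstract belongs to Carnap's λ-continuum of inductive methods. A minimal sketch of the standard formula (the observation counts below are illustrative, not from the paper):

```python
# Carnap's lambda-continuum of inductive methods.
# The formula is standard; the counts used below are illustrative.

def carnap(n_i, n, k, lam):
    """Probability that the next observation is of type i, given that
    n_i of n past observations were of type i, with k types in total."""
    return (n_i + lam / k) / (n + lam)

# lambda = 0 gives the "straight rule": just the observed frequency.
print(carnap(7, 10, 2, 0.0))  # 0.7

# lambda = k gives Laplace's rule of succession, (n_i + 1) / (n + k).
print(carnap(7, 10, 2, 2.0))  # 8/12 ≈ 0.667
```

At λ = 0 the prediction tracks the observed frequency exactly, which is the limiting case the paper's frequency-induced probability function generalises; larger λ values weight the predictions increasingly toward the uniform distribution 1/k.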