Results 1–10 of 14
Severe Testing as a Basic Concept in a Neyman–Pearson Philosophy of Induction
British Journal for the Philosophy of Science, 2006
Abstract

Cited by 35 (14 self)
Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and longstanding problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test’s (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities is to ensure that only statistical hypotheses that have passed severe or probative tests are inferred from the data. The severity criterion supplies a metastatistical principle for evaluating proposed statistical inferences, avoiding classic fallacies from tests that are overly sensitive, as well as those not sensitive enough to particular errors and discrepancies.
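The pre-data error probabilities the abstract refers to, and a post-data severity assessment built on them, can be sketched for the simplest case of a one-sided z-test. This is a minimal illustration with made-up numbers (sigma, n, and the observed means are arbitrary), not the paper's own example:

```python
import math

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power(mu, sigma=1.0, n=25):
    """Pre-data: P(reject H0 | true mean mu) for the test of
    H0: mu = 0 vs H1: mu > 0 that rejects when the sample mean
    exceeds z_alpha * sigma / sqrt(n)."""
    z_alpha = 1.6449  # upper 5% point of N(0, 1), i.e. alpha = 0.05
    se = sigma / math.sqrt(n)
    cutoff = z_alpha * se
    # Sample mean ~ N(mu, sigma^2 / n)
    return 1.0 - phi((cutoff - mu) / se)

def severity_mu_leq(bound, xbar, sigma=1.0, n=25):
    """Post-data: severity for inferring 'mu <= bound' given an
    observed mean xbar -- the probability of a result more extreme
    than xbar, were mu actually equal to bound."""
    se = sigma / math.sqrt(n)
    return 1.0 - phi((xbar - bound) / se)

type_I_error = power(0.0)        # the size of the test, about 0.05
power_at_0_3 = power(0.3)        # sensitivity to the discrepancy mu = 0.3
```

The point of the distinction: `power` is fixed before the data arrive, while `severity_mu_leq` evaluates a specific inference against the actually observed `xbar`.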
Agent-Oriented Integration of Distributed Mathematical Services
Journal of Universal Computer Science, 1999
Abstract

Cited by 19 (10 self)
Real-world applications of automated theorem proving require modern software environments that enable modularisation, networked interoperability, robustness, and scalability. These requirements are met by the Agent-Oriented Programming paradigm of Distributed Artificial Intelligence. We argue that a reasonable framework for automated theorem proving in the large regards typical mathematical services as autonomous agents that provide internal functionality to the outside and that, in turn, are able to access a variety of existing external services. This article describes...
Information, relevance, and social decision-making: Some principles and results of decision-theoretic semantics
Logic, Language, and Computation, 1999
Abstract

Cited by 16 (0 self)
I propose to treat natural language semantics as a branch of pragmatics, identified in the way of C.S. Peirce, F.P. Ramsey, and R. Carnap as decision theory. The notion of relevance plays a key role. It is explicated traditionally, distinguished from a recent homophone, and applied in its natural framework of issue-based communication. Empirical emphasis is on implicature and presupposition. Several theorems are stated and made use of. Items analyzed include ‘or’, ‘not’, ‘but’, ‘even’, and ‘also’. I conclude on parts of mind. This paper submits an approach to meaning, with a focus on broadly non-truth-conditional aspects of natural language. Semantics is treated as a branch of pragmatics, identified as decision theory in the way of C.S. Peirce, F.P. Ramsey, and of Rudolf Carnap in his later work. A key theoretical notion, distinguishable from, but intelligibly related to, information is the positive or negative relevance of a proposition or sentence to another. It is explicated in the probabilistic way familiar from Carnap and traditional in the philosophies of science and rational action. This makes it a representation of local epistemic context-change potential that is directional in a precisely specifiable sense and naturally related to utterers’ instrumental intentions. Relevance so defined is proposed as an explicans for Oswald Ducrot’s insightful ‘valeur argumentative’. In view of possible confusion among some students of language, it is contrasted with a more recent and idiosyncratic pretender to the appellation, due to Dan Sperber and Deirdre Wilson. The latter proposal turns out, at best, to paraphrase H.P. Grice’s non-directional concepts of ‘informativeness’ and ‘perspicuity’. (More informative designations are suggested for it, and for the eponymous linguistic doctrine emanating from parts of CNRS Paris and of UC London.)
Solving Time-Dependent Problems: A Decision-Theoretic Approach to Planning in Dynamic Environments
1991
Abstract

Cited by 15 (1 self)
Controlling a robot involves making decisions that modify its behavior. Making good decisions may require time-consuming computation. Changes in the environment over time affect when this computation can be done (e.g., after obtaining the necessary information), and when a result is useful (e.g., before some event occurs). This sensitivity to when computation is performed and when decisions are made is what makes these problems "time-dependent." A controller with more than one decision to make must trade off computation time, based on the expected effect on the system's behavior. We call the resulting metalevel scheduling problem a "deliberation-scheduling" problem. We have...
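The trade-off the abstract describes can be caricatured in a few lines: deliberating longer yields a better decision but a staler one, and the metalevel problem is to pick the deliberation time with the highest expected net value. The performance and decay profiles below are hypothetical, not the paper's own model:

```python
def quality(t):
    """Value of the decision after t time units of computation
    (diminishing returns; a hypothetical performance profile)."""
    return 1.0 - 0.9 ** t

def usefulness(t):
    """Probability the result is still applicable after t time units
    (hypothetical environmental decay, e.g. an approaching event)."""
    return 0.85 ** t

def best_deliberation_time(horizon=20):
    """Deliberation scheduling in miniature: maximize the expected
    net value quality(t) * usefulness(t) over candidate times."""
    return max(range(horizon + 1), key=lambda t: quality(t) * usefulness(t))
```

With these particular curves the optimum is interior: deliberating forever is as bad as not deliberating at all, which is the essence of the time-dependence the paper names.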
Significance Tests, Belief Calculi, and Burden of Proof in Legal and Scientific Discourse
Laptec’03, Frontiers in Artificial Intelligence and its Applications, 2003
Abstract

Cited by 7 (7 self)
We review the definition of the Full Bayesian Significance Test (FBST), and summarize its main statistical and epistemological characteristics.
PAGODA: A Model for Autonomous Learning in Probabilistic Domains
1992
Abstract

Cited by 6 (2 self)
as a testbed for designing intelligent agents. The system consists of an overall agent architecture and five components within the architecture. The five components are: 1. Goal-Directed Learning (GDL), a decision-theoretic method for selecting learning goals. 2. Probabilistic Bias Evaluation (PBE), a technique for using probabilistic background knowledge to select learning biases for the learning goals. 3. Uniquely Predictive Theories (UPTs) and Probability Computation using Independence (PCI), a probabilistic representation and Bayesian inference method for the agent's theories. 4. A probabilistic learning component, consisting of a heuristic search algorithm and a Bayesian method for evaluating proposed theories. 5. A decision-theoretic probabilistic planner, which searches through the probability space defined by the agent's current theory to select the best action. PAGODA is given as input an initial planning goal (its ove...
Cognitive Constructivism, Eigen-Solutions, and Sharp Statistical Hypotheses
Third Conference on the Foundations of Information Science (FIS 2005), 2005
Abstract

Cited by 4 (4 self)
In this paper, epistemological, ontological, and sociological questions concerning the statistical significance of sharp hypotheses in scientific research are investigated within the framework provided by Cognitive Constructivism and the FBST (Full Bayesian Significance Test). The constructivist framework is contrasted with Decision Theory and Falsificationism, the traditional epistemological settings for orthodox Bayesian and frequentist statistics.
Probability, Confirmation, and the Conjunction Fallacy
2007
Abstract

Cited by 3 (2 self)
The conjunction fallacy has been a key topic in debates on the rationality of human reasoning and its limitations. Despite extensive inquiry, however, the attempt to provide a satisfactory account of the phenomenon has proven challenging. Here, we elaborate the suggestion (first discussed by Sides et al., 2001) that in standard conjunction problems the fallacious probability judgments experimentally observed are typically guided by sound assessments of confirmation relations, meant in terms of contemporary Bayesian confirmation theory. Our main formal result is a confirmation-theoretic account of the conjunction fallacy which is proven robust (i.e., not depending on various alternative ways of measuring degrees of confirmation). The proposed analysis is shown to be distinct from contentions that the conjunction effect is in fact not a fallacy, and is compared with major competing explanations of the phenomenon, including earlier references to a confirmation-theoretic account.
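A toy numerical model (my own numbers, not the authors') makes the confirmation-theoretic point concrete: evidence can confirm a conjunction more strongly than one of its conjuncts, even though the conjunction is never the more probable hypothesis:

```python
# Hypothetical Linda-style setup: B = "bank teller", F = "feminist",
# E = the personality sketch. B is probable but unaffected by E; F is
# improbable but strongly supported by E; F and B are taken to be
# independent, both with and without E. All numbers are invented.
P_B, P_B_given_E = 0.9, 0.9
P_F, P_F_given_E = 0.1, 0.6

P_FB = P_F * P_B                          # joint prior:     0.09
P_FB_given_E = P_F_given_E * P_B_given_E  # joint posterior: 0.54

def d(p_given_e, p):
    """Difference measure of confirmation: d(H, E) = P(H|E) - P(H)."""
    return p_given_e - p

# Probability still ranks the conjunction below the conjunct...
assert P_FB_given_E < P_B_given_E
# ...but confirmation ranks them the other way round: E confirms
# F-and-B (by 0.45) far more than it confirms B alone (by 0.0).
assert d(P_FB_given_E, P_FB) > d(P_B_given_E, P_B)
```

If subjects track something like `d` rather than posterior probability, ranking the conjunction above the conjunct is a sound confirmation judgment misreported as a probability judgment, which is the paper's suggestion.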
unknown title
Abstract
In probability theory, the random variables Y1, ..., YN are said to be exchangeable (or permutable or symmetric) if their joint distribution F(y1, ..., yN) is symmetric; that is, if F is invariant under permutation of its arguments, so that F(z1, ..., zN) = F(y1, ..., yN) whenever z1, ..., zN is a permutation of y1, ..., yN. There is a related epidemiologic usage which is described in the article on confounding. In many ways, sequences of exchangeable random variables play a role in subjective Bayesian theory analogous to that played by independent identically distributed (iid) sequences in classical frequentist theory. In particular, the assumption that a sequence of random variables is exchangeable allows the development of inductive statistical procedures for inference from observed to unobserved members of the sequence [1–3, 5, 6, 9]. Exchangeable random variables are identically distributed, and iid variables are exchangeable. Now suppose that Y1, ..., YN are iid given an unknown parameter θ that indexes their joint distribution (see Identifiability). Such variables will not be unconditionally independent when θ is a random variable, but will be exchangeable. Consider, for example, the case in which Y1, ..., YN have a joint density. The unconditional density of Y1, ..., YN will be f(y1, ..., yN) = ∫ f(y1, ..., yN | θ) dF(θ).
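The construction in the last sentence can be checked by simulation: drawing θ from a prior and then two Bernoulli(θ) variables given θ yields a pair that is exchangeable but not independent. The Beta(2, 2) prior below is an arbitrary illustrative choice:

```python
import random

random.seed(0)
N = 200_000
counts = {}
for _ in range(N):
    # theta ~ Beta(2, 2), then Y1, Y2 iid Bernoulli(theta) given theta
    theta = random.betavariate(2, 2)
    y = (int(random.random() < theta), int(random.random() < theta))
    counts[y] = counts.get(y, 0) + 1

p = {outcome: c / N for outcome, c in counts.items()}
# Exchangeability: P(Y1=1, Y2=0) == P(Y1=0, Y2=1) up to Monte Carlo noise.
# Dependence: P(Y1=1, Y2=1) = E[theta^2] = 0.3, which exceeds
# P(Y1=1) * P(Y2=1) = 0.25, so the pair is not independent.
```

Analytically, P(Y1=1, Y2=1) = ∫ θ² dF(θ) = Var(θ) + E[θ]² = 0.05 + 0.25 = 0.3 for Beta(2, 2), matching the simulation.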
Imprecise Probabilities
2000
Abstract
frequency of that colour. There are imprecise probability models for the learning process which have these properties, which treat all colours symmetrically, and which are coherent. Imprecise probability models are needed in many applications of probabilistic and statistical reasoning. They have been used in the following kinds of problems: when there is little information on which to evaluate a probability, as in Walley [35, 37, 38]; to model nonspecific information, e.g., knowing the proportions of black, white and coloured balls in an urn gives only upper and lower bounds for the chance of drawing a red ball (see Dempster [6], Klir and Folger [23], and Shafer [30]); to model the uncertainty produced by vague statements such as "it will probably rain" or "there is a good chance that it will be mainly fine" (Walley [36], Zadeh [46]); in robust Bayesian inference, to model uncertainty about a prior distribution (see Ber...
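The urn example admits a one-line formalization: if only the counts of black, white, and generically "coloured" balls are known, the chance of drawing a red ball is bounded but not determined. The counts below are hypothetical:

```python
def red_probability_bounds(black, white, coloured):
    """Lower and upper probability of drawing red: none or all of the
    'coloured' balls may be red, so P(red) lies in [0, coloured/total]."""
    total = black + white + coloured
    return 0.0, coloured / total

# Hypothetical urn: 3 black, 2 white, 5 coloured balls.
lower, upper = red_probability_bounds(black=3, white=2, coloured=5)
# The available information supports only the interval [0.0, 0.5],
# not a single precise probability -- the defining move of imprecise
# probability models.
```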