Results 1 – 10 of 13
Prior Probabilities
IEEE Transactions on Systems Science and Cybernetics, 1968
"... e case of location and scale parameters, rate constants, and in Bernoulli trials with unknown probability of success. In realistic problems, both the transformation group analysis and the principle of maximum entropy are needed to determine the prior. The distributions thus found are uniquely determ ..."
Abstract

Cited by 166 (3 self)
…the case of location and scale parameters, rate constants, and in Bernoulli trials with unknown probability of success. In realistic problems, both the transformation group analysis and the principle of maximum entropy are needed to determine the prior. The distributions thus found are uniquely determined by the prior information, independently of the choice of parameters. In a certain class of problems, therefore, the prior distributions may now be claimed to be fully as "objective" as the sampling distributions.
I. Background of the problem. Since the time of Laplace, applications of probability theory have been hampered by difficulties in the treatment of prior information. In realistic problems of decision or inference, we often have prior information which is highly relevant to the question being asked; to fail to take it into account is to commit the most obvious inconsistency of reasoning and may lead to absurd or dangerously misleading results. As an extreme examp…
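As a compact illustration of the two techniques the abstract names (standard derivations consistent with the paper, not quoted from it): invariance under the transformation group that leaves the problem unchanged fixes the location and scale priors, and maximum entropy fixes the distribution under moment constraints.

```latex
\begin{align*}
  &\text{Location: } \pi(\mu)\,d\mu = \pi(\mu+b)\,d\mu
    \;\Rightarrow\; \pi(\mu) \propto \text{const},\\
  &\text{Scale: } \pi(\sigma)\,d\sigma = \pi(a\sigma)\,d(a\sigma)
    \;\Rightarrow\; \pi(\sigma) \propto 1/\sigma,\\
  &\text{Maximum entropy under } \mathbb{E}[f_k(X)] = F_k:\quad
    p(x) \propto \exp\!\Big(-\sum_k \lambda_k f_k(x)\Big),
\end{align*}
```

with the Lagrange multipliers \(\lambda_k\) determined by the constraints.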
Philosophy of Statistics
Philosophy of Science: An Encyclopedia, 2006
"... Error statistics, as we are using that term, has a dual dimension involving philosophy and methodology. It refers to a standpoint regarding both: 1. a cluster of statistical tools, their interpretation and justification, 2. a general philosophy of science, and the roles probability plays in inductiv ..."
Abstract

Cited by 6 (4 self)
Error statistics, as we are using that term, has a dual dimension involving philosophy and methodology. It refers to a standpoint regarding both (1) a cluster of statistical tools, together with their interpretation and justification, and (2) a general philosophy of science and the roles probability plays in inductive inference. Adequately appraising the error statistical approach, and comparing it to other philosophies of statistics, requires understanding the complex interconnections between the methodological and philosophical dimensions in (1) and (2). To make this entry useful while keeping it to a manageable length, we restrict our main focus to (1), the error statistical philosophy. We will, however, aim to bring out enough of the interplay between the philosophical, methodological, and statistical issues to elucidate longstanding conceptual, technical, and epistemological debates surrounding both these dimensions. Even with this restriction, we are identifying a huge territory marked by generations of recurring controversy about how to specify and interpret statistical methods. Understandably, standard explications…
Basic Elements and Problems of Probability Theory
1999
"... After a brief review of ontic and epistemic descriptions, and of subjective, logical and statistical interpretations of probability, we summarize the traditional axiomatization of calculus of probability in terms of Boolean algebras and its settheoretical realization in terms of Kolmogorov probabil ..."
Abstract

Cited by 6 (0 self)
After a brief review of ontic and epistemic descriptions, and of subjective, logical and statistical interpretations of probability, we summarize the traditional axiomatization of the calculus of probability in terms of Boolean algebras and its set-theoretical realization in terms of Kolmogorov probability spaces. Since the axioms of mathematical probability theory say nothing about the conceptual meaning of “randomness”, one considers probability as a property of the generating conditions of a process, so that one can relate randomness to predictability (or retrodictability). In the measure-theoretical codification of stochastic processes, genuine chance processes can be defined rigorously as so-called regular processes which do not allow long-term prediction. We stress that stochastic processes are equivalence classes of individual point functions, so that they do not refer to individual processes but only to an ensemble of statistically equivalent individual processes. Less popular but conceptually more important than statistical descriptions are individual descriptions, which refer to individual chaotic processes. First, we review the individual description based on the generalized harmonic analysis of Norbert Wiener. It allows the definition of individual purely chaotic processes which can be interpreted as trajectories of regular statistical stochastic processes. Another individual description refers to algorithmic procedures which connect the intrinsic randomness of a finite sequence with the complexity of the shortest program necessary to produce the sequence. Finally, we ask why there can be laws of chance. We argue that random events fulfill the laws of chance if and only if they can be reduced to (possibly hidden) deterministic events. This mathematical result may elucidate the fact that not all non-predictable events can be grasped by the methods of mathematical probability theory.
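For reference, the set-theoretical codification the abstract refers to is the standard Kolmogorov axiomatization (textbook material, not specific to this paper):

```latex
\begin{align*}
  &\text{A probability space is a triple } (\Omega, \mathcal{F}, P),
    \text{ with } \mathcal{F} \text{ a } \sigma\text{-algebra on } \Omega,\\
  &P(A) \ge 0 \;\; \forall A \in \mathcal{F}, \qquad P(\Omega) = 1,\\
  &P\Big(\bigcup_{n=1}^{\infty} A_n\Big) = \sum_{n=1}^{\infty} P(A_n)
    \quad \text{for pairwise disjoint } A_n \in \mathcal{F}.
\end{align*}
```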
Experimental practice and an error statistical account of evidence
Philosophy of Science, 2000
"... you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, noncommercial use. Please contact the publisher regarding any further use of this work. Publisher contact inform ..."
Abstract

Cited by 5 (4 self)
 Add to MetaCart
you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, noncommercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at
The Applications of Genetic Algorithms in Cryptanalysis
1996
"... This thesis describes a method of deciphering messages encrypted with rotor machines utilising a Genetic Algorithm to search the keyspace. A fitness measure based on the phi test for non randomness of text is described and the results show that an unknown three rotor machine can generally be cryptan ..."
Abstract

Cited by 3 (0 self)
This thesis describes a method of deciphering messages encrypted with rotor machines, utilising a Genetic Algorithm to search the keyspace. A fitness measure based on the phi test for non-randomness of text is described, and the results show that an unknown three-rotor machine can generally be cryptanalysed with about 4000 letters of ciphertext. The results are compared to those given using a previously published technique and found to be superior.
Acknowledgements: I would like to thank my supervisors, Vic Rayward-Smith and Geoff McKeown, for their help and encouragement.
Contents: 1 Introduction; 2 Statistical Inference; 2.1 Introduction; 2.2 Uncertainty; 2.2.1 Rules of Probability; 2.2.2 Frequency Probability; 2.2.3 Subjective Probability; 2.3 Modelling…
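A minimal sketch of a phi-test fitness of the kind the abstract describes; the function names and the English index-of-coincidence constant are assumptions for illustration, not taken from the thesis:

```python
from collections import Counter

ENGLISH_IC = 0.0667  # approximate index of coincidence for English text (assumed)
RANDOM_IC = 1 / 26   # expected index of coincidence for uniformly random letters

def phi(text: str) -> float:
    """Phi statistic: sum of f_i * (f_i - 1) over letter frequencies f_i."""
    counts = Counter(c for c in text.upper() if c.isalpha())
    return sum(f * (f - 1) for f in counts.values())

def phi_fitness(candidate_plaintext: str) -> float:
    """Fitness for a GA individual (a candidate decryption): normalise the
    phi statistic so random-looking text scores near 0.0 and English-like
    text scores near 1.0."""
    n = sum(1 for c in candidate_plaintext if c.isalpha())
    if n < 2:
        return 0.0
    expected_random = n * (n - 1) * RANDOM_IC
    expected_english = n * (n - 1) * ENGLISH_IC
    return (phi(candidate_plaintext) - expected_random) / (
        expected_english - expected_random)
```

The GA would then rank candidate rotor settings by this score, favouring settings whose trial decryptions deviate most from randomness in the direction of natural-language letter statistics.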
Introductory Remarks on Metastatistics for the Practically Minded Non-Bayesian Regression Runner
2008
"... ..."
How to discount doublecounting when it counts: some clarifications
British Journal for the Philosophy of Science, 2008
"... The issues of doublecounting, useconstructing, and selection effects have long been the subject of debate in the philosophical as well as statistical literature. I have argued that it is the severity, stringency, or probativeness of the test—or lack of it—that should determine if a doubleuse of d ..."
Abstract

Cited by 1 (0 self)
The issues of double-counting, use-constructing, and selection effects have long been the subject of debate in the philosophical as well as the statistical literature. I have argued that it is the severity, stringency, or probativeness of the test, or the lack of it, that should determine whether a double-use of data is admissible. Hitchcock and Sober ([2004]) question whether this ‘severity criterion’ can perform its intended job. I argue that their criticisms stem from a flawed interpretation of the severity criterion. Taking their criticism as a springboard, I elucidate some of the central examples that have long been controversial, and clarify how the severity criterion is properly applied to them.
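One common schematic rendering of the severity criterion (a paraphrase of Mayo's formulation, not a formula from this paper): hypothesis H passes a severe test T with outcome x_0 when x_0 accords with H and

```latex
\mathrm{SEV}(T, x_0, H) \;=\;
  P\big(\, d(X) \text{ accords less well with } H \text{ than } d(x_0)
  \text{ does} \;;\; H \text{ false} \,\big)
```

is high, where d is the test statistic. A double-use of data is then admissible exactly when this probability remains high despite the data having guided the construction of H.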
Reconceiving Machine Learning: Aims and Background
"... Beware of the man of one method or one instrument, either experimental or theoretical. He tends to become method oriented rather than problem oriented. The methodoriented man is shackled: the problemoriented man is at least reaching freely toward what is most important. 52 Context Machine Learning ..."
Abstract
"Beware of the man of one method or one instrument, either experimental or theoretical. He tends to become method-oriented rather than problem-oriented. The method-oriented man is shackled: the problem-oriented man is at least reaching freely toward what is most important."
Context: Machine Learning is a sub-discipline of Information and Communication Technology (ICT) that develops the technologies for machines to recognise and learn patterns in data. It is distinct from, although related to, statistics; it can be differentiated by its focus on creating technology rather than the human-centred analysis of data. It is the science and engineering behind Data Mining. Machine learning is pervasive: it plays a key role in all stages of the scientific process and across diverse fields including bioinformatics, engineering and finance. It is widely accepted that ICT plays an enabling role across almost all technological disciplines. Analogously, Machine Learning plays an enabling role across most parts of ICT, from embedded to enterprise systems, and consequently is a crucial enabler of the Digital Economy. Vast quantities of data are now routinely collected and stored because it is affordable to do so. Machine learning makes sense of this data flood.
The Problem: The massive reduction in the cost of collecting, storing, transporting and processing…
CLASSICAL STATIC SYSTEM RELIABILITY AND ADJUSTED STATIC SYSTEM RELIABILITY WITH PRIOR & POSTERIOR VARIATIONS
"... This paper considers an important concept which suggested to take stock of the over estimation in reliability characteristics or under estimation of hazard rate. Using this concept, the study considers the analysis of the reliability characteristics of an exponential lifetime model when prior variat ..."
Abstract
This paper considers an important concept suggested to take stock of the overestimation of reliability characteristics, or the underestimation of the hazard rate. Using this concept, the study analyses the reliability characteristics of an exponential lifetime model when prior variations in its parameters are suspected.
Key Words: Robustness, adjustment factor, updated and predictive basic distributions
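For orientation, the baseline quantities for the exponential lifetime model are standard; the gamma-prior predictive shown is a common textbook illustration of how prior variation in the rate propagates into the reliability, not the paper's adjustment-factor method:

```latex
\begin{align*}
  &f(t) = \lambda e^{-\lambda t}, \qquad
   R(t) = P(T > t) = e^{-\lambda t}, \qquad
   h(t) = \frac{f(t)}{R(t)} = \lambda,\\
  &\lambda \sim \mathrm{Gamma}(a, b) \;\Rightarrow\;
   \tilde{R}(t) = \int_0^\infty e^{-\lambda t}\,
     \frac{b^{a}\lambda^{a-1}e^{-b\lambda}}{\Gamma(a)}\,d\lambda
   = \Big(\frac{b}{b+t}\Big)^{a}.
\end{align*}
```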