Results 1–10 of 53
A Theory of Program Size Formally Identical to Information Theory
, 1975
Abstract
Cited by 405 (17 self)
A new definition of program-size complexity is made. H(A,B/C,D) is defined to be the size in bits of the shortest self-delimiting program for calculating strings A and B if one is given a minimal-size self-delimiting program for calculating strings C and D. This differs from previous definitions: (1) programs are required to be self-delimiting, i.e. no program is a prefix of another, and (2) instead of being given C and D directly, one is given a program for calculating them that is minimal in size. Unlike previous definitions, this one has precisely the formal properties of the entropy concept of information theory. For example, H(A,B) = H(A) + H(B/A) + O(1). Also, if a program of length k is assigned measure 2^{-k}, then H(A) = -log_2 (the probability that the standard universal computer will calculate A) + O(1). Key Words and Phrases: computational complexity, entropy, information theory, instantaneous code, Kraft inequality, minimal program, probab...
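The self-delimiting requirement in (1) is what makes the 2^{-k} measure well behaved: for any prefix-free set of programs, the Kraft inequality guarantees that the total weight sum of 2^{-length} is at most 1, so it can serve as a (sub)probability. A minimal illustrative sketch (hypothetical code, not from the paper):

```python
# Check the Kraft inequality for a prefix-free (self-delimiting) code.
# Assigning weight 2**-k to a codeword of length k, prefix-freeness
# guarantees the total weight is at most 1 (exactly 1 for complete codes).

def is_prefix_free(codewords):
    """True if no codeword is a proper prefix of another."""
    return not any(
        a != b and b.startswith(a) for a in codewords for b in codewords
    )

def kraft_sum(codewords):
    """Sum of 2**-len(w) over all codewords."""
    return sum(2 ** -len(w) for w in codewords)

code = ["0", "10", "110", "111"]   # a complete prefix-free code
assert is_prefix_free(code)
assert kraft_sum(code) == 1.0      # 1/2 + 1/4 + 1/8 + 1/8

bad = ["0", "01"]                  # "0" is a prefix of "01"
assert not is_prefix_free(bad)
```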
Making sense of randomness: Implicit encoding as a basis for judgment
 Psychological Review
, 1997
Abstract
Cited by 53 (1 self)
People attempting to generate random sequences usually produce more alternations than expected by chance. They also judge over-alternating sequences as maximally random. In this article, the authors review findings, implications, and explanatory mechanisms concerning subjective randomness. The authors next present the general approach of the mathematical theory of complexity, which identifies the length of the shortest program for reproducing a sequence with its degree of randomness. They describe three experiments, based on mean group responses, indicating that the perceived randomness of a sequence is better predicted by various measures of its encoding difficulty than by its objective randomness. These results seem to imply that, in accordance with the complexity view, judging the extent of a sequence's randomness is based on an attempt to mentally encode it. The experience of randomness may result when this attempt fails. Judging a situation as more or less random is often the key to important cognitions and behaviors. Perceiving a situation as non-chance calls for explanations, and it marks the onset of inductive inference (Lopes, 1982). Lawful environments encourage a coping orientation. One may try to control a situation
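The over-alternation bias is easy to quantify: in a fair binary sequence of length n, the expected number of alternations (adjacent unequal symbols) is (n - 1)/2, a baseline that generated and "maximally random-looking" sequences tend to exceed. A hypothetical simulation (not from the paper):

```python
import random

def alternations(seq):
    """Count adjacent positions where the symbol changes."""
    return sum(1 for a, b in zip(seq, seq[1:]) if a != b)

random.seed(0)
n, trials = 21, 10_000
mean_alt = sum(
    alternations([random.randint(0, 1) for _ in range(n)])
    for _ in range(trials)
) / trials

# Expected alternations for a fair coin: (n - 1) / 2 = 10
assert abs(mean_alt - (n - 1) / 2) < 0.2

# A strictly alternating sequence, of the kind judged "most random":
assert alternations([0, 1] * 10 + [0]) == 20   # far above the chance value 10
```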
System Identification, Approximation and Complexity
 International Journal of General Systems
, 1977
Abstract
Cited by 36 (22 self)
This paper is concerned with establishing broadly-based system-theoretic foundations and practical techniques for the problem of system identification that are rigorous, intuitively clear and conceptually powerful. A general formulation is first given in which two order relations are postulated on a class of models: a constant one of complexity; and a variable one of approximation induced by an observed behaviour. An admissible model is such that any less complex model is a worse approximation. The general problem of identification is that of finding the admissible subspace of models induced by a given behaviour. It is proved under very general assumptions that, if deterministic models are required, then nearly all behaviours require models of nearly maximum complexity. A general theory of approximation between models and behaviour is then developed based on subjective probability concepts and semantic information theory. The role of structural constraints such as causality, locality, finite memory, etc., is then discussed as rules of the game. These concepts and results are applied to the specific problem of stochastic automaton, or grammar, inference. Computational results are given to demonstrate that the theory is complete and fully operational. Finally the formulation of identification proposed in this paper is analysed in terms of Klir's epistemological hierarchy and both are discussed in terms of the rich philosophical literature on the acquisition of knowledge.
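The admissibility criterion (a model is admissible only if every strictly less complex model approximates worse) is, in effect, a Pareto-front condition over (complexity, approximation-error) pairs. A minimal sketch with made-up scores, not taken from the paper:

```python
def admissible(models):
    """Keep (complexity, error) pairs for which every strictly less
    complex model has strictly worse (larger) approximation error."""
    return [
        (c, e) for (c, e) in models
        if all(e2 > e for (c2, e2) in models if c2 < c)
    ]

# Hypothetical candidates: (complexity, approximation error)
candidates = [(1, 0.9), (2, 0.5), (3, 0.6), (4, 0.1)]

# (3, 0.6) is inadmissible: the simpler model (2, 0.5) approximates better.
assert admissible(candidates) == [(1, 0.9), (2, 0.5), (4, 0.1)]
```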
Errors and difficulties in understanding elementary statistical concepts
 International Journal of Mathematics, Education, Science and Technology
, 1994
Abstract
Cited by 33 (1 self)
This paper presents a survey of the reported research about students' errors, difficulties and conceptions concerning elementary statistical concepts. Information related to the learning processes is essential to curricular design in this branch of mathematics. In particular, the identification of errors and difficulties which students display is needed in order to organize statistical training programmes and to prepare didactical situations which allow the students to overcome their cognitive obstacles. This paper does not attempt to report on probability concepts, an area which has received much attention, but concentrates on other statistical concepts, which have received little attention hitherto.
Intuitions about sample size: The empirical law of large numbers
 Journal of Behavioral Decision Making
, 1997
Abstract
Cited by 29 (5 self)
According to Jacob Bernoulli, even the "stupidest man" knows that the larger one's sample of observations, the more confidence one can have in being close to the truth about the phenomenon observed. Two and a half centuries later, psychologists empirically tested people's intuitions about sample size. One group of such studies found participants attentive to sample size; another found participants ignoring it. We suggest an explanation for a substantial part of these inconsistent findings. We propose the hypothesis that human intuition conforms to the "empirical law of large numbers" and distinguish between two kinds of tasks: one that can be solved by this intuition (frequency distributions) and one for which it is not sufficient (sampling distributions). A review of the literature reveals that this distinction can explain a substantial part of the apparently inconsistent results. Key Words: sample size; law of large numbers; sampling distribution; frequency distribution. Jacob Bernoulli, who formulated the first version of the law of large numbers, asserted in a letter to Leibniz that "even the stupidest man knows by some instinct of nature per se and by no previous instruction" that the greater the number of confirming observations, the surer the conjecture (Gigerenzer et al., 1989, p. 29). Two and a half centuries later, psychologists began to study whether people actually take into account information about sample size in judgements of various kinds. The results turned out to be contradictory: One group of studies seemed to confirm, a second to disconfirm, the "instinct of nature" assumed by Bernoulli. In this paper, we propose an explanation that accounts for a substantial part of the contradictory results reported in the literature.
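The sampling-distribution side of the distinction can be made concrete in a simulation: the spread of the sampling distribution of the mean shrinks roughly like 1/sqrt(n) as sample size grows, which is precisely what the "empirical law of large numbers" intuition does not deliver. A hypothetical sketch, not from the paper:

```python
import random
import statistics

random.seed(1)

def sample_mean(n):
    """Mean of n fair-coin flips coded as 0/1."""
    return sum(random.randint(0, 1) for _ in range(n)) / n

def sampling_sd(n, reps=2000):
    """Standard deviation of the simulated sampling distribution
    of the mean for samples of size n."""
    return statistics.pstdev(sample_mean(n) for _ in range(reps))

sd_small, sd_large = sampling_sd(10), sampling_sd(100)

# Larger samples give a tighter sampling distribution, roughly 1/sqrt(n):
assert sd_large < sd_small
assert abs(sd_small / sd_large - (100 / 10) ** 0.5) < 1.0
```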
Interval-Valued Probabilities
, 1998
Abstract
Cited by 27 (1 self)
a′/h′ in the diagram. The sawtooth line reflects the fact that even when the principle of indifference can be applied, there may be arguments whose strength can be bounded no more precisely than by an adjacent pair of indifference arguments. Note that a/h in the diagram is bounded numerically only by 0.0 and the strength of a″/h″. Keynes' ideas were taken up by B. O. Koopman [14, 15, 16], who provided an axiomatization for Keynes' probability values. The axioms are qualitative, and reflect what Keynes said about probability judgment. (It should be remembered that for Keynes probability judgment was intended to be objective in the sense that logic is objective. Although different people may accept different premises, whether or not a conclusion follows logically from a given set of premises is objective. Though Ramsey [26] attacked this aspect of Keynes' theory, it can be argued
Training teachers to teach probability
 Journal of Statistical Education
, 2004
Abstract
Cited by 17 (1 self)
In this paper we analyse the reasons why teaching probability is difficult for mathematics teachers, we describe the contents needed in the didactical preparation of teachers to teach probability, and we present examples of activities to carry out this training. Nowadays probability and statistics are part of the mathematics curricula for primary and secondary school in many countries. The reasons to include this teaching have been repeatedly highlighted over the past 20 years (e.g. Holmes, 1980; Hawkins et al., 1991; Vere-Jones, 1995), and include the usefulness of statistics and probability for daily life, its instrumental role in other disciplines, the need for a basic stochastic knowledge in many professions and its role in developing critical reasoning. However, teaching probability and statistics is not easy for mathematics teachers. Primary and secondary level mathematics teachers frequently lack specific preparation in statistics education. For example, in Spain, prospective secondary teachers with a major in Mathematics do not receive specific training in statistics education. The situation is even worse for primary teachers, most of whom have not had basic training in statistics, and this could be extended to many countries. There can be little support from textbooks and curriculum documents prepared for primary and secondary teachers, because
Conjoint Probabilistic Subband Modeling
 MASSACHUSETTS INSTITUTE OF TECHNOLOGY
, 1997
Abstract
Cited by 16 (0 self)
A new approach to high-order conditional probability density estimation is developed, based on a partitioning of the conditioning space via decision trees. The technique is applied to image compression, image restoration, and texture synthesis, and the results are compared with those obtained by standard mixture density and linear regression models. By applying the technique to subband-domain processing, some evidence is provided to support the following statement: the appropriate trade-off between spatial and spectral localization in linear preprocessing shifts towards greater spatial localization when subbands are processed in a way that exploits interdependence.
On modeling uncertainty with interval structures
 Computational Intelligence
, 1995
Abstract
Cited by 14 (8 self)
In this paper, we introduce the notion of interval structures in an attempt to establish a unified framework for representing uncertain information. Two views are suggested for the interpretation of an interval structure. A typical example using the compatibility view is the rough-set model in which the lower and upper approximations form an interval structure. Incidence calculus adopts the allocation view in which an interval structure is defined by the tightest lower and upper incidence bounds. The relationship between interval structures and interval-based numeric belief and plausibility functions is also examined. As an application of the proposed model, an algorithm is developed for computing the tightest incidence bounds.
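In the rough-set reading of an interval structure, a target set is bracketed by a lower approximation (the union of equivalence classes wholly inside it) and an upper approximation (the union of classes that merely intersect it). A minimal sketch with a made-up universe and partition, not from the paper:

```python
def approximations(partition, target):
    """Rough-set lower/upper approximations of `target` with respect
    to a partition of the universe into equivalence classes."""
    lower = {x for block in partition if block <= target for x in block}
    upper = {x for block in partition if block & target for x in block}
    return lower, upper

blocks = [frozenset({1, 2}), frozenset({3, 4}), frozenset({5})]
target = {1, 2, 3}

lower, upper = approximations(blocks, target)
assert lower == {1, 2}           # only {1,2} lies wholly inside target
assert upper == {1, 2, 3, 4}     # {3,4} intersects target; {5} does not
assert lower <= target <= upper  # the bracketing interval structure
```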
The Maximum Entropy Approach and Probabilistic IR Models
 ACM TRANSACTIONS ON INFORMATION SYSTEMS
, 1998
Abstract
Cited by 12 (0 self)
The Principle of Maximum Entropy is discussed, and two classic probabilistic models of information retrieval, the Binary Independence Model of Robertson and Sparck Jones and the Combination Match Model of Croft and Harper, are derived using the maximum entropy approach. The assumptions on which the classical models are based are not made. In their place, the probability distribution of maximum entropy consistent with a set of constraints is determined. It is argued that this subjectivist approach is more philosophically coherent than the frequentist conceptualization of probability that is often assumed as the basis of probabilistic modeling, and that this philosophical stance has important practical consequences with respect to the realization of information retrieval research.
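The maximum-entropy step itself is easy to illustrate: among all distributions satisfying the given constraints, pick the one maximizing H(p) = -sum p_i log2 p_i; under the sole normalization constraint the answer is the uniform distribution. A hypothetical numeric sketch (a toy grid search, not the paper's IR derivation):

```python
import math
import itertools

def entropy(p):
    """Shannon entropy in bits; terms with p_i = 0 contribute 0."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Enumerate 3-outcome distributions on a grid of thirtieths; with only
# the normalization constraint, entropy is maximized by the uniform one.
grid = [k / 30 for k in range(31)]
feasible = [(a, b, 1 - a - b) for a, b in itertools.product(grid, grid)
            if a + b <= 1]
best = max(feasible, key=entropy)

assert all(abs(x - 1 / 3) < 1e-9 for x in best)          # uniform wins
assert abs(entropy(best) - math.log2(3)) < 1e-6          # H = log2(3) bits
```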