Results 1 – 10 of 10
NONCOMPUTABLE CONDITIONAL DISTRIBUTIONS
Cited by 7 (3 self)
Abstract. We study the computability of conditional probability, a fundamental notion in probability theory and Bayesian statistics. In the elementary discrete setting, a ratio of probabilities defines conditional probability. In more general settings, conditional probability is defined axiomatically, and the search for more constructive definitions is the subject of a rich literature in probability theory and statistics. However, we show that in general one cannot compute conditional probabilities. Specifically, we construct a pair of computable random variables (X, Y) in the unit interval whose conditional distribution P[Y | X] encodes the halting problem. Nevertheless, probabilistic inference has proven remarkably successful in practice, even in infinite-dimensional continuous settings. We prove several results giving general conditions under which conditional distributions are computable. In the discrete or dominated setting, under suitable computability hypotheses, conditional distributions are computable. Likewise, conditioning is a computable operation in the presence of certain additional structure, such as independent absolutely continuous noise.
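The elementary discrete setting mentioned in the abstract, where a ratio of probabilities defines conditional probability, can be made concrete with a small sketch. The joint distribution below is a made-up toy example, not taken from the paper:

```python
from fractions import Fraction
from itertools import product

# Toy joint distribution of a discrete pair (X, Y): X is a fair coin flip
# and Y = (X + N) mod 2 for an independent fair flip N.
joint = {}
for x, n in product([0, 1], repeat=2):
    y = (x + n) % 2
    joint[(x, y)] = joint.get((x, y), Fraction(0)) + Fraction(1, 4)

def conditional(y, x):
    """P[Y = y | X = x] as the ratio P[X = x, Y = y] / P[X = x]."""
    p_x = sum(p for (xx, _), p in joint.items() if xx == x)
    return joint.get((x, y), Fraction(0)) / p_x

print(conditional(1, 0))  # 1/2
```

The ratio is well defined only where P[X = x] > 0, which is exactly why this definition does not extend to conditioning a continuous X on a single point; that gap is what the axiomatic (Kolmogorov) definition, and the paper's computability analysis of it, address.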
ON THE COMPUTABILITY OF CONDITIONAL PROBABILITY
Cited by 3 (3 self)
Abstract. We study the problem of computing conditional probabilities, a fundamental operation in statistics and machine learning. In the elementary discrete setting, a ratio of probabilities defines conditional probability; in more general settings it is defined axiomatically, and the search for more constructive definitions is the subject of a rich literature in probability theory and statistics. In the discrete or dominated setting, under suitable computability hypotheses, conditional probabilities are computable. However, we show that in general one cannot compute conditional probabilities. We do this by constructing a pair of computable random variables in the unit interval whose conditional distribution encodes the halting problem at almost every point. We show that this result is tight, in the sense that given an oracle for the halting problem, one can compute this conditional distribution. On the other hand, we show that conditioning in abstract settings is computable in the presence of certain additional structure, such as independent absolutely continuous noise.
Algorithmic tests and randomness with respect to a class of measures
, 2011
Cited by 2 (1 self)
This paper offers some new results on randomness with respect to classes of measures, along with a didactical exposition of their context based on results that appeared elsewhere. We start with the reformulation of the Martin-Löf definition of randomness (with respect to computable measures) in terms of randomness deficiency functions. A formula that expresses the randomness deficiency in terms of prefix complexity is given (in two forms). Some approaches that go in the other direction (from deficiency to complexity) are considered. The notion of Bernoulli randomness (independent coin tosses for an asymmetric coin with some probability p of heads) is defined. It is shown that a sequence is Bernoulli if it is random with respect to some Bernoulli measure.
SCHNORR RANDOMNESS AND THE LEBESGUE DIFFERENTIATION THEOREM
, 2012
Cited by 2 (0 self)
We exhibit a close correspondence between L1-computable functions and Schnorr tests. Using this correspondence, we prove that a point x ∈ [0,1]^d is Schnorr random if and only if the Lebesgue Differentiation Theorem holds at x for all L1-computable functions f ∈ L1([0,1]^d).
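The theorem above states that averages of f over shrinking neighborhoods of x converge to f(x). A quick numerical illustration in d = 1 (the function here is continuous, a far stronger hypothesis than L1; this only makes the statement concrete, it reflects nothing of the proof):

```python
def interval_average(f, x, r, steps=10_000):
    """Approximate the average of f over [x - r, x + r] by a midpoint Riemann sum."""
    h = 2 * r / steps
    total = sum(f(x - r + (i + 0.5) * h) for i in range(steps)) * h
    return total / (2 * r)

f = lambda t: t * t
x = 0.5
for r in [0.1, 0.01, 0.001]:
    print(r, interval_average(f, x, r))  # tends to f(0.5) = 0.25 as r shrinks
```

For f(t) = t^2 the average over [x − r, x + r] is exactly x^2 + r^2/3, so the printed values approach 0.25 at rate r^2.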
Mathieu institution
Abstract. A theorem of Kučera states that given a Martin-Löf random infinite binary sequence ω and an effectively open set A of measure less than 1, some tail of ω is not in A. We show that this result can be seen as an effective version of Birkhoff’s ergodic theorem (in a special case). We prove several results in the same spirit and generalize them via an effective ergodic theorem for bijective ergodic maps.
A constructive law of large numbers with application to countable Markov chains
, 2010
Abstract. Let X1, X2, ... be a sequence of identically distributed, pairwise independent random variables with distribution P, and let the expected value µ be finite. Let Sn = X1 + ⋯ + Xn. It is well-known that Sn/n converges to µ almost surely. We show that this convergence is effective in (P, µ). In particular, if P and µ are computable then the convergence is effective. On the other hand, if the convergence is effective in P then µ is computable from P. The effectiveness of convergence is detached in the sense that nothing can be inferred about the speed of convergence in the law of large numbers from the speed of convergence in computing P and µ. This theorem can be used to show an effective renewal theorem, which in turn can be used to prove an effective ergodic theorem for countable Markov chains. The last result is a special case of effective ergodic theorems proven by Avigad–Gerhardy–Towsner and Galatolo–Hoyrup–Rojas, but we hope that the direct constructivization of the probability-theory proofs is still useful.
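The classical convergence Sn/n → µ that the abstract effectivizes is easy to watch numerically. The sketch below uses Bernoulli(1/2) draws (so µ = 1/2); a single simulated run can only illustrate the classical statement, not the effective rates the paper is about:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def running_mean(n):
    """S_n / n for n independent Bernoulli(1/2) samples."""
    s = sum(random.randint(0, 1) for _ in range(n))
    return s / n

for n in [100, 10_000, 1_000_000]:
    print(n, running_mean(n))  # drifts toward mu = 0.5 as n grows
```

By Chebyshev, the deviation |Sn/n − 1/2| is of order 1/(2√n) with high probability, which is the kind of quantitative bound an effective convergence statement packages into a computable modulus.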
Pathak’s “computational” Lebesgue differentiation theorem in context
, 2009
These are working notes on placing Pathak’s [5] computability-flavored version of the Lebesgue differentiation theorem into the framework developed by Hoyrup and Rojas [3, 1, 2] for working with algorithmic randomness in “computable metric spaces.” I wrote this up for my own edification, as I am only just becoming familiar with algorithmic randomness in general.
DOI: 10.1016/j.ic.2011.10.006
, 2011
Author manuscript, published in "Information and Computation" (2011).