Results 1–10 of 40
A Theory of Program Size Formally Identical to Information Theory
, 1975
Abstract

Cited by 402 (17 self)
A new definition of program-size complexity is made. H(A,B/C,D) is defined to be the size in bits of the shortest self-delimiting program for calculating strings A and B if one is given a minimal-size self-delimiting program for calculating strings C and D. This differs from previous definitions: (1) programs are required to be self-delimiting, i.e. no program is a prefix of another, and (2) instead of being given C and D directly, one is given a program for calculating them that is minimal in size. Unlike previous definitions, this one has precisely the formal properties of the entropy concept of information theory. For example, H(A,B) = H(A) + H(B/A) + O(1). Also, if a program of length k is assigned measure 2^-k, then H(A) = -log2 (the probability that the standard universal computer will calculate A) + O(1). Key Words and Phrases: computational complexity, entropy, information theory, instantaneous code, Kraft inequality, minimal program, probab...
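The self-delimiting requirement above, that no program be a prefix of another, is exactly the prefix-free condition of the Kraft inequality, which is what allows assigning each length-k program measure 2^-k with total measure at most 1. A minimal sketch in Python (the code words are invented for illustration, not drawn from the paper):

```python
from itertools import combinations

def is_prefix_free(words):
    """True if no code word is a prefix of another (self-delimiting)."""
    return not any(a.startswith(b) or b.startswith(a)
                   for a, b in combinations(words, 2))

def kraft_sum(words):
    """Sum of 2^-len(w); at most 1 for any prefix-free binary code."""
    return sum(2.0 ** -len(w) for w in words)

# Hypothetical set of self-delimiting "programs"
programs = ["0", "10", "110", "111"]
assert is_prefix_free(programs)
assert kraft_sum(programs) <= 1.0
```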
On the Length of Programs for Computing Finite Binary Sequences
 Journal of the ACM
, 1966
Abstract

Cited by 299 (8 self)
The use of Turing machines for calculating finite binary sequences is studied from the point of view of information theory and the theory of recursive functions. Various results are obtained concerning the number of instructions in programs. A modified form of Turing machine is studied from the same point of view. An application to the problem of defining a patternless sequence is proposed in terms of the concepts here developed. Introduction In this paper the Turing machine is regarded as a general-purpose computer and some practical questions are asked about programming it. Given an arbitrary finite binary sequence, what is the length of the shortest program for calculating it? What are the properties of those binary sequences of a given length which require the longest programs? Do most of the binary sequences of a given length require programs of about the same length? The questions posed above are answered in Part 1. In the course of answering them, the logical ...
A Comparison of Known Codes, Random Codes, and the Best Codes
 IEEE Trans. Inform. Theory
, 1998
Abstract

Cited by 17 (0 self)
This paper calculates new bounds on the size of the performance gap between random codes and the best possible codes. The first result shows that, for large block sizes, the ratio of the error probability of a random code to the sphere-packing lower bound on the error probability of every code on the binary symmetric channel (BSC) is small for a wide range of useful crossover probabilities. Thus even far from capacity, random codes have nearly the same error performance as the best possible long codes. The paper also demonstrates that a small reduction k − k̃ in the number of information bits conveyed by a codeword will make the error performance of an (n, k̃) random code better than the sphere-packing lower bound for an (n, k) code as long as the channel crossover probability is somewhat greater than a critical probability. For example, the sphere-packing lower bound for a long (n, k), rate-1/2 code will exceed the error probability of an (n, k̃) random code if k − k̃ ≥ 10 and the crossover probability is between 0.035 and 0.11 = H^-1(1/2). Analogous results are presented for the binary erasure channel (BEC) and the additive white Gaussian noise (AWGN) channel. The paper also presents substantial numerical evaluation of the performance of random codes and existing standard lower bounds for the BEC, BSC, and the AWGN channel. These last results provide a useful standard against which to measure many popular codes including turbo codes, e.g., there exist turbo codes that perform within 0.6 dB of the bounds over a wide range of block lengths.
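The critical probability 0.11 = H^-1(1/2) quoted in the example is the inverse of the binary entropy function evaluated at the code rate. As a numerical sketch (my own bisection, assuming only the standard definition of H, not code from the paper), it can be recovered as follows:

```python
import math

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def inv_binary_entropy(h, tol=1e-12):
    """Inverse of H on [0, 1/2], where H is strictly increasing, by bisection."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binary_entropy(mid) < h:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p_crit = inv_binary_entropy(0.5)  # critical crossover probability for rate 1/2
```

The result is approximately 0.110, matching the figure cited in the abstract.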
Optimal query forgery for private information retrieval
 IEEE Trans. Inform. Theory
, 2010
Abstract

Cited by 10 (4 self)
Abstract—We present a mathematical formulation for the optimization of query forgery for private information retrieval, in the sense that the privacy risk is minimized for a given traffic and processing overhead. The privacy risk is measured as an information-theoretic divergence between the user’s query distribution and the population’s, which includes the entropy of the user’s distribution as a special case. We carefully justify and interpret our privacy criterion from diverse perspectives. Our formulation poses a mathematically tractable problem that bears substantial resemblance to rate-distortion theory. Index Terms—Entropy, Kullback–Leibler divergence, privacy risk, private information retrieval, query forgery.
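The divergence-based privacy risk described above can be illustrated with a toy computation: D(q‖p) between a user's query distribution q and the population's p, which for a uniform population reduces to log2(n) minus the user's entropy, the special case the abstract mentions. All numbers here are invented for illustration, not taken from the paper:

```python
import math

def kl_divergence(q, p):
    """D(q || p) = sum over i of q_i * log2(q_i / p_i), in bits.

    Terms with q_i = 0 contribute nothing by convention."""
    return sum(qi * math.log2(qi / pi) for qi, pi in zip(q, p) if qi > 0)

population = [0.25, 0.25, 0.25, 0.25]   # hypothetical population query profile
user       = [0.70, 0.10, 0.10, 0.10]   # hypothetical user query profile

risk = kl_divergence(user, population)
# Against a uniform population, D(q||u) = log2(n) - H(q)
entropy_q = -sum(qi * math.log2(qi) for qi in user if qi > 0)
```

Forging queries to flatten the user's profile toward the population's drives the divergence, and hence the measured risk, toward zero.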
An estimation of distribution algorithm based on maximum entropy
 GECCO 2004: Genetic and Evolutionary Computation Conference
, 2004
Abstract

Cited by 9 (0 self)
Abstract. Estimation of distribution algorithms (EDAs) are similar to genetic algorithms except that they replace crossover and mutation with sampling from an estimated probability distribution. We develop a framework for estimation of distribution algorithms based on the principle of maximum entropy and the conservation of schema frequencies. An algorithm of this type gives better performance than a standard genetic algorithm (GA) on a number of standard test problems involving deception and epistasis (i.e. Trap and NK).
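The sample-then-re-estimate loop that distinguishes an EDA from a GA can be sketched with the simplest univariate variant (a UMDA-style model on the OneMax problem). This is a generic illustration of the EDA template, not the maximum-entropy algorithm the paper develops; all parameter values are arbitrary:

```python
import random

def umda_onemax(n=30, pop_size=100, elite=50, gens=40, seed=1):
    """Univariate EDA: sample bits independently, select the best,
    re-estimate the bitwise marginals, and repeat."""
    rng = random.Random(seed)
    probs = [0.5] * n                      # initial per-bit distribution
    for _ in range(gens):
        pop = [[1 if rng.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        pop.sort(key=sum, reverse=True)    # OneMax fitness = number of ones
        selected = pop[:elite]
        # Re-estimate each marginal from the selected individuals,
        # clamped away from 0/1 to keep some exploration.
        probs = [min(0.98, max(0.02, sum(ind[i] for ind in selected) / elite))
                 for i in range(n)]
    return max(sum(ind) for ind in pop)

best = umda_onemax()
```

Replacing crossover and mutation with this estimate-and-sample step is the common skeleton of all EDAs; they differ mainly in how the distribution is modeled.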
K.: Generalized oblivious transfer protocols based on noisy channels
 In: Proc. Workshop MMM ACNS 2001. LNCS
, 2001
Abstract

Cited by 6 (1 self)
Abstract. The main cryptographic primitives (Bit Commitment (BC) and Oblivious Transfer (OT) protocols) based on noisy channels have been considered in [1] for the asymptotic case. Non-asymptotic behavior of the BC protocol has been demonstrated in [2]. The current paper provides stricter asymptotic conditions on the Binary Symmetric Channel (BSC) for the OT protocol proposed in [1] to be feasible. We also generalize this protocol using different encoding and decoding methods that require rederiving formulas for the Rényi entropy. The non-asymptotic case (finite length of blocks transmitted between parties) is also presented. Some examples are given to demonstrate that these protocols are in fact reliable and information-theoretically secure. We also discuss the problem of how to extend the 1-out-of-2 OT protocol to a 1-out-of-L OT protocol and how to arrange the BSC connecting the parties. Both BC and OT protocols can be used as components of more complex protocols of greater practical importance, such as “Digital cash”, “Secure election” or “Distance bounding”.
A generalized ‘useful’ information measure and coding theorems
 Soochow J. Math
, 1997
Probability Distributions and Maximum Entropy, http://www.math.uconn.edu/~kconrad/blurbs/entropy.pdf
Abstract

Cited by 4 (0 self)
Historically, the first method of assigning probabilities to the outcomes of a random event was this: when there is no reason to do otherwise, assign all outcomes equal probability. This is called the principle of insufficient reason, or principle of indifference. It corresponds to a decision to use a uniform probability distribution.
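The principle of indifference described above coincides with the maximum-entropy choice: among all distributions on n outcomes, the uniform one attains the largest Shannon entropy, log2(n). A quick numerical check (my own illustration, not from the blurb):

```python
import math

def entropy(dist):
    """Shannon entropy in bits; terms with p = 0 contribute nothing."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

uniform = [1/6] * 6                          # fair die: indifference
skewed  = [0.5, 0.2, 0.1, 0.1, 0.05, 0.05]   # any other assignment

h_uniform = entropy(uniform)                 # equals log2(6), about 2.585 bits
h_skewed = entropy(skewed)                   # strictly smaller
```

Any deviation from uniformity lowers the entropy, which is why "no reason to do otherwise" and "maximum entropy" pick out the same distribution on a finite outcome set.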
On concepts of performance parameters for channels
 IEEE Trans. Inf. Theory
Abstract

Cited by 4 (0 self)
Abstract. Among the most investigated parameters for noisy channels are code size, error probability in decoding, block length; rate, capacity, reliability function; delay, complexity of coding. There are several statements about connections between these quantities. They carry names like “coding theorem”, “converse theorem” (weak, strong, ...), “direct theorem”, “capacity theorem”, “lower bound”, “upper bound”, etc. There are analogous notions for source coding.
Highly Informative Priors
, 1985
Abstract

Cited by 4 (0 self)
INTRODUCTION The statistical problems envisaged in our pedagogy are almost always ones in which we acquire new data D that give evidence concerning some hypotheses H, H', ... (this includes parameter estimation, since H might be the statement that a parameter lies in a certain interval); and we make inferences about them solely from the data. Indeed, Fisher's maxim, "Let the data speak for themselves," seems to imply that it would be wrong, a violation of "scientific objectivity", to allow ourselves to be influenced by other considerations such as prior knowledge about H. Yet the very act of choosing a model (i.e. a sampling distribution conditional on H) is a means of expressing some kind of prior knowledge about the existence and nature of H, and its observable effects. This was noted by John Tukey (1978), who observed that sampling theory is in the curious ...