Complexity distortion theory
, 2003
Abstract

Cited by 29 (2 self)
Complexity distortion theory (CDT) is a mathematical framework providing a unifying perspective on media representation. The key component of this theory is the substitution of the decoder in Shannon’s classical communication model with a universal Turing machine. Using this model, the mathematical framework for examining the efficiency of coding schemes is the algorithmic or Kolmogorov complexity. CDT extends this framework to include distortion by defining the complexity distortion function. We show that despite their different natures, CDT and rate distortion theory (RDT) predict asymptotically the same results, under stationary and ergodic assumptions. This closes the circle of representation models, from probabilistic models of information proposed by Shannon in information and rate distortion theories, to deterministic algorithmic models, proposed by Kolmogorov in Kolmogorov complexity theory and its extension to lossy source coding, CDT.
Algorithmic Complexity and Stochastic Properties of Finite Binary Sequences
, 1999
Abstract

Cited by 18 (0 self)
This paper is a survey of concepts and results related to simple Kolmogorov complexity, prefix complexity and resource-bounded complexity. We also consider a new type of complexity, statistical complexity, closely related to mathematical statistics. Unlike other discoverers of algorithmic complexity, A. N. Kolmogorov's leading motive was to develop, on its basis, a mathematical theory that more adequately substantiates applications of probability theory, mathematical statistics and information theory. Kolmogorov wanted to deduce properties of a random object from its complexity characteristics without use of the notion of probability. In the first part of this paper we present several results in this direction. Though the subsequent development of algorithmic complexity and randomness took a different path, algorithmic complexity has found successful applications in a traditional probabilistic framework. In the second part of the paper we consider applications to the estimation of parameters and the definition of Bernoulli sequences. All considerations have a finite combinatorial character.
Some Notes On Rissanen's Stochastic Complexity
, 1996
Abstract

Cited by 13 (2 self)
A new version of stochastic complexity for a parametric statistical model is derived, based on a class of two-part codes. We show that choosing the quantization in the first step according to the Fisher information is optimal, and we compare our approach to a recent result of Rissanen [10]. An application to robust regression model selection is presented.
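As a rough illustration of the two-part coding idea in this abstract, the following sketch (my own, not taken from the paper; the function name and the choice of a Bernoulli model are illustrative assumptions) computes a two-part code length for a binary string: roughly (1/2) log2 n bits to encode a parameter quantized at the Fisher-information scale 1/sqrt(n), plus n times the empirical entropy to encode the data given that parameter.

```python
import math

def two_part_code_length(bits):
    """Toy two-part code length for a Bernoulli model (sketch):
    part 1 encodes the MLE quantized to ~1/sqrt(n) precision,
    costing about (1/2) * log2(n) bits (the Fisher-information scale);
    part 2 encodes the data at the empirical entropy rate."""
    n = len(bits)
    k = sum(bits)
    param_bits = 0.5 * math.log2(n)  # quantized parameter description
    p = k / n
    if p in (0.0, 1.0):
        data_bits = 0.0              # degenerate empirical distribution
    else:
        data_bits = n * (-p * math.log2(p) - (1 - p) * math.log2(1 - p))
    return param_bits + data_bits
```

For a balanced string of length 16 this gives 2 bits for the parameter plus 16 bits for the data; the (1/2) log n parameter cost is the term the quantization result in the abstract concerns.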
Generativity and Systematicity in Neural Network Combinatorial Learning
, 1993
Abstract

Cited by 12 (0 self)
This thesis addresses a set of problems faced by connectionist learning that have originated from the observation that connectionist cognitive models lack two fundamental properties of the mind: generativity, stemming from the boundless cognitive competence one can exhibit, and systematicity, due to the existence of symmetries within them. Such properties have seldom been seen in neural network models, which have typically suffered from problems of inadequate generalization, as exemplified both by a small number of generalizations relative to training-set sizes and by heavy interference between newly learned items and previously learned information. Symbolic theories, arguing that mental representations have syntactic and semantic structure built from structured combinations of symbolic constituents, can in principle account for these properties (both arise from the sensitivity of structured semantic content with a generative and systematic syntax). This thesis studies the question of whe...
Minimum Expected Length of Fixed-to-Variable Lossless Compression of Memoryless Sources
Abstract

Cited by 12 (4 self)
Abstract—Conventional wisdom states that the minimum expected length for fixed-to-variable length encoding of an n-block memoryless source with entropy H grows as nH + O(1). However, this performance is obtained under the constraint that the code assigned to the whole n-block is a prefix code. Dropping this unnecessary constraint, we show that the minimum expected length grows as nH − (1/2) log n + O(1) unless the source is equiprobable.
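The optimal non-prefix (one-to-one) code underlying this result can be computed directly for small block lengths: sort the block probabilities in decreasing order and give the i-th block a codeword of floor(log2 i) bits. The sketch below (my own illustration; the function name and the i.i.d. Bernoulli source are assumptions, not from the paper) evaluates that expected length, which can then be compared against nH − (1/2) log n.

```python
import math
from itertools import product

def optimal_one_to_one_length(p, n):
    """Expected length of the best one-to-one (non-prefix) code for a
    block of n i.i.d. Bernoulli(p) bits: sort the 2^n block
    probabilities in decreasing order and assign the i-th block a
    codeword of floor(log2 i) bits (i = 1, 2, ...; the empty string
    serves as the length-0 codeword for the most likely block)."""
    probs = sorted(
        (p ** sum(x) * (1 - p) ** (n - sum(x)) for x in product((0, 1), repeat=n)),
        reverse=True,
    )
    return sum(q * math.floor(math.log2(i)) for i, q in enumerate(probs, start=1))
```

For a skewed source (say p = 0.9) and moderate n, this expected length falls visibly below nH, in line with the nH − (1/2) log n + O(1) growth stated above; for the equiprobable case p = 0.5 the savings vanish, matching the abstract's exception.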
A Quick Glance at Quantum Cryptography
, 1998
Abstract

Cited by 10 (2 self)
The recent application of the principles of quantum mechanics to cryptography has led to a remarkable new dimension in secret communication. As a result of these new developments, it is now possible to construct cryptographic communication systems which detect unauthorized eavesdropping should it occur, and which give a guarantee of no eavesdropping should it not occur. Contents: 1. Cryptographic systems before quantum cryptography; 2. Preamble to quantum cryptography; 3. The BB84 quantum cryptographic protocol without noise (3.1 Stage 1: Communication over a quantum channel; 3.2 Stage 2: Communication in two phases over a public channel; 3.2.1 Phase 1 of Stage 2: Extraction of raw key; 3.2.2 Phase 2 of Stage 2: Detection of Eve's intrusion via error detection); 4. The BB84 quantum cryptographic pr... (Partially supported by ARL Contract #DAAL0195P1884, ARO Grant #P38804PHQC, and the LOOP Fund.)
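The raw-key extraction step of BB84 mentioned in the contents (Phase 1 of Stage 2) can be sketched classically: keep only the positions where sender and receiver happened to choose the same basis. The toy simulation below is my own illustration (function name, seeding, and the no-noise, no-eavesdropper setting are all assumptions), not the paper's presentation of the protocol.

```python
import random

def bb84_raw_key(n, seed=0):
    """Toy BB84 sketch (no eavesdropper, no noise): Alice sends n
    qubits, each a random bit encoded in a random basis; Bob measures
    each in a random basis. Positions where the bases agree form the
    raw key; the rest are discarded over the public channel."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]
    bob_bases   = [rng.randint(0, 1) for _ in range(n)]
    # With matching bases Bob recovers Alice's bit exactly; with
    # mismatched bases his outcome would be uniformly random, so
    # those positions carry no shared information and are dropped.
    return [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)
            if ab == bb]
```

On average half the positions survive, so about n/2 raw-key bits remain before the error-detection phase that would reveal Eve's intrusion.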
New Bounds on the Expected Length of One-to-One Codes
 IEEE Trans. on Information Theory
Abstract

Cited by 7 (1 self)
In this correspondence we provide new bounds on the expected length L of a binary one-to-one code for a discrete random variable X with entropy H. We prove that L ≥ H − log(H + 1) − H log(1 + 1/H). This bound improves on previous results. Furthermore, we provide upper bounds on the expected length of the best code as a function of H and the most likely source letter probability. Index Terms — Source coding, one-to-one codes, non-prefix codes. 1 Introduction. Let X be a discrete random variable which assumes values on a countable support set X. A binary encoding for X is a function that maps each element of X to a binary codeword. Without loss of generality, assume that X = {1, 2, ..., N}, where N can be infinite. The probability that X takes the value i is p_i. Throughout this paper we assume, without loss of generality, that p_i ≥ p_{i+1}. Given an encoding, the expected length of the encoding is L(X) = sum_{i=1}^{N} p_i n_i (1), where n_i is the length of the codeword used t...
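The lower bound stated in this abstract can be checked numerically against the optimal one-to-one assignment (i-th most probable value gets a codeword of floor(log2 i) bits). The sketch below is my own check, under the abstract's assumptions; the function names are illustrative.

```python
import math

def one_to_one_expected_length(probs):
    """Expected length of the optimal binary one-to-one code:
    with probabilities sorted in decreasing order, the i-th
    codeword has length floor(log2 i) (the i = 1 codeword is
    the empty string, of length 0)."""
    probs = sorted(probs, reverse=True)
    return sum(p * math.floor(math.log2(i)) for i, p in enumerate(probs, 1))

def lower_bound(probs):
    """The bound from the abstract: L >= H - log(H+1) - H*log(1 + 1/H),
    with logs base 2 and H the entropy of the distribution."""
    H = -sum(p * math.log2(p) for p in probs if p > 0)
    return H - math.log2(H + 1) - H * math.log2(1 + 1 / H)
```

For the dyadic distribution (1/2, 1/4, 1/8, 1/8) the optimal one-to-one expected length is 0.625 bits, comfortably above the bound, and well below the entropy H = 1.75 — the gap prefix-free codes cannot achieve.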
Optimal lossless data compression: Non-asymptotics and asymptotics
 IEEE Transactions on Information Theory
, 2014
Abstract

Cited by 7 (0 self)
Abstract — This paper provides an extensive study of the behavior of the best achievable rate (and other related fundamental limits) in variable-length strictly lossless compression. In the non-asymptotic regime, the fundamental limits of fixed-to-variable lossless compression with and without prefix constraints are shown to be tightly coupled. Several precise, quantitative bounds are derived, connecting the distribution of the optimal code lengths to the source information spectrum, and an exact analysis of the best achievable rate for arbitrary sources is given. Fine asymptotic results are proved for arbitrary (not necessarily prefix) compressors on general mixing sources. Non-asymptotic, explicit Gaussian approximation bounds are established for the best achievable rate on Markov sources. The source dispersion and the source varentropy rate are defined and characterized. Together with the entropy rate, the varentropy rate serves to tightly approximate the fundamental non-asymptotic limits of fixed-to-variable compression for all but very small block lengths. Index Terms — Lossless data compression, fixed-to-variable source coding, fixed-to-fixed source coding, entropy, finite-blocklength fundamental limits, central limit theorem, Markov sources, varentropy, minimal coding variance, source dispersion. I. FUNDAMENTAL LIMITS. A. Asymptotics: Entropy Rate. For a random source X = {P_Xn}, assumed for simplicity to take values in a finite alphabet A, the minimum asymptotically achievable source coding rate (bits per source sample) is the entropy rate, H(X) = lim n→∞
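For a memoryless source, the two quantities this abstract centers on reduce to the mean and variance of the information random variable −log2 p(X). The sketch below (my own illustration; the function name is an assumption) computes both; the abstract's Gaussian approximation then estimates the best achievable rate at block length n as roughly H plus a sqrt(V/n) dispersion term.

```python
import math

def entropy_and_varentropy(probs):
    """Entropy H = E[-log2 p(X)] and varentropy V = Var[-log2 p(X)]
    for a memoryless source with the given letter probabilities.
    V is the per-letter 'source dispersion' quantity: it vanishes
    iff the information content is constant (e.g. equiprobable)."""
    info = [-math.log2(p) for p in probs]       # information content of each letter
    H = sum(p, * (1,))[0] if False else sum(p * i for p, i in zip(probs, info))
    V = sum(p * (i - H) ** 2 for p, i in zip(probs, info))
    return H, V
```

A fair coin gives H = 1 and V = 0 (every outcome is exactly one bit, so there is no dispersion), while any biased source has V > 0 and hence a genuinely non-asymptotic penalty at finite block lengths.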