Results 1-10 of 21
Complexity distortion theory
in Proc. IEEE Int. Symp. Information Theory, 1997
Abstract

Cited by 23 (2 self)
Abstract—Complexity distortion theory (CDT) is a mathematical framework providing a unifying perspective on media representation. The key component of this theory is the substitution of the decoder in Shannon’s classical communication model with a universal Turing machine. Using this model, the mathematical framework for examining the efficiency of coding schemes is the algorithmic or Kolmogorov complexity. CDT extends this framework to include distortion by defining the complexity distortion function. We show that despite their different natures, CDT and rate distortion theory (RDT) predict asymptotically the same results, under stationary and ergodic assumptions. This closes the circle of representation models, from probabilistic models of information proposed by Shannon in information and rate distortion theories, to deterministic algorithmic models, proposed by Kolmogorov in Kolmogorov complexity theory and its extension to lossy source coding, CDT. Index Terms—Kolmogorov complexity, Markov types, rate distortion function, universal coding.
Algorithmic Complexity and Stochastic Properties of Finite Binary Sequences
1999
Abstract

Cited by 17 (0 self)
This paper is a survey of concepts and results related to simple Kolmogorov complexity, prefix complexity and resource-bounded complexity. We also consider a new type of complexity, statistical complexity, closely related to mathematical statistics. Unlike the other discoverers of algorithmic complexity, A. N. Kolmogorov was chiefly motivated by the goal of developing, on its basis, a mathematical theory that more adequately substantiates applications of probability theory, mathematical statistics and information theory. Kolmogorov wanted to deduce properties of a random object from its complexity characteristics, without use of the notion of probability. In the first part of this paper we present several results in this direction. Though the subsequent development of algorithmic complexity and randomness took a different course, algorithmic complexity has found successful applications in a traditional probabilistic framework. In the second part of the paper we consider applications to the estimation of parameters and the definition of Bernoulli sequences. All considerations have a finite combinatorial character.
Generativity and Systematicity in Neural Network Combinatorial Learning
1993
Abstract

Cited by 9 (0 self)
This thesis addresses a set of problems faced by connectionist learning that originate from the observation that connectionist cognitive models lack two fundamental properties of the mind: generativity, stemming from the boundless cognitive competence one can exhibit, and systematicity, due to the existence of symmetries within that competence. Such properties have seldom been seen in neural network models, which have typically suffered from inadequate generalization, as exemplified both by the small number of generalizations relative to training-set sizes and by heavy interference between newly learned items and previously learned information. Symbolic theories, which argue that mental representations have syntactic and semantic structure built from structured combinations of symbolic constituents, can in principle account for these properties (both arise from the pairing of structured semantic content with a generative and systematic syntax). This thesis studies the question of whe...
Some Notes On Rissanen's Stochastic Complexity
1996
Abstract

Cited by 9 (2 self)
A new version of stochastic complexity for a parametric statistical model is derived, based on a class of two-part codes. We show that choosing the quantization in the first step according to the Fisher information is optimal, and we compare our approach to a recent result of Rissanen [10]. An application to robust regression model selection is presented.
A Quick Glance at Quantum Cryptography
1998
Abstract

Cited by 8 (2 self)
The recent application of the principles of quantum mechanics to cryptography has led to a remarkable new dimension in secret communication. As a result of these new developments, it is now possible to construct cryptographic communication systems which detect unauthorized eavesdropping should it occur, and which give a guarantee of no eavesdropping should it not occur.
Contents:
1 Cryptographic systems before quantum cryptography
2 Preamble to quantum cryptography
3 The BB84 quantum cryptographic protocol without noise
3.1 Stage 1. Communication over a quantum channel
3.2 Stage 2. Communication in two phases over a public channel
3.2.1 Phase 1 of Stage 2. Extraction of raw key
3.2.2 Phase 2 of Stage 2. Detection of Eve's intrusion via error detection
4 The BB84 quantum cryptographic pr...
(Partially supported by ARL Contract #DAAL0195P1884, ARO Grant #P38804PH QC, and the LOOP Fund.)
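The Stage 1 exchange and raw-key extraction listed in the contents above can be sketched as a purely classical simulation. This is an illustrative toy under the assumption of a noiseless, eavesdropper-free channel; the function name and structure are mine, not the paper's.

```python
import random

def bb84_sift(n, seed=0):
    """Simulate BB84 Stage 1 and raw-key extraction (no noise, no Eve).

    Alice sends n qubits, each a random bit encoded in a random basis
    (0 = rectilinear, 1 = diagonal); Bob measures each in a random basis.
    Positions where the two bases agree yield the shared raw key.
    """
    rng = random.Random(seed)
    alice_bits = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]
    bob_bases = [rng.randint(0, 1) for _ in range(n)]
    # With matching bases Bob recovers Alice's bit exactly; with mismatched
    # bases his outcome would be random, so those positions are discarded.
    return [bit for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)
            if ab == bb]

key = bb84_sift(16)
print(len(key), key)
```

On average half the positions survive sifting, which is why roughly 2n qubits must be sent to distill an n-bit raw key.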
New Bounds on the Expected Length of One-to-One Codes
 IEEE Trans. on Information Theory
Abstract

Cited by 7 (1 self)
In this correspondence we provide new bounds on the expected length L of a binary one-to-one code for a discrete random variable X with entropy H. We prove that L ≥ H − log(H + 1) − H log(1 + 1/H). This bound improves on previous results. Furthermore, we provide upper bounds on the expected length of the best code as a function of H and the most likely source letter probability. Index Terms: source coding, one-to-one codes, non-prefix codes. 1 Introduction. Let X be a discrete random variable which assumes values on a countable support set X. A binary encoding for X is a function that maps each element of X to a binary codeword. Without loss of generality, assume that X = {1, 2, ..., N}, where N can be infinite. The probability that X takes the value i is p_i. Throughout this paper we assume, without loss of generality, that p_i ≥ p_{i+1}. Given an encoding, the expected length of the encoding is L(X) = sum_{i=1}^{N} p_i n_i (1), where n_i is the length of the codeword used t...
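As a quick numerical check of the inequality above, the sketch below (my own illustration, not the paper's construction) computes H, the stated lower bound, and the expected length of the optimal one-to-one code, which assigns the i-th most probable letter the i-th shortest binary string, so codeword lengths run 0, 1, 1, 2, 2, 2, 2, 3, ...:

```python
import math

def entropy(p):
    """Shannon entropy in bits of a probability vector p."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def min_one_to_one_length(p):
    """Expected length of the best binary one-to-one (non-prefix) code:
    the i-th most probable letter gets a string of length floor(log2 i)."""
    p = sorted(p, reverse=True)
    return sum(q * math.floor(math.log2(i)) for i, q in enumerate(p, start=1))

def lower_bound(h):
    # L >= H - log(H + 1) - H * log(1 + 1/H), all logs base 2.
    return h - math.log2(h + 1) - h * math.log2(1 + 1 / h)

p = [0.4, 0.3, 0.2, 0.1]  # illustrative source distribution
h = entropy(p)
print(round(h, 4), round(lower_bound(h), 4), min_one_to_one_length(p))
```

For small H the bound can be negative and is therefore vacuous; it becomes informative as the entropy grows.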
New Lower Bounds on the Cost of Binary Search Trees
Theoretical Computer Science, 1993
Abstract

Cited by 4 (0 self)
 Add to MetaCart
In this paper we provide new lower bounds on the cost of binary search trees. The bounds are expressed in terms of the entropy of the probability distribution, the number of elements, and the probability that a search is successful. Most of our lower bounds are derived by means of a new technique which exploits the relation between trees and codes. Our lower bounds compare favorably with known ones. We also provide an achievable upper bound on the Kraft sum generalized to the internal nodes of a tree. This improves on a previous result. This work was partially supported by the National Council of Research (C.N.R.) under grant 91.02326.CT12 and by M.U.R.S.T. in the framework of the project "Algoritmi, Sistemi di Calcolo e Strutture Informative". (Department of Computer Science, Columbia University, New York, N.Y. 10027; Dipartimento di Informatica ed Applicazioni, Università di Salerno, 84081 Baronissi (SA), Italy.) 1 Introduction. Binary search trees are a widely used da...
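The trees-to-codes relation the abstract alludes to can be illustrated with a small sketch (a toy of my own, not the paper's technique): mapping each node of a binary search tree to its left/right path from the root yields a code, and the expected successful-search cost equals the expected codeword length plus one.

```python
import math

# A small BST over keys 1..5, written as nested tuples (key, left, right).
tree = (3, (1, None, (2, None, None)), (4, None, (5, None, None)))

def paths(node, prefix=""):
    """Map each key to its root-to-node path, an 'L'/'R' codeword."""
    if node is None:
        return {}
    key, left, right = node
    out = {key: prefix}
    out.update(paths(left, prefix + "L"))
    out.update(paths(right, prefix + "R"))
    return out

code = paths(tree)
p = {1: 0.1, 2: 0.2, 3: 0.4, 4: 0.2, 5: 0.1}  # illustrative access probabilities
# Expected successful-search cost = expected codeword length + 1 comparisons.
cost = sum(p[k] * (len(w) + 1) for k, w in code.items())
h = -sum(q * math.log2(q) for q in p.values())
print(code, round(cost, 2), round(h, 2))
```

Lower bounds of the kind the paper proves then follow by relating the lengths of these path codewords to the entropy of the access distribution.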
Minimum Expected Length of Fixed-to-Variable Lossless Compression of Memoryless Sources
Abstract

Cited by 4 (3 self)
Abstract—Conventional wisdom states that the minimum expected length for fixed-to-variable length encoding of an n-block memoryless source with entropy H grows as nH + O(1). However, this performance is obtained under the constraint that the code assigned to the whole n-block is a prefix code. Dropping this unnecessary constraint, we show that the minimum expected length grows as nH − (1/2) log n + O(1) unless the source is equiprobable.
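The (1/2) log n term can be observed numerically. The sketch below (my own illustration, assuming an i.i.d. Bernoulli(p) bit source; not the paper's derivation) computes the optimal one-to-one expected length by sorting all 2^n block probabilities and assigning the i-th most probable block the i-th shortest binary string, then prints the gap nH − L alongside (1/2) log2 n:

```python
import math
from itertools import product

def best_expected_length(p, n):
    """Expected length of the optimal one-to-one code for n i.i.d.
    Bernoulli(p) bits: the i-th most probable block gets a codeword of
    length floor(log2 i)."""
    probs = sorted(
        (p ** sum(b) * (1 - p) ** (n - sum(b)) for b in product((0, 1), repeat=n)),
        reverse=True,
    )
    return sum(q * math.floor(math.log2(i)) for i, q in enumerate(probs, start=1))

p = 0.2  # a non-equiprobable source, as the theorem requires
h = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
for n in (4, 8, 12):
    gap = n * h - best_expected_length(p, n)
    print(n, round(gap, 3), round(0.5 * math.log2(n), 3))
```

At these small block lengths the gap exceeds (1/2) log2 n by an O(1) amount that depends on the source; the theorem concerns the growth rate, not the constant.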