Results 1–10 of 66
The Context Tree Weighting Method: Basic Properties
 IEEE Transactions on Information Theory
, 1995
Abstract

Cited by 105 (1 self)
We describe a sequential universal data compression procedure for binary tree sources that performs the "double mixture". Using a context tree, this method weights in an efficient recursive way the coding distributions corresponding to all bounded memory tree sources, and achieves a desirable coding distribution for tree sources with an unknown model and unknown parameters. Computational and storage complexity of the proposed procedure are both linear in the source sequence length. We derive a natural upper bound on the cumulative redundancy of our method for individual sequences. The three terms in this bound can be identified as coding, parameter and model redundancy. The bound holds for all source sequence lengths, not only for asymptotically large lengths. The analysis that leads to this bound is based on standard techniques and turns out to be extremely simple. Our upper bound on the redundancy shows that the proposed context tree weighting procedure is optimal in the sense that i...
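To make the "double mixture" concrete, here is a minimal illustrative Python sketch of context-tree weighting for a binary sequence: each node keeps a log-domain Krichevsky–Trofimov (KT) estimate, and internal nodes mix it equally with the product of their children's weighted probabilities. The class layout and the all-zeros initial context are assumptions made for this sketch, not the paper's exact formulation.

```python
import math

class Node:
    def __init__(self):
        self.counts = [0, 0]          # zeros and ones seen in this context
        self.log_pe = 0.0             # log KT estimate, starts at log 1
        self.log_pw = 0.0             # log weighted probability
        self.children = [None, None]  # indexed by the next older context bit

def logaddexp(a, b):
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def ctw_update(root, context, bit, depth):
    """Update the tree with one symbol; context lists past bits, newest first."""
    path = [root]
    node = root
    for d in range(depth):
        c = context[d]
        if node.children[c] is None:
            node.children[c] = Node()
        node = node.children[c]
        path.append(node)
    # update counts and KT estimates along the context path
    for node in path:
        a, b = node.counts
        node.log_pe += math.log((node.counts[bit] + 0.5) / (a + b + 1))
        node.counts[bit] += 1
    # recompute weighted probabilities bottom-up: leaf uses Pe directly,
    # internal nodes mix Pe with the product of the children's Pw
    path[-1].log_pw = path[-1].log_pe
    for node in reversed(path[:-1]):
        kids = sum(c.log_pw if c else 0.0 for c in node.children)
        node.log_pw = math.log(0.5) + logaddexp(node.log_pe, kids)

def ctw_log2_prob(bits, depth=3):
    """log2 of the CTW coding probability of `bits`; the ideal codelength
    in bits is the negative of the returned value."""
    root = Node()
    past = [0] * depth                # assumed all-zeros initial context
    for x in bits:
        ctw_update(root, past, x, depth)
        past = [x] + past[:-1]
    return root.log_pw / math.log(2)
```

Both time and memory grow linearly in the sequence length, matching the complexity claim in the abstract: each symbol touches only the `depth + 1` nodes on its context path.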
Bayesian Methods: General Background
, 1986
Abstract

Cited by 44 (1 self)
We note the main points of history, as a framework on which to hang many background remarks concerning the nature and motivation of Bayesian/Maximum Entropy methods. Experience has shown that these are needed in order to understand recent work and problems. A more complete account of the history, with many more details and references, is given in Jaynes (1978). The following discussion is essentially nontechnical; the aim is only to convey a little introductory "feel" for our outlook, purpose, and terminology, and to alert newcomers to common pitfalls of misunderstanding. Contents: Herodotus; Bernoulli; Bayes; Laplace; Jeffreys; Cox; Shannon; Communication Difficulties; Is Our Logic Open or Closed?; Downward Analysis in Statistical Mechanics; Current Problems; References. Presented at the Fourth Annual Workshop on Bayesian/Maximum Entropy Methods, University of Calgary, August 1984. In the Proceedings Volume, Maximum Entropy and Bayesian Methods in Applied Statistics, J. H....
A Formal Definition of Intelligence Based on an Intensional Variant of Algorithmic Complexity
 In Proceedings of the International Symposium of Engineering of Intelligent Systems (EIS'98)
, 1998
Abstract

Cited by 38 (19 self)
Machine. Given the current technology of the computers available to us, we have chosen an extremely abridged emulation of the machine that will effectively run the programs, instead of more proper languages such as λ-calculus (or LISP). We have adapted the "toy RISC" machine of [Hernández & Hernández 1993], with two remarkable features inherited from its object-oriented coding in C++: it is easily tunable for our needs, and it is efficient. We have made it even more reduced, removing every operand in the instruction set, even for the loop operations. We have only three registers: AX (the accumulator), BX, and CX. The operations we have used for our experiment are listed in Table 1; for example, LOOPTOP decrements CX and, if it is not zero, jumps to the program top.
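An operand-free register machine in this spirit can be sketched in a few lines of Python. The instruction names (INC_AX, INC_CX, SWAP_AB, LOOPTOP, HALT), the initial-register parameters, and the exact LOOPTOP semantics are illustrative assumptions for the sketch, not the paper's actual Table 1.

```python
def run(program, ax=0, bx=0, cx=0, max_steps=10_000):
    """Toy operand-free register machine with registers AX, BX, CX.

    LOOPTOP decrements CX and jumps back to the program top while CX is
    still positive; max_steps guards against non-halting programs.
    """
    pc, steps = 0, 0
    while pc < len(program) and steps < max_steps:
        op = program[pc]
        if op == "INC_AX":
            ax += 1
        elif op == "INC_CX":
            cx += 1
        elif op == "SWAP_AB":
            ax, bx = bx, ax
        elif op == "LOOPTOP":
            cx -= 1
            if cx > 0:
                pc = -1  # becomes 0 after the increment below: program top
        elif op == "HALT":
            break
        pc += 1
        steps += 1
    return ax, bx, cx
```

For example, run(["INC_AX", "LOOPTOP"], cx=3) executes the loop body three times and returns (3, 0, 0).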
On the relationship between complexity and entropy for Markov chains and regular languages
 Complex Systems
, 1991
Abstract

Cited by 30 (2 self)
Using the past-future mutual information as a measure of complexity, the relation between the complexity and the Shannon entropy is determined analytically for sequences generated by Markov chains and regular languages. It is emphasized that, given an entropy value, there are many possible complexity values, and vice versa; that is, the relationship between complexity and entropy is not one-to-one, but rather many-to-one or one-to-many. It is also emphasized that there are structures in the complexity-versus-entropy plots, and these structures depend on the details of a Markov chain or a regular language grammar.
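The entropy side of this relation is straightforward to compute. Below is a small Python sketch that finds the stationary distribution of a Markov chain by power iteration and evaluates its Shannon entropy rate; the complexity measure itself (the past-future mutual information) is not implemented here.

```python
import math

def stationary(P, iters=10_000):
    """Stationary distribution of a transition matrix P (rows sum to 1),
    found by power iteration; assumes the chain is irreducible and aperiodic."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def entropy_rate(P):
    """Shannon entropy rate h = -sum_i pi_i sum_j P_ij log2 P_ij (bits/symbol)."""
    pi = stationary(P)
    n = len(P)
    return -sum(pi[i] * P[i][j] * math.log2(P[i][j])
                for i in range(n) for j in range(n) if P[i][j] > 0)
```

For the chain P = [[0.5, 0.5], [1, 0]] the stationary distribution is (2/3, 1/3) and the entropy rate is 2/3 bit per symbol; different chains with this same entropy rate can have quite different complexity values, which is the many-to-one point made above.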
On the Knowledge Complexity of ...
 In 37th FOCS
, 1996
Abstract

Cited by 27 (7 self)
We show that if a language has an interactive proof of logarithmic statistical knowledge complexity, then it belongs to the class AM ∩ coAM. Thus, if the polynomial time hierarchy does not collapse, then NP-complete languages do not have logarithmic knowledge complexity. Prior to this work, there was no indication that would contradict NP languages being provable with even one bit of knowledge. Our result is a common generalization of two previous results: the first asserts that statistical zero knowledge is contained in AM ∩ coAM [F89, AH91], while the second asserts that the languages recognizable in logarithmic statistical knowledge complexity are in BPP^NP [GOP94]. Next, we consider the relation between the error probability and the knowledge complexity of an interactive proof. Note that reducing the error probability via repetition is not free: it may increase the knowledge complexity. We show that if the negligible error probability ε(n) is less than 2^{-3k(n)} (where k(n) is the knowledge complexity), then the language proven is in the third level of the polynomial time hierarchy (specifically, it is in AM^NP). In the standard setting of negligible error probability, there exist PSPACE-complete languages which have sublinear knowledge complexity. However, if we insist, for example, that the error probability is less than 2^{-n^2}, then PSPACE-complete languages do not have subquadratic knowledge complexity, unless PSPACE collapses to the third level of the polynomial time hierarchy. In order to prove our main result, we develop an AM protocol for checking that a samplable distribution D has a given entropy h. For any fractions ε, δ, the verifier runs in time polynomial in 1/ε and log(1/δ) and fails with probability at most δ to detect an additive error of ε in the entropy. We believe that this ...
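The final ingredient above, checking the entropy of a samplable distribution, is an interactive AM protocol in the paper. As a plain, non-interactive point of comparison, here is the classical plug-in estimator of entropy from samples; it offers none of the protocol's soundness guarantees against an adversarial sampler and is included only to illustrate estimating entropy from draws.

```python
import math
from collections import Counter

def plugin_entropy(samples):
    """Naive plug-in estimate of entropy (in bits): count empirical
    frequencies and apply the Shannon formula. Consistent for a fixed
    honest distribution as the sample size grows, but with no adversarial
    guarantee -- unlike the AM protocol described in the abstract."""
    n = len(samples)
    counts = Counter(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A perfectly balanced binary sample gives exactly 1 bit, and a constant sample gives 0 bits.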
Universal Bound on the Performance of Lattice Codes
 IEEE Transactions on Information Theory
, 1999
Abstract

Cited by 26 (0 self)
We present a lower bound on the probability of symbol error for maximum-likelihood decoding of lattices and lattice codes on a Gaussian channel. The bound is tight for error probabilities and signal-to-noise ratios of practical interest, as opposed to most existing bounds that become tight asymptotically for high signal-to-noise ratios. The bound is also universal; it provides a limit on the highest possible coding gain that may be achieved, at specific symbol error probabilities, using any lattice or lattice code in n dimensions. In particular, it is shown that the effective coding gains of the densest known lattices are much lower than their nominal coding gains. The asymptotic (as n → ∞) behavior of the new bound is shown to coincide with the Shannon limit for Gaussian channels.
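For contrast with the lower bound described above, the classical union-bound upper estimate of the error probability of a lattice with minimum distance d_min and kissing number K on a Gaussian channel with noise standard deviation σ can be sketched as follows; the parameter names are illustrative, and this is the kind of estimate the abstract says is tight only asymptotically.

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound_estimate(d_min, kissing, sigma):
    """Classical union-bound estimate for maximum-likelihood lattice
    decoding on a Gaussian channel: Pe <= K * Q(d_min / (2*sigma)).
    An *upper* estimate, in contrast to the paper's lower bound."""
    return kissing * q_func(d_min / (2 * sigma))
```

The estimate grows with the noise level, as expected; the paper's contribution is a complementary lower bound that stays tight at the moderate error probabilities used in practice.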
Supporting Security Requirements In Multilevel RealTime Databases
 Proc. of IEEE Symp. on Security and Privacy
, 1995
Abstract

Cited by 25 (3 self)
Database systems for real-time applications must satisfy timing constraints associated with transactions, in addition to maintaining data consistency. Beyond real-time requirements, many applications also demand security. Multilevel security requirements introduce a new dimension to transaction processing in real-time database systems. In this paper, we argue that, because the goals of the two requirements conflict, trade-offs need to be made between security and timeliness. We first define capacity, a measure of the degree to which security is being satisfied by a system. A secure two-phase locking protocol is then described, and a scheme is proposed to allow partial violations of security for improved timeliness. The capacity of the resultant covert channel is derived, and a feedback control scheme is proposed that does not allow the capacity to exceed a specified upper bound.
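The security/timeliness conflict can be illustrated with a toy lock manager: under non-interference, a higher-level reader must never delay a lower-level writer, since the delay itself is a covert channel. The sketch below simply evicts higher-level readers when a lower-level writer arrives; the paper's secure two-phase locking protocol and its partial-violation scheme are more elaborate, so treat this only as an illustration of the trade-off.

```python
class SecureLockManager:
    """Toy multilevel lock table. Security levels are integers; higher
    means more sensitive. Evicting high readers preserves security at
    the cost of the evicted (high) transactions' timeliness."""
    def __init__(self):
        self.read_locks = {}   # item -> set of (txn, level)
        self.write_locks = {}  # item -> (txn, level)

    def read_lock(self, txn, level, item):
        if item in self.write_locks:
            return False                       # blocked by a writer
        self.read_locks.setdefault(item, set()).add((txn, level))
        return True

    def write_lock(self, txn, level, item):
        if item in self.write_locks:
            return False                       # another writer holds it
        readers = self.read_locks.get(item, set())
        # security first: evict strictly higher-level readers rather than
        # make this lower-level writer wait (waiting would leak one bit)
        readers = {r for r in readers if r[1] <= level}
        self.read_locks[item] = readers
        if readers:
            return False                       # same/lower readers still block
        self.write_locks[item] = (txn, level)
        return True
```

In this sketch a high-level reader holding an item is evicted the moment a low-level writer asks for it; allowing occasional waits instead (a partial violation of security) is exactly what the paper's feedback scheme rations, keeping the covert channel's capacity below a bound.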