Results 11–20 of 20
LOSSLESS COMPRESSION AND ALPHABET SIZE
, 2006
Cited by 2 (1 self)
Lossless data compression through exploiting redundancy in a sequence of symbols is a well-studied field in computer science and information theory. One way to achieve compression is to statistically model the data and estimate model parameters. In practice, most general-purpose data compression algorithms model the data as stationary sequences of 8-bit symbols. While this model fits the currently used computer architectures and the vast majority of information representation standards very well, other models may have both computational and information-theoretic merits, being more efficient to implement or fitting some data more closely. In addition, compression algorithms based on the 8-bit symbol model perform very poorly on data represented by binary sequences not aligned with byte boundaries, either because the fixed symbol length is not a multiple of 8 bits (e.g. DNA sequences) or because the symbols of the source are encoded into bit sequences of variable length. Throughout this thesis, we assume that the source alphabet consists of blocks of equal size of elementary symbols (typically bits), and address the impact of this …
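As a hedged illustration of the byte-alignment issue this abstract describes (a sketch under assumed conventions, not code from the thesis): a four-letter DNA alphabet needs only 2 bits per symbol, so a fixed symbol length that is not a multiple of 8 bits never lines up with byte boundaries.

```python
# Hypothetical sketch: packing a DNA sequence into 2-bit symbols instead of
# 8-bit ASCII bytes. The 2-bit code table below is an arbitrary choice.

CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack_dna(seq: str) -> bytes:
    """Pack a DNA string into bytes, 4 symbols per byte (zero-padded)."""
    out = bytearray()
    acc, nbits = 0, 0
    for base in seq:
        acc = (acc << 2) | CODE[base]
        nbits += 2
        if nbits == 8:
            out.append(acc)
            acc, nbits = 0, 0
    if nbits:                       # final partial byte, left-aligned
        out.append(acc << (8 - nbits))
    return bytes(out)

print(len("ACGTACGTAC"), "symbols ->", len(pack_dna("ACGTACGTAC")), "bytes")
# → 10 symbols -> 3 bytes
```

A 10-symbol sequence needs 20 bits, which is not a whole number of bytes: exactly the misalignment a byte-oriented model handles poorly.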
An algebraic approach to information theory
Cited by 1 (1 self)
Abstract — This work proposes an algebraic model for classical information theory. We first give an algebraic model of probability theory. Information-theoretic constructs are based on this model. In addition to the theoretical insights provided by our model, one obtains new computational and analytical tools. Several important theorems of classical probability and information theory are presented in the algebraic framework.
Improvement of Upper Bound to the Optimal Average Cost of the Variable Length Binary Code
, 1999
Cited by 1 (0 self)
For a tree T, denote its leaf set by ∂T := ({λ} ∪ {za : z ∈ T, a ∈ A}) \ T. (For example, T = {λ, 0} is a tree and then ∂T = {00, 01, 1}.) Given probabilities D := {p(1), p(2), ..., p(n)}, we want to design a tree T with n leaves and a permutation π of {1, 2, ..., n}, such that an average cost defined below is minimized. To describe it formally, we denote the lexicographic order by ord : ∂T → {1, 2, ..., n}. (In the above exam …
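The leaf-set definition quoted above can be checked directly. A minimal sketch for the binary alphabet A = {0, 1}, using the empty string for the root λ (my encoding choice, not the paper's):

```python
# Leaf set of a tree of binary strings: ∂T := ({λ} ∪ {za : z in T, a in A}) \ T,
# with the empty string "" standing in for the root word λ.

A = ("0", "1")

def leaf_set(T: set[str]) -> set[str]:
    """Return ∂T for a tree T given as a set of binary strings."""
    return ({""} | {z + a for z in T for a in A}) - T

print(sorted(leaf_set({"", "0"})))   # the example from the abstract
# → ['00', '01', '1']
```

This reproduces the abstract's example: T = {λ, 0} has leaf set {00, 01, 1}.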
Data processing in quantum information theory
, 1997
The strengthened data processing inequality has been proved. The general theory has been illustrated with a simple example. Quantum information theory is a new field with potential applications for the conceptual foundation of quantum mechanics. It appears to be the basis for a proper understanding of the emerging fields of quantum computation, communication and cryptography [14]. Quantum information theory is concerned with quantum bits (qubits) rather than bits. Qubits can exist in superposition or in entangled states with other qubits, a notion completely inaccessible to classical mechanics. More generally, quantum information theory contains two distinct types of problem. The first type describes transmission of classical information through a quantum channel (the channel can be noisy or noiseless). In such a scheme, bits are encoded as quantum states and only these states or their tensor products are transmitted. In the second case, arbitrary superpositions of these states or entangled states are transmitted. In the first case the problems can be solved by the methods of classical information theory, but in the second case new physical representations are needed. Mutual information is the most important ingredient of information theory. In classical theory this quantity was introduced by C. Shannon [9]. The mutual information between two ensembles of random variables X, Y (for example, these ensembles can be the input and output of a noisy channel), I(X, Y) = H(Y) − H(Y|X), (1) is the decrease of the entropy of Y due to the knowledge about X, and conversely with X and Y interchanged. Here H(Y) and H(Y|X) are the Shannon entropy and the conditional entropy [9]. Mutual information in the quantum case must take into account the specific character of quantum information as described above. The first reasonable definition of this quantity was introduced by B. Schumacher and M. P. Nielsen [2]. Suppose a quantum system with density matrix ρ = Σᵢ pᵢ |ψᵢ⟩⟨ψᵢ|, Σᵢ pᵢ = 1. (2) We only assume that ⟨ψᵢ|ψᵢ⟩ = 1 and the states may be non-orthogonal. The noisy quantum channel can be described by a general quantum evolution operator Ŝ with a Kraus representation …
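In the classical case, Eq. (1) can be verified numerically from a joint distribution. A small sketch (illustrative, not from the paper):

```python
# Numeric check of I(X;Y) = H(Y) - H(Y|X) for a discrete joint distribution
# given as a matrix joint[x][y] = p(x, y).
from math import log2

def entropy(dist):
    """Shannon entropy in bits of a probability vector."""
    return -sum(p * log2(p) for p in dist if p > 0)

def mutual_information(joint):
    """I(X;Y) in bits, from joint[x][y] = p(x, y)."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    h_y = entropy(py)
    h_y_given_x = sum(
        p_x * entropy([pxy / p_x for pxy in row])
        for p_x, row in zip(px, joint) if p_x > 0
    )
    return h_y - h_y_given_x

# Noiseless binary channel: X fully determines Y, so I(X;Y) = H(X) = 1 bit.
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))   # → 1.0
```

By the symmetry of mutual information, computing H(X) − H(X|Y) instead gives the same value, which is the "conversely" noted in the abstract.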
An Approach to Capacity Analysis of Coarsely Managed Wideband Multiple Access Systems
We consider the problem of coarsely managed multiple-access wideband systems, where each user can control its own transmission power and rate, independent of other users' actions, according to the policy specified by a central controlling agent: the base station. Even with such coarse coordination, multiuser detection enables a system superior to any orthogonal division system such as TDMA. We fully characterise the set of such available coarse management policies for wideband systems: the robust spectral efficiency slope region. Finally, we show that slopes on the boundary of the robust slope region lead to an elegant interpretation of the robust slope region in terms of awarding protected receiver dimensions to each user.
Randomness in the Primary Structure of Protein: Methods and Implications
There is no doubt that the evolutionary process is affected by chance, but the question is to what extent chance plays its role. Randomness analysis can throw light on the underlying reasoning for the primary structure of proteins. Using random principles, we have explored three approaches to analyse protein primary structure: the randomness in the construction of amino-acid sequences, in the follow-up amino acid, and in the distribution of amino acids. As a result, (i) we can evaluate the impact of chance on the composition of amino-acid sequences by comparing the measured probability/frequency with the predicted probability/frequency; (ii) we can evaluate the impact of chance on the follow-up amino acid by comparing the Markov transition probability with the predicted conditional probability; and (iii) we can evaluate the effect of chance on the distribution of amino acids by comparing the real distribution probability with the theoretical distribution probability. These approaches can be used to quantitatively analyse the primary structure within a protein as well as across proteins, so that we can gain more insight into the mechanisms of protein construction, mutation, and the evolutionary process. These approaches may also have some potential use for the development of new drugs.
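The second approach, comparing Markov transition probabilities against an independence prediction, can be sketched as follows (a toy illustration with a made-up sequence, not the authors' method or data):

```python
# Compare measured first-order transition frequencies in a symbol sequence
# with the frequencies predicted if each symbol ignored its predecessor.
from collections import Counter

def transition_freqs(seq):
    """Measured P(next=b | current=a), estimated from adjacent pairs."""
    pairs = Counter(zip(seq, seq[1:]))
    totals = Counter(seq[:-1])
    return {(a, b): n / totals[a] for (a, b), n in pairs.items()}

def independent_freqs(seq):
    """Predicted P(next=b) if the follow-up symbol were independent."""
    counts = Counter(seq)
    return {b: n / len(seq) for b, n in counts.items()}

seq = "MAMAMAMAGG"   # toy sequence, not real protein data
measured = transition_freqs(seq)
predicted = independent_freqs(seq)
print(measured[("M", "A")], "vs", predicted["A"])   # → 1.0 vs 0.4
```

A large gap between the measured transition frequency and the independence prediction, as here, is evidence that the follow-up symbol is not governed by chance alone.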
On the Universality of Burnashev’s Error Exponent
, 2004
"... [6] V. Tarokh, N. Seshadri, and A. R. Calderbank, “Space-time codes for ..."
Codeword Distinguishability in Minimum Diversity Decoding
Abstract — We revisit a coding-theoretic notion which goes back to Cl. Shannon: codeword distinguishability. This notion is standard in zero-error information theory, but its bearing is definitely wider, and it may help to better understand new forms of coding, as we argue below. In our approach, the underlying decoding principle is very simple and very general: one decodes by trying to minimise the diversity (in the simplest case the Hamming distance) between a codeword and the output sequence observed at the end of the noisy transmission channel. Symmetrically and equivalently, minimum-diversity decoders and codeword distinguishabilities may be replaced by maximum-similarity decoders and codeword confusabilities. The operational meaning of codeword distinguishability is made clear by a reliability criterion, which generalises the well-known criterion on minimum Hamming distances for error-correction codes. We investigate the formal properties of distinguishabilities versus diversities; these two notions are deeply related, and yet essentially different. An encoding theorem is put forward, which supports and suggests old and new code constructions. In a list of case studies, we examine channels with crossovers and erasures, or with crossovers, deletions and insertions, a channel of cryptographic interest, and the case of a few “odd distances” taken from DNA word design. Index Terms — codeword distinguishability, codeword confusability, minimum diversity decoding, maximum similarity decoding, zero-error information theory, erasure channels, edit distance, simple substitution ciphers, DNA string distances.
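In its simplest form, the decoding principle described above reduces to nearest-codeword decoding under Hamming distance. A minimal sketch with a toy codebook (illustrative only, not from the paper):

```python
# Minimum-diversity decoding in the simplest case: decode to the codeword
# minimizing the Hamming distance to the received sequence.

def hamming(u, v):
    """Hamming distance between two equal-length strings."""
    return sum(a != b for a, b in zip(u, v))

def min_diversity_decode(received, codebook):
    """Return the codeword closest to `received` in Hamming distance."""
    return min(codebook, key=lambda c: hamming(c, received))

codebook = ["00000", "11111"]          # 5-fold repetition code, min distance 5
print(min_diversity_decode("01010", codebook))   # → 00000
```

With minimum distance 5, this decoder corrects up to 2 crossovers, which is the sort of guarantee the reliability criterion in the abstract generalises.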
unknown title
Syndrome coding for the wiretap channel revisited. Abstract — To communicate an r-bit secret s through a wiretap channel, the syndrome coding strategy consists of choosing a linear transformation h and transmitting an n-bit vector x such that h(x) = s. The receiver obtains a corrupted version of x and the eavesdropper an even more corrupted version of x: the (syndrome) function h should be chosen in such a way as to minimize both the length n of the transmitted vector and the information leakage to the eavesdropper. We give a refined analysis of the information leakage that involves m-th moment methods.
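The encoding step h(x) = s can be sketched over GF(2) with a systematic choice of h (my assumption for illustration; the paper's analysis does not depend on this particular form): write h as a parity-check matrix [A | I], pick random filler bits u, and append parity bits so the syndrome equals the secret.

```python
# Toy syndrome coding over GF(2) with h = [A | I_r]: x = (u, A·u XOR s),
# so that h(x) = A·u XOR (A·u XOR s) = s. The randomness of u is what
# hides the secret from the eavesdropper's noisier view of x.
import random

def encode(s, A):
    """Return an n-bit x (as a bit list) with syndrome(x, A) == s."""
    r, k = len(A), len(A[0])
    u = [random.randint(0, 1) for _ in range(k)]          # random filler bits
    parity = [(sum(A[i][j] * u[j] for j in range(k)) + s[i]) % 2
              for i in range(r)]
    return u + parity

def syndrome(x, A):
    """h(x): how the legitimate receiver recovers s from an uncorrupted x."""
    r, k = len(A), len(A[0])
    u, parity = x[:k], x[k:]
    return [(sum(A[i][j] * u[j] for j in range(k)) + parity[i]) % 2
            for i in range(r)]

A = [[1, 0, 1], [0, 1, 1]]            # toy 2x3 matrix: r = 2 secret bits, n = 5
s = [1, 0]
print(syndrome(encode(s, A), A))      # → [1, 0]
```

Every choice of the filler bits u yields a different transmitted x with the same syndrome s, which is the degree of freedom the strategy exploits.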
Logistic Regression, AdaBoost and Bregman Distances (Learning Theory, 2000)
Abstract. We give a unified account of boosting and logistic regression in which each learning problem is cast in terms of the optimization of Bregman distances. The striking similarity of the two problems in this framework allows us to design and analyze algorithms for both simultaneously, and to easily adapt algorithms designed for one problem to the other. For both problems, we give new algorithms and explain their potential advantages over existing methods. These algorithms can be divided into two types based on whether the parameters are updated sequentially (one at a time) or in parallel (all at once). We also describe a parameterized family of algorithms which interpolates smoothly between these two extremes. For all of the algorithms, we give convergence proofs using a general formalization of the auxiliary-function proof technique. As one of our sequential-update algorithms is equivalent to AdaBoost, this provides the first general proof of convergence for AdaBoost. We show that all of our algorithms generalize easily to the multiclass case, and we contrast the new algorithms with iterative scaling.  We conclude with preliminary experimental results.
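The sequential-update scheme mentioned above is, in the AdaBoost-equivalent case, the familiar exponential reweighting of training examples. A toy single round (standard AdaBoost, with made-up labels and predictions, not the paper's generalized algorithm):

```python
# One sequential AdaBoost update: weight the weak hypothesis by
# alpha = 0.5*ln((1-eps)/eps) and reweight examples by exp(-alpha*y*h(x)).
from math import exp, log

def adaboost_round(weights, ys, preds):
    """Return (alpha, new example weights) after one weak hypothesis."""
    eps = sum(w for w, y, p in zip(weights, ys, preds) if y != p)
    alpha = 0.5 * log((1 - eps) / eps)          # assumes 0 < eps < 1
    new = [w * exp(-alpha * y * p) for w, y, p in zip(weights, ys, preds)]
    z = sum(new)                                 # renormalize to a distribution
    return alpha, [w / z for w in new]

ys    = [+1, +1, -1, -1]
preds = [+1, -1, -1, -1]              # weak hypothesis errs on the 2nd example
alpha, w = adaboost_round([0.25] * 4, ys, preds)
print(round(sum(w), 6))               # weights remain a distribution → 1.0
```

After the update, the misclassified example carries exactly half the total weight, the standard property that forces the next weak hypothesis to attend to it.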