Results 1–10 of 218
Hidden Markov processes
 IEEE Trans. Inform. Theory
, 2002
"... Abstract—An overview of statistical and informationtheoretic aspects of hidden Markov processes (HMPs) is presented. An HMP is a discretetime finitestate homogeneous Markov chain observed through a discretetime memoryless invariant channel. In recent years, the work of Baum and Petrie on finite ..."
Abstract

Cited by 173 (3 self)
 Add to MetaCart
Abstract—An overview of statistical and information-theoretic aspects of hidden Markov processes (HMPs) is presented. An HMP is a discrete-time finite-state homogeneous Markov chain observed through a discrete-time memoryless invariant channel. In recent years, the work of Baum and Petrie on finite-state finite-alphabet HMPs was expanded to HMPs with finite as well as continuous state spaces and a general alphabet. In particular, statistical properties and ergodic theorems for relative entropy densities of HMPs were developed. Consistency and asymptotic normality of the maximum-likelihood (ML) parameter estimator were proved under some mild conditions. Similar results were established for switching autoregressive processes. These processes generalize HMPs. New algorithms were developed for estimating the state, parameter, and order of an HMP, for universal coding and classification of HMPs, and for universal decoding of hidden Markov channels. These and other related topics are reviewed in this paper.
Index Terms—Baum–Petrie algorithm, entropy ergodic theorems, finite-state channels, hidden Markov models, identifiability, Kalman filter, maximum-likelihood (ML) estimation, order estimation, recursive parameter estimation, switching autoregressive processes, Ziv inequality.
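The likelihood computations that underlie the ML estimators surveyed above can be illustrated with the standard forward recursion for an HMP. The sketch below is ours, not the paper's; the 2-state chain, channel matrix, and observation sequence are arbitrary illustrative choices.

```python
# Forward algorithm for the likelihood of an observation sequence under an
# HMP: a finite-state Markov chain (transition matrix A, initial
# distribution pi) observed through a memoryless channel (emission matrix B).
# All parameter values below are illustrative.

def hmp_likelihood(pi, A, B, obs):
    """Return P(obs) under the hidden Markov process (pi, A, B)."""
    # alpha[s] = P(o_1..o_t, state_t = s)
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
    for o in obs[1:]:
        alpha = [
            sum(alpha[r] * A[r][s] for r in range(len(pi))) * B[s][o]
            for s in range(len(pi))
        ]
    return sum(alpha)

pi = [0.6, 0.4]                   # initial state distribution
A = [[0.7, 0.3], [0.2, 0.8]]      # state transition probabilities
B = [[0.9, 0.1], [0.3, 0.7]]      # channel/emission probabilities
print(hmp_likelihood(pi, A, B, [0, 1, 0]))
```

The recursion costs O(T·S²) for T observations and S states, which is what makes ML parameter estimation for HMPs tractable in the first place.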
A probabilistic algorithm for kSAT and constraint satisfaction problems
 In Proceedings of the 40th Annual IEEE Symposium on Foundations of Computer Science, FOCS'99
, 1999
"... We present a simple probabilistic algorithm for solving kSAT, and more generally, for solving constraint satisfaction problems (CSP). The algorithm follows a simple localsearch paradigm (cf. [9]): randomly guess an initial assignment and then, guided by those clauses (constraints) that are not sati ..."
Abstract

Cited by 127 (4 self)
 Add to MetaCart
We present a simple probabilistic algorithm for solving k-SAT, and more generally, for solving constraint satisfaction problems (CSP). The algorithm follows a simple local-search paradigm (cf. [9]): randomly guess an initial assignment and then, guided by those clauses (constraints) that are not satisfied, try to find a satisfying assignment by successively choosing a random literal from such a clause and flipping the corresponding bit. If no satisfying assignment is found after O(n) steps, start over again. Our analysis shows that for any satisfiable k-CNF formula with n variables this process has to be repeated only t times, on the average, to find a satisfying assignment, where t is within a polynomial factor of (2(1 − 1/k))^n. This is the fastest (and also the simplest) algorithm for 3-SAT known to date. We also consider the more general case of a CSP with n variables, each variable taking at most d values, and constraints of order l, and analyze the complexity of the corresponding (generalized) algorithm. It turns out that any CSP can be solved with complexity at most (d(1 − 1/l) + ε)^n.
1. Algorithms for k-SAT
Several algorithms have been designed for k-SAT, and some in particular for the special case 3-SAT, which beat the naive 2^n bound that is obtained by trying all 2^n potential assignments for the n variables in the input formula. The following list summarizes the known results for k-SAT and adds our new one, indicated by [*]. A constant c in the list means that there is an algorithm of the given type (deterministic or probabilistic) with complexity within a polynomial factor of c^n.
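The random-walk procedure described in the abstract can be sketched directly. The clause encoding (DIMACS-style signed integers) and the restart/try counts are our choices for illustration; the abstract only prescribes O(n) flips per restart.

```python
import random

# Random-walk k-SAT solver as described in the abstract: guess a random
# assignment, then repeatedly pick an unsatisfied clause and flip the
# variable of a randomly chosen literal in it; restart after about 3n flips.

def satisfied(clause, assign):
    """A clause (list of signed variable indices) is satisfied if any literal is."""
    return any((lit > 0) == assign[abs(lit)] for lit in clause)

def schoening(clauses, n, tries=200, rng=random):
    for _ in range(tries):
        assign = {v: rng.random() < 0.5 for v in range(1, n + 1)}
        for _ in range(3 * n):
            unsat = [c for c in clauses if not satisfied(c, assign)]
            if not unsat:
                return assign                    # satisfying assignment found
            lit = rng.choice(rng.choice(unsat))  # random literal of a random unsat clause
            assign[abs(lit)] = not assign[abs(lit)]
    return None  # give up (the formula may still be satisfiable)

# (x1 v x2 v x3) & (~x1 v x2 v ~x3) & (~x2 v x3 v x1)
formula = [[1, 2, 3], [-1, 2, -3], [-2, 3, 1]]
model = schoening(formula, n=3)
print(model)
```

Each restart succeeds with probability roughly (2(1 − 1/k))^(−n) up to polynomial factors, which is why the expected number of restarts t matches the bound quoted above.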
Generalization Performance of Regularization Networks and Support . . .
 IEEE TRANSACTIONS ON INFORMATION THEORY
, 2001
"... We derive new bounds for the generalization error of kernel machines, such as support vector machines and related regularization networks by obtaining new bounds on their covering numbers. The proofs make use of a viewpoint that is apparently novel in the field of statistical learning theory. The hy ..."
Abstract

Cited by 70 (18 self)
 Add to MetaCart
We derive new bounds for the generalization error of kernel machines, such as support vector machines and related regularization networks, by obtaining new bounds on their covering numbers. The proofs make use of a viewpoint that is apparently novel in the field of statistical learning theory. The hypothesis class is described in terms of a linear operator mapping from a possibly infinite-dimensional unit ball in feature space into a finite-dimensional space. The covering numbers of the class are then determined via the entropy numbers of the operator. These numbers, which characterize the degree of compactness of the operator, can be bounded in terms of the eigenvalues of an integral operator induced by the kernel function used by the machine. As a consequence, we are able to theoretically explain the effect of the choice of kernel function on the generalization performance of support vector machines.
Causality, Feedback And Directed Information
, 1990
"... It is shown that the "usual definition" of a discrete memoryless channel (DMC) in fact prohibits the use of feedback. The difficulty stems from the confusion of causality and statistical dependence. An adequate definition of a DMC is given, as well as a definition of using a channel withou ..."
Abstract

Cited by 65 (0 self)
 Add to MetaCart
It is shown that the "usual definition" of a discrete memoryless channel (DMC) in fact prohibits the use of feedback. The difficulty stems from the confusion of causality and statistical dependence. An adequate definition of a DMC is given, as well as a definition of using a channel without feedback. A definition, closely based on an old idea of Marko, is given for the directed information flowing from one sequence to another. This directed information is used to give a simple proof of the well-known fact that the use of feedback cannot increase the capacity of a DMC. It is shown that, when feedback is present, directed information is a more useful quantity than the traditional mutual information.
INTRODUCTION
Information theory has enjoyed little success in dealing with systems that incorporate feedback. Perhaps it was for this reason that C.E. Shannon chose feedback as the subject of the first Shannon Lecture, which he delivered at the 1973 IEEE International Symposium on Informati...
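Directed information, I(X^N → Y^N) = Σ_i I(X^i; Y_i | Y^(i−1)), can be computed by brute force for small joint distributions. The sketch below is ours; the noiseless two-step channel used as an example is an arbitrary illustration, chosen because without feedback directed information coincides with ordinary mutual information.

```python
from itertools import product
from math import log2

# Brute-force directed information I(X^n -> Y^n) = sum_i I(X^i ; Y_i | Y^{i-1})
# from an explicit joint pmf over sequence pairs (x^n, y^n).

def H(pmf, keyfn):
    """Entropy (bits) of the marginal selected by keyfn."""
    marg = {}
    for xy, p in pmf.items():
        k = keyfn(xy)
        marg[k] = marg.get(k, 0.0) + p
    return -sum(p * log2(p) for p in marg.values() if p > 0)

def directed_information(pmf, n):
    """pmf maps (x_tuple, y_tuple) -> probability, both tuples of length n."""
    di = 0.0
    for i in range(1, n + 1):
        # I(A;B|C) = H(A,C) + H(B,C) - H(A,B,C) - H(C)
        # with A = X^i, B = Y_i, C = Y^{i-1}  (so (B,C) = Y^i).
        di += (H(pmf, lambda t: (t[0][:i], t[1][:i - 1]))
               + H(pmf, lambda t: t[1][:i])
               - H(pmf, lambda t: (t[0][:i], t[1][:i]))
               - H(pmf, lambda t: t[1][:i - 1]))
    return di

# Noiseless channel without feedback: X_i i.i.d. fair bits, Y_i = X_i.
pmf = {((x1, x2), (x1, x2)): 0.25 for x1, x2 in product((0, 1), repeat=2)}
print(directed_information(pmf, 2))  # equals I(X^2; Y^2) = 2.0 bits here
```

With feedback present the two quantities separate, which is the point the abstract makes about directed information being the more useful measure.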
Quantum mechanics as quantum information (and only a little more), Quantum Theory: Reconsideration of Foundations
, 2002
"... In this paper, I try once again to cause some goodnatured trouble. The issue remains, when will we ever stop burdening the taxpayer with conferences devoted to the quantum foundations? The suspicion is expressed that no end will be in sight until a means is found to reduce quantum theory to two or ..."
Abstract

Cited by 61 (6 self)
 Add to MetaCart
In this paper, I try once again to cause some good-natured trouble. The issue remains, when will we ever stop burdening the taxpayer with conferences devoted to the quantum foundations? The suspicion is expressed that no end will be in sight until a means is found to reduce quantum theory to two or three statements of crisp physical (rather than abstract, axiomatic) significance. In this regard, no tool appears better calibrated for a direct assault than quantum information theory. Far from a strained application of the latest fad to a time-honored problem, this method holds promise precisely because a large part—but not all—of the structure of quantum theory has always concerned information. It is just that the physics community needs reminding. This paper, though taking quant-ph/0106166 as its core, corrects one mistake and offers several observations beyond the previous version. In particular, I identify one element of quantum mechanics that I would not label a subjective term in the theory—it is the integer parameter D traditionally ascribed to a quantum system via its Hilbert-space dimension.
Channel coding rate in the finite blocklength regime
 IEEE TRANS. INF. THEORY
, 2010
"... This paper investigates the maximal channel coding rate achievable at a given blocklength and error probability. For general classes of channels new achievability and converse bounds are given, which are tighter than existing bounds for wide ranges of parameters of interest, and lead to tight appro ..."
Abstract

Cited by 43 (9 self)
 Add to MetaCart
This paper investigates the maximal channel coding rate achievable at a given blocklength and error probability. For general classes of channels new achievability and converse bounds are given, which are tighter than existing bounds for wide ranges of parameters of interest, and lead to tight approximations of the maximal achievable rate for blocklengths as short as 100. It is also shown analytically that the maximal rate achievable with error probability ε is closely approximated by C − √(V/n)·Q⁻¹(ε), where C is the capacity, V is a characteristic of the channel referred to as channel dispersion, and Q is the complementary Gaussian cumulative distribution function.
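The normal approximation C − √(V/n)·Q⁻¹(ε) is easy to evaluate numerically. The sketch below applies it to a binary symmetric channel, whose capacity and dispersion formulas are standard; the crossover probability and error target are our illustrative choices, not values from the paper.

```python
from math import erfc, log2, sqrt

# Normal approximation  C - sqrt(V/n) * Qinv(eps)  for a binary symmetric
# channel (BSC) with crossover probability p. Parameter values are examples.

def h2(p):
    """Binary entropy in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def Q(x):
    """Complementary Gaussian CDF."""
    return 0.5 * erfc(x / sqrt(2))

def Qinv(eps, lo=-10.0, hi=10.0):
    """Invert the (decreasing) Q function by bisection."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if Q(mid) > eps:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def approx_rate(p, n, eps):
    C = 1 - h2(p)                               # BSC capacity (bits/channel use)
    V = p * (1 - p) * log2((1 - p) / p) ** 2    # BSC channel dispersion
    return C - sqrt(V / n) * Qinv(eps)

for n in (100, 1000, 10000):
    print(n, approx_rate(p=0.11, n=n, eps=1e-3))
```

The printed rates approach capacity as n grows, showing the √(1/n) backoff that the paper quantifies.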
Alignment-free sequence comparison – a review
 Bioinformatics
, 2003
"... Motivation: Genetic recombination and, in particular, genetic shuffling are at odds with sequence comparison by alignment, which assumes conservation of contiguity between homologous segments. A variety of theoretical foundations are being used to derive alignmentfree methods that overcome this lim ..."
Abstract

Cited by 42 (5 self)
 Add to MetaCart
Motivation: Genetic recombination and, in particular, genetic shuffling are at odds with sequence comparison by alignment, which assumes conservation of contiguity between homologous segments. A variety of theoretical foundations are being used to derive alignment-free methods that overcome this limitation. The formulation of alternative metrics for dissimilarity between sequences and their algorithmic implementations are reviewed. Results: The overwhelming majority of work on alignment-free sequence comparison has taken place in the past two decades, with most reports published in the past 5 years. Two main categories of methods have been proposed—methods based on word (oligomer) frequency, and methods that do not require resolving the sequence with fixed word-length segments. The first category is based on the statistics of word frequency, on distances defined in a Cartesian space defined by the frequency vectors, and on the information content of frequency distributions. The second category includes the use of Kolmogorov complexity and Chaos Theory. Despite their low visibility, alignment-free metrics are in fact already widely used as pre-selection filters for alignment-based querying of large applications. Recent work is furthering their usage as a scale-independent methodology that is capable of recognizing homology when loss of contiguity is beyond the possibility of alignment. Availability: Most of the alignment-free algorithms reviewed were implemented in MATLAB code and are available
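The first category above, word (k-mer) frequency methods, can be sketched in a few lines: map each sequence to a vector of k-mer frequencies and compare vectors in Euclidean space. The choice of k, the Euclidean metric, and the example sequences are ours; the review covers many other distances.

```python
from math import sqrt

# Alignment-free comparison by word (k-mer) frequency vectors.

def kmer_freqs(seq, k):
    """Relative frequency of each length-k word in seq."""
    counts = {}
    total = len(seq) - k + 1
    for i in range(total):
        w = seq[i:i + k]
        counts[w] = counts.get(w, 0) + 1
    return {w: c / total for w, c in counts.items()}

def euclidean(f, g):
    """Euclidean distance between two sparse frequency vectors."""
    words = set(f) | set(g)
    return sqrt(sum((f.get(w, 0.0) - g.get(w, 0.0)) ** 2 for w in words))

a = "ACGTACGTACGT"
b = "TTGACCAGTAGT"
print(euclidean(kmer_freqs(a, 3), kmer_freqs(b, 3)))
```

Because no positional alignment is computed, the distance is unaffected by shuffling of homologous segments, which is exactly the property motivating these methods.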
Automatic discovery and quantification of information leaks
 IN: IEEE SYMPOSIUM ON SECURITY AND PRIVACY
, 2009
"... Informationflow analysis is a powerful technique for reasoning about the sensitive information exposed by a program during its execution. We present the first automatic method for informationflow analysis that discovers what information is leaked and computes its comprehensive quantitative interpr ..."
Abstract

Cited by 39 (3 self)
 Add to MetaCart
Information-flow analysis is a powerful technique for reasoning about the sensitive information exposed by a program during its execution. We present the first automatic method for information-flow analysis that discovers what information is leaked and computes its comprehensive quantitative interpretation. The leaked information is characterized by an equivalence relation on secret artifacts, and is represented by a logical assertion over the corresponding program variables. Our measurement procedure computes the number of discovered equivalence classes and their sizes. This provides a basis for computing a set of quantitative properties, which includes all established information-theoretic measures in quantitative information-flow. Our method exploits an inherent connection between formal models of qualitative information-flow and program verification techniques. We provide an implementation of our method that builds upon existing tools for program verification and information-theoretic analysis. Our experimental evaluation indicates the practical applicability of the presented method.
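The measurement step described above, counting equivalence classes of secrets that produce the same observable output, can be illustrated for a toy deterministic program with a uniformly distributed secret. The modeled "program" and the 8-bit secret space are our hypothetical example; the paper's method discovers the equivalence relation by program verification rather than by enumeration.

```python
from math import log2

# Quantify leakage of a deterministic program by partitioning the secret
# space into equivalence classes with identical observable output.

def leakage(program, secrets):
    """Return class sizes plus Shannon and min-entropy leakage
    (both in bits) for a uniformly distributed secret."""
    classes = {}
    for s in secrets:
        classes.setdefault(program(s), []).append(s)
    sizes = [len(c) for c in classes.values()]
    n = len(secrets)
    shannon = -sum((m / n) * log2(m / n) for m in sizes)  # H(output)
    min_entropy = log2(len(sizes))                        # log2(#classes)
    return sizes, shannon, min_entropy

# Example: a check that reveals only the low nibble of an 8-bit secret.
sizes, shannon, min_ent = leakage(lambda s: s & 0x0F, range(256))
print(len(sizes), shannon, min_ent)
```

For this example the 256 secrets split into 16 classes of 16, so both measures report 4 bits leaked, consistent with the nibble being fully disclosed.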
Some equivalences between Shannon entropy and Kolmogorov complexity
 IEEE Transactions on Information Theory
, 1978
"... that the average codeword length L,:, for the best onetoone (not necessBluy uniquely decodable) code for X is shorter than the average codeword length L,, for the best mdquely decodable code by no more thau (log2 log, n) + 3. Let Y be a random variable taking OII a fiite or countable number of val ..."
Abstract

Cited by 30 (0 self)
 Add to MetaCart
that the average codeword length L_{1:1} for the best one-to-one (not necessarily uniquely decodable) code for X is shorter than the average codeword length L_{UD} for the best uniquely decodable code by no more than (log2 log2 n) + 3. Let Y be a random variable taking on a finite or countable number of values and having entropy H. Then it is proved that L_{1:1} > H − log2(H+1) − log2 log2(H+1) − ... − 6. Some relations are established among the Kolmogorov, Chaitin, and extension complexities. Finally it is shown that, for all computable probability distributions, the universal prefix codes associated with the conditional Chaitin complexity have expected codeword length within a constant of the Shannon entropy.
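The optimal one-to-one code referred to above has a simple concrete form: sort outcomes by decreasing probability and assign the binary strings '', '0', '1', '00', ... in order, so the codeword lengths are 0, 1, 1, 2, 2, 2, 2, 3, .... The sketch below computes its expected length; the comparison distribution is our example.

```python
from math import log2

# Expected length of the best one-to-one (not uniquely decodable) code:
# most probable outcomes get the shortest binary strings, including ''.

def one_to_one_length(probs):
    """Expected codeword length of the optimal one-to-one code."""
    lengths = []
    i = 0
    while len(lengths) < len(probs):
        lengths.extend([i] * (2 ** i))  # 2^i strings of length i (i=0: '')
        i += 1
    probs = sorted(probs, reverse=True)
    return sum(p * l for p, l in zip(probs, lengths))

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

probs = [0.25] * 4
print(one_to_one_length(probs), entropy(probs))  # 1.0 2.0
```

Here L_{1:1} = 1 bit while H = 2 bits: dropping unique decodability lets the code beat the entropy bound, and the abstract's result bounds how large that gap can get.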