Results 1–10 of 16
On the Vocabulary of Grammar-Based Codes and the Logical Consistency of Texts
, 2008
Abstract
Cited by 4 (3 self)
The article presents a new interpretation of Zipf’s law in natural language which relies on two areas of information theory. We reformulate the problem of grammar-based compression and investigate properties of strongly non-ergodic stationary processes. The motivation for the joint discussion is to prove a proposition with a simple informal statement: if an n-letter-long text describes n^β independent facts in a random but consistent way, then the text contains at least n^β / log n different words. In the formal statement, two specific postulates are adopted. Firstly, the words are understood as the non-terminal symbols of the shortest grammar-based encoding of the text. Secondly, the texts are assumed to be emitted by a non-ergodic source, with the described facts being binary IID variables that are asymptotically predictable in a shift-invariant way. The proof of the formal proposition applies several new tools.
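As a rough numeric illustration of the bound stated in this abstract (a sketch only; the function name and the choice of n and β are ours, and the logarithm base follows the usual convention under which the asymptotics do not change):

```python
import math

def min_vocabulary(n: int, beta: float) -> float:
    """Lower bound n^beta / log n on the number of distinct words
    in an n-letter text that describes n^beta independent facts,
    as in the proposition quoted above."""
    return n ** beta / math.log(n)

# A million-letter text describing ~1000 independent facts (beta = 0.5)
# should contain at least ~72 distinct words by this bound.
print(round(min_vocabulary(10**6, 0.5)))
```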
Information theory at the service of science.
In Bolyai Society Mathematical Studies
, 2007
Abstract
Cited by 4 (3 self)
Information theory is becoming more and more important for many fields. This is true not only for engineering and technology-based areas but also for more theoretically oriented sciences such as probability and statistics. Aspects of this development are first discussed at the non-technical level, with emphasis on the role of information-theoretical games. The overall rationale is explained and central types of examples are presented where the game-theoretical approach is useful. The final section contains full proofs related to a subject of central importance for statistics: estimation, or updating by a posterior distribution, which aims at minimizing divergence measured relative to a given prior.
Entropy and Equilibrium via Games of Complexity
Abstract
Cited by 4 (2 self)
It is suggested that thermodynamical equilibrium equals game-theoretical equilibrium. Aspects of this thesis are discussed. The philosophy is consistent with the maximum-entropy thinking of Jaynes, but goes one step deeper by deriving the maximum entropy principle from an underlying game-theoretical principle. The games introduced are based on measures of complexity. Entropy is viewed as minimal complexity. It is demonstrated that Tsallis entropy (q-entropy) and Kaniadakis entropy (κ-entropy) can be obtained in this way, based on suitable complexity measures. A certain unifying effect is obtained by embedding these measures in a two-parameter family of entropy functions.
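A minimal sketch of the Tsallis q-entropy mentioned above (the function name is ours; the κ-entropy is analogous and omitted here):

```python
import math

def tsallis_entropy(p, q):
    """Tsallis q-entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1);
    it recovers the Shannon entropy (in nats) in the limit q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

uniform = [0.25] * 4
print(tsallis_entropy(uniform, 1.0))  # Shannon entropy log 4 ≈ 1.386
print(tsallis_entropy(uniform, 2.0))  # (1 - 4 * 0.25^2) / 1 = 0.75
```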
Properties of classical and quantum Jensen–Shannon divergence
Phys. Rev. A, 2009
Abstract
Cited by 3 (0 self)
The Jensen–Shannon divergence (JSD) is a symmetrized and smoothed version of the all-important divergence measure of information theory, the Kullback–Leibler divergence. It defines a true metric; precisely, it is the square of a metric. We prove a stronger result for a family of divergence measures based on the Tsallis entropy that includes the JSD. Furthermore, we elaborate on details of the geometric properties of the JSD. Analogously, the quantum Jensen–Shannon divergence (QJSD) is a symmetrized version of the quantum relative entropy that has recently been considered as a distance measure for quantum states. We prove, for a new family of distance measures for states including the QJSD, that each member is the square of a metric for all qubits, strengthening recent results by Lamberti et al. We also discuss geometric properties of the QJSD. In analogy to Lin’s generalization of the JSD, we also define the general QJSD for a weighting of any number of states and discuss interpretations of both quantities.
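The classical construction described above can be sketched as follows (base-2 entropies, so the JSD is bounded by 1; the function names are ours):

```python
import math

def shannon(p):
    """Shannon entropy (base 2) of a probability vector."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def jsd(p, q):
    """Jensen-Shannon divergence: H(M) - (H(P) + H(Q)) / 2, where M
    is the midpoint mixture (P + Q) / 2. Its square root is a metric,
    i.e. the JSD itself is the square of a metric."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    return shannon(m) - (shannon(p) + shannon(q)) / 2

p, q = [1.0, 0.0], [0.0, 1.0]
print(jsd(p, q))             # 1.0 for distributions with disjoint support
print(math.sqrt(jsd(p, q)))  # the corresponding metric distance
```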
Emergence of scale-free syntax networks
, 709
Abstract
Cited by 1 (0 self)
The evolution of human language allowed the efficient propagation of non-genetic information, thus creating a new form of evolutionary change. Language development in children offers the opportunity of exploring the emergence of such a complex communication system and provides a window to understanding the transition from proto-language to language. Here we present the first analysis of the emergence of syntax in terms of complex networks. A previously unreported, sharp transition is shown to occur around two years of age, from a (pre-syntactic) tree-like structure to a scale-free, small-world syntax network. The nature of this transition supports the presence of an innate component pervading the emergence of full syntax. This observation is difficult to interpret in terms of any simple model of network growth, thus suggesting that some internal, perhaps innate component was at work. We explore this problem by using a minimal model that is able to capture several statistical traits. Our results provide evidence for adaptive traits, but they also indicate that some key features of syntax might actually correspond to non-adaptive phenomena.
Two General Games of Information
 In Proceedings of the International Symposium on Information Theory and its Applications
, 2004
Abstract
Cited by 1 (1 self)
The Maximum Entropy Principle (MaxEnt) as well as the Minimum Information Divergence Principle (MinDiv) and other optimization principles of information theory and its applications to physics, statistics, economics and other fields are discussed here from the standpoint of two-person zero-sum games.
Software Libraries and Their Reuse: Entropy, Kolmogorov Complexity, and Zipf’s Law
, 2005
Abstract
We analyze software reuse from the perspective of information theory and Kolmogorov complexity, assessing our ability to “compress” programs by expressing them in terms of software components reused from libraries. A common theme in the software reuse literature is that if we can only get the right environment in place — the right tools, the right generalizations, economic incentives, a “culture of reuse” — then reuse of software will soar, with consequent improvements in productivity and software quality. The analysis developed in this paper paints a different picture: the extent to which software reuse can occur is an intrinsic property of a problem domain, and better tools and culture can have only marginal impact on reuse rates if the domain is inherently resistant to reuse. We define an entropy parameter H ∈ [0, 1] of problem domains that measures program diversity, and deduce from this upper bounds on code reuse and the scale of components with which we may work. For “low entropy” domains with H near 0, programs are highly similar to one another and the domain is amenable to the Component-Based Software Engineering (CBSE) dream of programming by composing large-scale components. For problem domains with H near 1, programs require substantial quantities of new code, with only a modest proportion of an application consisting of reused, small-scale components. Preliminary empirical results from Unix platforms support some of the predictions of our model.
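The paper’s exact definition of H is not reproduced in this abstract; as a loose illustrative proxy only, one could measure program diversity as the normalized Shannon entropy of component-usage frequencies across a corpus (all names and the corpus encoding here are hypothetical, not the paper’s):

```python
import math
from collections import Counter

def normalized_entropy(programs):
    """Illustrative proxy, not the paper's definition: normalized
    Shannon entropy in [0, 1] of component-usage frequencies, where
    each program is a list of the component names it reuses."""
    counts = Counter(c for prog in programs for c in prog)
    total = sum(counts.values())
    h = -sum((n / total) * math.log2(n / total) for n in counts.values())
    max_h = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return h / max_h

# Skewed reuse of one component vs. every program using its own:
low = normalized_entropy([["io"], ["io"], ["io"], ["net"]])
high = normalized_entropy([["a"], ["b"], ["c"], ["d"]])
print(low, high)  # low < high; high is 1.0 (maximal diversity)
```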
COGNITION AND INFERENCE IN AN ABSTRACT SETTING
Abstract
We continue the development of an abstract, though quantitative, theory of cognition which is rooted in philosophical considerations. Applications include classical Shannon theory and results from geometry. Special attention is paid to inference, which is treated as the outcome of a situation of conflict between Nature and Observer, “you”.