Results 1–10 of 30
Estimating Entropy Rates with Bayesian Confidence Intervals
Neural Computation 17, 1531–1576 (2005)
Abstract

Cited by 9 (1 self)
The entropy rate quantifies the amount of uncertainty or disorder produced by any dynamical system. In a spiking neuron, this uncertainty translates into the amount of information potentially encoded, and is thus the subject of intense theoretical and experimental investigation. Estimating this quantity in observed, experimental data is difficult and requires a judicious selection of probabilistic models, balancing between two opposing biases. We use a model weighting principle originally developed for lossless data compression, following the minimum description length principle. This weighting yields a direct estimator of the entropy rate, which, compared to existing methods, exhibits significantly less bias and converges faster in simulation. With Monte Carlo techniques, we estimate a Bayesian confidence interval for the entropy rate. In related work, we apply these ideas to estimate the information rates between sensory stimuli and neural responses in experimental data (Shlens, Kennel, Abarbanel, & Chichilnisky, 2004).
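As a point of reference for the abstract above, the naive plug-in baseline that such MDL-weighted estimators improve upon can be sketched in a few lines. The biased-coin sequence and function names here are illustrative assumptions, not the paper's CTW-style model weighting:

```python
import math
import random
from collections import Counter

def block_entropy(seq, L):
    """Shannon entropy (bits) of the empirical distribution of length-L blocks."""
    counts = Counter(tuple(seq[i:i + L]) for i in range(len(seq) - L + 1))
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def entropy_rate_estimate(seq, L):
    """Naive plug-in estimate H_L / L; approaches the entropy rate as L grows."""
    return block_entropy(seq, L) / L

random.seed(0)
# Biased coin: true entropy rate is H(0.9) ~ 0.469 bits per symbol.
seq = [1 if random.random() < 0.9 else 0 for _ in range(100_000)]
for L in (1, 4, 8):
    print(L, round(entropy_rate_estimate(seq, L), 3))
```

For longer blocks the estimate approaches the true rate from above, while undersampling of long blocks pushes it back down; this is exactly the pair of opposing biases the abstract mentions.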
Information theory of decisions and actions
2010
Abstract

Cited by 7 (4 self)
The perception-action cycle is often defined as “the circular flow of information between an organism and its environment in the course of a sensory guided sequence of actions towards a goal” (Fuster 2001, 2006). The question we address in this paper is in what sense this “flow of information” can be described by Shannon’s measures of information introduced in his mathematical theory of communication. We provide an affirmative answer to this question using an intriguing analogy between Shannon’s classical model of communication and the perception-action cycle. In particular, decision and action sequences turn out to be directly analogous to codes in communication, and their complexity — the minimal number of (binary) decisions required for reaching a goal — directly bounded by information measures, as in communication. This analogy allows us to extend the standard Reinforcement Learning framework. The latter considers the future expected reward in the course of a behaviour sequence towards a goal (value-to-go). Here, we additionally incorporate a measure of information associated with this sequence: the cumulated information processing cost or bandwidth required to specify the future decision and action sequence (information-to-go). Using a graphical model, we derive a recursive Bellman optimality equation for information measures, in analogy to Reinforcement Learning; from this, we obtain new algorithms for calculating the optimal tradeoff between the value-to-go and the required information-to-go, unifying the ideas behind the Bellman and the Blahut-Arimoto iterations. This tradeoff between value-to-go and information-to-go provides a complete analogy with the compression-distortion tradeoff in source coding. The present new formulation connects seemingly unrelated optimization problems. The algorithm is demonstrated on grid world examples.
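The Bellman-style recursion for the value/information tradeoff described above can be sketched as a soft (free-energy) value iteration on a toy chain world. The chain setting, the function name, and the uniform prior over actions are our assumptions, not the paper's grid worlds:

```python
import math

def info_value_iteration(n_states, beta, iters=200):
    """Free-energy Bellman recursion on a chain: minimize expected step cost plus
    (1/beta) times the information cost (KL to a uniform prior over actions)."""
    F = [0.0] * n_states                    # free energy per state; goal is the last state
    prior = 0.5                             # uniform prior over the two actions
    for _ in range(iters):
        newF = F[:]
        for s in range(n_states - 1):       # goal state stays absorbing at F = 0
            z = sum(prior * math.exp(-beta * (1.0 + F[min(max(s + a, 0), n_states - 1)]))
                    for a in (-1, 1))       # unit cost per step, moves clipped to the chain
            newF[s] = -math.log(z) / beta
        F = newF
    return F

F = info_value_iteration(6, beta=50.0)
# As beta grows the policy becomes deterministic and F approaches distance-to-goal.
```

As beta → ∞ this recovers ordinary value iteration; at finite beta each step pays an extra log 2 / beta of information-to-go for deviating from the uniform prior, which is the Blahut-Arimoto flavor of the unified iteration.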
Changing Structures in Midstream: Learning Along the Statistical Garden Path
2009
Abstract

Cited by 6 (1 self)
Previous studies of auditory statistical learning have typically presented learners with sequential structural information that is uniformly distributed across the entire exposure corpus. Here we present learners with non-uniform distributions of structural information by altering the organization of trisyllabic nonsense words at midstream. When this structural change was unmarked by low-level acoustic cues, or even when cued by a pitch change, only the first of the two structures was learned. However, both structures were learned when there was an explicit cue to the midstream change or when exposure to the second structure was tripled in duration. These results demonstrate that successful extraction of the structure in an auditory statistical learning task reduces the ability to learn subsequent structures, unless the presence of two structures is marked explicitly or the exposure to the second is quite lengthy. The mechanisms by which learners detect and use changes in distributional information to maintain sensitivity to multiple structures are discussed from both behavioral and computational perspectives.
On the Vocabulary of Grammar-Based Codes and the Logical Consistency of Texts
2008
Abstract

Cited by 4 (3 self)
The article presents a new interpretation for Zipf’s law in natural language which relies on two areas of information theory. We reformulate the problem of grammar-based compression and investigate properties of strongly non-ergodic stationary processes. The motivation for the joint discussion is to prove a proposition with a simple informal statement: if an n-letter long text describes n^β independent facts in a random but consistent way, then the text contains at least n^β / log n different words. In the formal statement, two specific postulates are adopted. Firstly, the words are understood as the nonterminal symbols of the shortest grammar-based encoding of the text. Secondly, the texts are assumed to be emitted by a non-ergodic source, with the described facts being binary IID variables that are asymptotically predictable in a shift-invariant way. The proof of the formal proposition applies several new tools.
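The first postulate (words as the nonterminal symbols of a grammar-based encoding) can be illustrated with a greedy Re-Pair-style compressor. This is a heuristic stand-in with names of our choosing; the paper itself concerns the shortest grammar, which Re-Pair only approximates:

```python
from collections import Counter

def repair_nonterminals(text):
    """Greedy Re-Pair-style grammar compression: repeatedly replace the most
    frequent adjacent pair with a fresh nonterminal; returns the rule table,
    whose size plays the role of the 'vocabulary' in the abstract."""
    seq = list(text)
    rules = {}
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:                          # no pair repeats: grammar is final
            break
        nt = f"N{len(rules)}"                  # fresh nonterminal symbol
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):                    # left-to-right, non-overlapping replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return rules
```

On "abababab" this yields two rules, N0 → ab and N1 → N0 N0, so the "vocabulary" of the encoding has two words.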
Statistical inference using weak chaos and infinite memory
In Proceedings of the Int’l Workshop on Statistical-Mechanical Informatics (IWSMI 2010)
Abstract

Cited by 3 (2 self)
We describe a class of deterministic weakly chaotic dynamical systems with infinite memory. These “herding systems” combine learning and inference into one algorithm, where moments or data-items are converted directly into an arbitrarily long sequence of pseudo-samples. This sequence has infinite range correlations and as such is highly structured. We show that its information content, as measured by subextensive entropy, can grow as fast as K log T, which is faster than the usual (1/2) K log T for exchangeable sequences generated by random posterior sampling from a Bayesian model. In one dimension we prove that herding sequences are equivalent to Sturmian sequences, which have complexity exactly log(T + 1). More generally, we advocate the application of the rich theoretical framework around nonlinear dynamical systems, chaos theory and fractal geometry to statistical learning.
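The one-dimensional case mentioned above is easy to reproduce: a single-moment herding update generates a rotation (Sturmian) sequence. A minimal sketch, with our own names and an irrational target moment as the assumed input:

```python
def herd(pi, T):
    """One-dimensional herding: a deterministic weight update whose binary
    output sequence matches the target moment pi; for irrational pi the
    output is a Sturmian (rotation) sequence."""
    w, out = pi, []
    for _ in range(T):
        x = 1 if w > 0 else 0        # greedy pseudo-sample: argmax_x w * x
        out.append(x)
        w += pi - x                  # weight stays bounded in (pi - 1, pi]
    return out

phi = (5 ** 0.5 - 1) / 2             # irrational moment, ~0.618
seq = herd(phi, 1000)
```

Because the weight stays bounded, the running mean of the pseudo-samples tracks pi to within 1/T, and the number of distinct length-n subwords stays at the Sturmian bound n + 1, far below the 2^n of a random sequence.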
Predictive information and emergent cooperativity in a chain of mobile robots
Abstract

Cited by 3 (3 self)
Measures of complexity are of immediate interest for the field of autonomous robots, both as a means to classify the behavior and as an objective function for the autonomous development of robot behavior. In the present paper we consider predictive information in sensor space as a measure for the behavioral complexity of a chain of two-wheel robots which are passively coupled and controlled by a closed-loop reactive controller for each of the individual robots. The predictive information, the mutual information between the past and the future of a time series, is approximated by restricting the time horizons to a single time step. This is exact for Markovian systems but seems to work well also for our robotic system, which is strongly non-Markovian. When in a maze with many obstacles, the approximated predictive information of the sensor values of an individual robot is found to have a clear maximum for a controller which realizes the spontaneous cooperation of the robots in the chain, so that large areas of the maze can be visited.
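The one-step approximation described above reduces to the mutual information between consecutive sensor readings. A minimal sketch for a discrete time series (function name and test sequences are ours, not from the paper):

```python
import math
from collections import Counter

def one_step_predictive_info(seq):
    """I(x_t ; x_{t+1}) in bits, from empirical counts of consecutive pairs:
    the single-time-step approximation of the predictive information,
    exact when the series is first-order Markov."""
    pairs = Counter(zip(seq[:-1], seq[1:]))
    n = sum(pairs.values())
    px = Counter(a for a, _ in pairs.elements())   # marginal of x_t
    py = Counter(b for _, b in pairs.elements())   # marginal of x_{t+1}
    return sum(c / n * math.log2(c * n / (px[a] * py[b]))
               for (a, b), c in pairs.items())
```

A deterministic alternation such as [0, 1, 0, 1, ...] gives roughly 1 bit (the past fully determines the next reading), while a constant sequence gives 0.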
Information Driven Self Organization of Complex Robotic Behaviors
2013
Abstract

Cited by 2 (2 self)
SFI Working Papers contain accounts of scientific work of the author(s) and do not necessarily represent the views of the Santa Fe Institute. We accept papers intended for publication in peer-reviewed journals or proceedings volumes, but not papers that have already appeared in print. Except for papers by our external faculty, papers must be based on work done at SFI, inspired by an invited visit to or collaboration at SFI, or funded by an SFI grant. NOTICE: This working paper is included by permission of the contributing author(s) as a means to ensure timely distribution of the scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the author(s). It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may be reposted only with the explicit permission of the copyright holder.
Predictive Coding and the Slowness Principle: An Information-Theoretic Approach
2008
Abstract

Cited by 1 (0 self)
Understanding the guiding principles of sensory coding strategies is a main goal in computational neuroscience. Among others, the principles of predictive coding and slowness appear to capture aspects of sensory processing. Predictive coding postulates that sensory systems are adapted to the structure of their input signals such that information about future inputs is encoded. Slow feature analysis (SFA) is a method for extracting slowly varying components from quickly varying input signals, thereby learning temporally invariant features. Here, we use the information bottleneck method to state an information-theoretic objective function for temporally local predictive coding. We then show that the linear case of SFA can be interpreted as a variant of predictive coding that maximizes the mutual information between the current output of the system and the input signal in the next time step. This demonstrates that the slowness principle and predictive coding are intimately related.
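The linear-SFA case discussed above can be sketched as a generalized eigenproblem on a toy mixture of a slow and a fast sinusoid. The sources, seed, and variable names are our assumptions; the information-bottleneck derivation itself is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 2000)
# Two hidden sources, one slow and one fast, mixed by a random linear map.
sources = np.stack([np.sin(t), np.sin(11 * t)], axis=1)
x = sources @ rng.normal(size=(2, 2))          # observed mixture
x = x - x.mean(axis=0)

# Linear SFA as a generalized eigenproblem: minimize w^T A w / w^T B w with
# A = covariance of the time derivative and B = covariance of the signal;
# the eigenvector with the smallest eigenvalue is the slowest feature.
xdot = np.diff(x, axis=0)
A = xdot.T @ xdot / len(xdot)
B = x.T @ x / len(x)
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(B, A))
w = eigvecs[:, np.argmin(eigvals.real)].real
slow = x @ w                                   # recovered slow feature
```

The recovered feature matches the slow source up to sign and scale; for these sources the slowest direction is also the one whose next value is best predicted by the present, which is the SFA/predictive-coding link the abstract establishes.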