Results 1–10 of 23
Classification of Random Boolean Networks
, 2002
Abstract

Cited by 70 (14 self)
We provide the first classification of different types of Random Boolean Networks (RBNs). We study how RBNs differ depending on the degree of synchronicity and determinism of their updating scheme. To do so, we first define three new types of RBNs. We note some similarities and differences between the different types with the aid of a public software laboratory we developed. In particular, we find that point attractors are independent of the updating scheme, and that RBNs differ more according to their determinism or non-determinism than according to their synchronicity or asynchronicity. We also show a way of mapping non-synchronous deterministic RBNs into synchronous RBNs. Our results are important for justifying the use of specific types of RBNs for modelling natural phenomena.
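The distinction between updating schemes, and the observation that point attractors survive any of them, can be sketched in a few lines of Python. This is a minimal illustration with hypothetical helper names, not the authors' laboratory software:

```python
import random

def make_rbn(n, k, seed=0):
    """Random Boolean network: n nodes, each reading k random inputs
    through a random lookup table."""
    rng = random.Random(seed)
    wiring = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return wiring, tables

def node_next(state, wiring, tables, i):
    """Next value of node i given the current global state."""
    idx = 0
    for src in wiring[i]:
        idx = (idx << 1) | state[src]
    return tables[i][idx]

def sync_step(state, wiring, tables):
    """Classical RBN: all nodes update simultaneously."""
    return tuple(node_next(state, wiring, tables, i) for i in range(len(state)))

def async_step(state, wiring, tables, i):
    """Asynchronous scheme: only node i updates; choosing i at random gives
    a nondeterministic scheme, cycling through i a deterministic one."""
    s = list(state)
    s[i] = node_next(state, wiring, tables, i)
    return tuple(s)

def is_point_attractor(state, wiring, tables):
    """A point attractor is a fixed point of the synchronous map."""
    return sync_step(state, wiring, tables) == state
```

A synchronous fixed point leaves every node unchanged, so it remains fixed no matter which subset of nodes is updated, which is the scheme-independence of point attractors noted in the abstract.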
A framework for the local information dynamics of distributed computation in complex systems
, 2013
Information storage and transfer in the synchronization process in locally connected networks
Measuring Information Storage and Transfer in Swarms
Abstract

Cited by 3 (0 self)
Spatial aggregation of animal groups gives individuals many benefits that they would not be able to obtain otherwise. One of the key questions in the study of these animal groups, or “swarms”, concerns the way in which information is propagated through the group. In this paper, we examine this propagation using an information-theoretic framework of distributed computation. Swarm dynamics is interpreted as a type of distributed computation. Two localized information-theoretic measures (active information storage and transfer entropy) are adapted to the task of tracing the information dynamics in a kinematic context. The observed types of swarm dynamics, as well as transitions among these types, are shown to coincide with well-marked local and global optima of the proposed measures. Specifically, active information storage tends to be maximized as the swarm becomes less fragmented and the kinematic history begins to strongly inform an observer about the next state. The peak of transfer entropy is observed at the final stages of merging of swarm fragments, near the “edge of chaos” where the system actively computes its next stable configuration. Both measures tend to be minimized for either unstable or static swarm configurations. These results show that the measures can be applied to non-trivial models; most importantly, they can tell us about the dynamics within models where we cannot rely on visual intuition.
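Active information storage, A(X) = I(x_n^(k); x_{n+1}), can be estimated for a symbolic time series with a simple plug-in estimator. The sketch below assumes discrete states and history length k; the paper works with continuous kinematic variables, so this is only a discrete analogue:

```python
from collections import Counter
from math import log2

def active_info_storage(series, k=2):
    """Plug-in estimate (in bits) of A(X) = I(x_n^{(k)}; x_{n+1}):
    how much the length-k history of a process informs its next value."""
    pairs = [(tuple(series[n - k:n]), series[n]) for n in range(k, len(series))]
    n = len(pairs)
    joint = Counter(pairs)                  # (history, next) counts
    past = Counter(p for p, _ in pairs)     # history counts
    nxt = Counter(x for _, x in pairs)      # next-value counts
    a = 0.0
    for (p, x), c in joint.items():
        # p_joint / (p_past * p_next) = c * n / (past[p] * nxt[x])
        a += (c / n) * log2((c * n) / (past[p] * nxt[x]))
    return a
```

A perfectly periodic binary process stores one full bit (its history determines the next value), while a constant process stores none.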
Transfer entropy and transient limits of computation, Scientific Reports 4
, 2014
Abstract

Cited by 2 (1 self)
Transfer entropy is a recently introduced information-theoretic measure quantifying directed statistical coherence between spatiotemporal processes, and is widely used in diverse fields ranging from finance to neuroscience. However, its relationships to fundamental limits of computation, such as Landauer's limit, remain unknown. Here we show that in order to increase transfer entropy (predictability) by one bit, heat flow must match or exceed Landauer's limit. Importantly, we generalise Landauer's limit to bidirectional information dynamics for non-equilibrium processes, revealing that the limit applies to prediction, in addition to retrodiction (information erasure). Furthermore, the results are related to negentropy, and to Bremermann's limit and the Bekenstein bound, producing, perhaps surprisingly, lower bounds on the computational deceleration and information loss incurred during an increase in predictability about the process. The identified relationships set new computational limits in terms of fundamental physical quantities, and establish transfer entropy as a central measure connecting information theory, thermodynamics and the theory of computation. Transfer entropy [1] was designed to determine the direction of information transfer between two, possibly coupled, processes by detecting asymmetry in their interactions. It is a Shannon information-theoretic quantity [2–4] which measures a directed relationship between two time-series processes Y and X. Specifically, the transfer entropy T_{Y→X} measures the average amount of information that the state y_n at time n of the source time-series process Y provides about the next state x_{n+1} of the destination time-series process X, in the context of the previous state x_n of the destination process (see more details in Methods):
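For discrete time series and history length 1, the quantity T_{Y→X} described above reduces to a conditional mutual information that a plug-in estimator can compute directly from counts. This is a minimal sketch, not the paper's estimator:

```python
from collections import Counter
from math import log2

def transfer_entropy(y, x):
    """Plug-in estimate (in bits) of T_{Y->X} with history length 1:
    the average information y_n adds about x_{n+1} beyond x_n."""
    triples = [(x[n + 1], x[n], y[n]) for n in range(len(x) - 1)]
    n = len(triples)
    c_xyz = Counter(triples)                             # (x_{n+1}, x_n, y_n)
    c_yz = Counter((xn, yn) for _, xn, yn in triples)    # (x_n, y_n)
    c_xz = Counter((x1, xn) for x1, xn, _ in triples)    # (x_{n+1}, x_n)
    c_z = Counter(xn for _, xn, _ in triples)            # x_n
    t = 0.0
    for (x1, xn, yn), c in c_xyz.items():
        # log p(x_{n+1} | x_n, y_n) / p(x_{n+1} | x_n); sample sizes cancel
        t += (c / n) * log2((c * c_z[xn]) / (c_xz[(x1, xn)] * c_yz[(xn, yn)]))
    return t
```

When X simply copies Y with a one-step lag, T_{Y→X} approaches one bit while T_{X→Y} stays near zero, recovering the directionality the measure was designed to detect.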
Information Dynamics at the Edge of Chaos: Measures, Examples, and Principles
Abstract

Cited by 1 (0 self)
Abstract—We survey state-of-the-art methods of information dynamics and briefly discuss some of the popular measures, exemplifying their use in different contexts, including cellular automata, swarming behavior, modular robotics, and random Boolean networks. Several possible principles that generalize the patterns observed in the examples are also suggested. These principles are aimed at providing thermodynamic interpretations of critical phenomena exhibited by many complex Artificial Life systems.
Finding Optimal Random Boolean Networks for Reservoir Computing
Abstract
Reservoir Computing (RC) is a computational model in which a trained readout layer interprets the dynamics of a component called a reservoir that is excited by external input stimuli. The reservoir is often constructed using homogeneous neural networks in which a neuron's in-degree distribution as well as its functions are uniform. RC lends itself to computing with physical and biological systems. However, most such systems are not homogeneous. In this paper, we use Random Boolean Networks (RBNs) to build the reservoir. We explore the computational capabilities of such an RC device using the temporal parity task and the temporal density classification task. We study the sufficient dynamics of RBNs using the kernel quality and generalization rank measures. We verify the finding by Lizier et al. (2008) that the critical connectivity of RBNs optimizes the balance between the high memory capacity of RBNs with ⟨K⟩ < 2 and the higher information processing of RBNs with ⟨K⟩ > 2. We show that in an RBN-based RC system, the optimal connectivity for the parity task, a processing-intensive task, and for the density classification task, a memory-intensive task, agrees with Lizier et al.'s theoretical results. Our findings may contribute to the development of optimal self-assembled nanoelectronic computer architectures and biologically inspired computing paradigms.
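A minimal sketch of an RBN reservoir and a rank-based kernel-quality measure, assuming a simple input-injection scheme where the current input bit overwrites a fixed set of nodes before each update. The function names and injection scheme are illustrative, not the paper's setup:

```python
import random
import numpy as np

def make_rbn(n, k, seed=0):
    """Random Boolean reservoir: n nodes, each reading k random inputs
    through a random truth table."""
    rng = random.Random(seed)
    wiring = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return wiring, tables

def drive(wiring, tables, u, in_nodes, n):
    """Drive the reservoir with bit stream u, overwriting in_nodes with the
    current input bit before each synchronous update; return the trajectory."""
    state = [0] * n
    traj = []
    for bit in u:
        for j in in_nodes:
            state[j] = bit                      # input injection (assumed scheme)
        state = [tables[i][sum(state[s] << p for p, s in enumerate(wiring[i]))]
                 for i in range(n)]
        traj.append(state)
    return np.array(traj, dtype=float)

def kernel_quality(wiring, tables, n, in_nodes, streams):
    """Kernel quality in the rank sense: the rank of the matrix of final
    reservoir states produced by distinct input streams."""
    finals = np.array([drive(wiring, tables, u, in_nodes, n)[-1] for u in streams])
    return np.linalg.matrix_rank(finals)
```

A higher rank means the reservoir maps different input histories to more linearly separable states, which is what a linear readout needs for tasks like temporal parity.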
Acknowledgments
Abstract
Computer engineering graduate of EPF, of Swiss nationality and originally from Sainte-Croix (VD). Accepted on the recommendation of the jury: Prof. Emre Telatar (EPFL), jury president; Prof. Serge Vaudenay (EPFL), thesis supervisor.