Results 1–10 of 11
Thermodynamics and Garbage Collection
 In ACM SIGPLAN Notices
, 1994
"... INTRODUCTION Computer scientists should have a knowledge of abstract statistical thermodynamics. First, computer systems are dynamical systems, much like physical systems, and therefore an important first step in their characterization is in finding properties and parameters that are constant over ..."
Abstract

Cited by 11 (0 self)
INTRODUCTION Computer scientists should have a knowledge of abstract statistical thermodynamics. First, computer systems are dynamical systems, much like physical systems, and therefore an important first step in their characterization is finding properties and parameters that are constant over time (i.e., constants of motion). Second, statistical thermodynamics successfully reduces macroscopic properties of a system to the statistical behavior of large numbers of microscopic processes. As computer systems become large assemblages of small components, an explanation of their macroscopic behavior may also be obtained as the aggregate statistical behavior of their component parts. If not, the elegance of the statistical thermodynamical approach can at least provide inspiration for new classes of models. Third, the components of computer systems are approaching the same size as the microscopic pr ...
The common patterns of nature
, 2009
"... We typically observe largescale outcomes that arise from the interactions of many hidden, smallscale processes. Examples include age of disease onset, rates of amino acid substitutions and composition of ecological communities. The macroscopic patterns in each problem often vary around a charact ..."
Abstract

Cited by 4 (2 self)
We typically observe large-scale outcomes that arise from the interactions of many hidden, small-scale processes. Examples include age of disease onset, rates of amino acid substitutions and composition of ecological communities. The macroscopic patterns in each problem often vary around a characteristic shape that can be generated by neutral processes. A neutral generative model assumes that each microscopic process follows unbiased or random stochastic fluctuations: random connections of network nodes; amino acid substitutions with no effect on fitness; species that arise or disappear from communities randomly. These neutral generative models often match common patterns of nature. In this paper, I present the theoretical background by which we can understand why these neutral generative models are so successful. I show where the classic patterns come from, such as the Poisson pattern, the normal or Gaussian pattern and many others. Each classic pattern was often discovered by a simple neutral generative model. The neutral patterns share a special characteristic: they describe the patterns of nature that follow from simple constraints on information. For example, any aggregation of processes that preserves information only about the mean and variance attracts to the Gaussian pattern; any aggregation that preserves information only about the mean attracts to the exponential pattern; any aggregation that preserves information only about the geometric mean attracts to the power law pattern. I present a simple and consistent informational framework of the common patterns of nature based on the method of maximum entropy. This framework shows that each neutral generative model is a special case that helps to discover a particular set of informational constraints; those informational constraints define a much wider domain of non-neutral generative processes that attract to the same neutral pattern.
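The abstract's claim that simple informational constraints pick out these limiting patterns can be checked with a small simulation. The sketch below is my own illustration, not code from the paper: standardized sums of independent uniforms lose their skewness (the Gaussian attractor for mean-and-variance constraints), and the gaps between uniformly scattered points show the std ≈ mean signature of the exponential attractor (a mean-only constraint).

```python
import math
import random

random.seed(1)

# Aggregation constrained by mean and variance -> Gaussian:
# standardized sums of many independent uniforms should show near-zero skewness.
def standardized_sum(n):
    s = sum(random.random() for _ in range(n))
    mean, var = n * 0.5, n / 12.0  # exact moments of a sum of n Uniform(0,1)
    return (s - mean) / math.sqrt(var)

samples = [standardized_sum(50) for _ in range(20000)]
m = sum(samples) / len(samples)
skew = sum((x - m) ** 3 for x in samples) / len(samples)

# Neutral process constrained only by the mean -> exponential:
# gaps between uniform random points are nearly exponential, so std/mean ~= 1.
points = sorted(random.random() for _ in range(100000))
gaps = [b - a for a, b in zip(points, points[1:])]
gm = sum(gaps) / len(gaps)
gs = math.sqrt(sum((g - gm) ** 2 for g in gaps) / len(gaps))

print(skew, gs / gm)
```

With these sample sizes, `skew` stays near 0 and `gs / gm` near 1, matching the two attractors the abstract describes.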
The Goldilocks effect: Human infants allocate attention to visual sequences that are neither too simple nor too complex
 PLoS ONE
, 2012
"... Human infants, like immature members of any species, must be highly selective in sampling information from their environment to learn efficiently. Failure to be selective would waste precious computational resources on material that is already known (too simple) or unknowable (too complex). In two e ..."
Abstract

Cited by 3 (0 self)
Human infants, like immature members of any species, must be highly selective in sampling information from their environment to learn efficiently. Failure to be selective would waste precious computational resources on material that is already known (too simple) or unknowable (too complex). In two experiments with 7- and 8-month-olds, we measure infants’ visual attention to sequences of events varying in complexity, as determined by an ideal learner model. Infants’ probability of looking away was greatest on stimulus items whose complexity (negative log probability) according to the model was either very low or very high. These results suggest a principle of infant attention that may have broad applicability: infants implicitly seek to maintain intermediate rates of information absorption and avoid wasting cognitive resources on overly simple or overly complex events.
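The paper's "ideal learner model" assigns each stimulus item a complexity equal to its negative log probability. As a hedged illustration, the sketch below uses a simplified Dirichlet-multinomial learner of my own (not the authors' exact model) to show how repeated events become predictable (low surprisal) while a novel event spikes in surprisal:

```python
import math

def surprisals(sequence, k=3, alpha=1.0):
    """Negative log predictive probability of each event under a
    simple Dirichlet-multinomial learner (illustrative sketch only)."""
    counts = [0] * k
    out = []
    for t, event in enumerate(sequence):
        # Posterior predictive probability with symmetric Dirichlet prior.
        p = (counts[event] + alpha) / (t + k * alpha)
        out.append(-math.log(p))
        counts[event] += 1
    return out

# A run of the same event type becomes ever more predictable;
# the sudden novel event at the end is highly surprising.
seq = [0, 0, 0, 0, 0, 0, 2]
s = surprisals(seq)
```

Under this toy model the sixth repetition has surprisal well below the first event, while the final novel event scores higher than either, which is the complexity axis along which the experiments vary their stimuli.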
Intelligent Machines in the 21st Century: Foundations Of Inference and Inquiry
 Soc. Lond. A
, 2003
"... The last century saw the application of Boolean algebra toward the construction of computing machines, which work by applying logical transformations to information contained in their memory. The development of information theory and the generalization of Boolean algebra to Bayesian inference have e ..."
Abstract

Cited by 2 (2 self)
The last century saw the application of Boolean algebra toward the construction of computing machines, which work by applying logical transformations to information contained in their memory. The development of information theory and the generalization of Boolean algebra to Bayesian inference have enabled these computing machines, in the last quarter of the twentieth century, to be endowed with the ability to learn by making inferences from data. This revolution is just beginning as new computational techniques continue to make difficult problems more accessible. Recent advances in understanding the foundations of probability theory have revealed implications for areas other than logic. Of relevance to intelligent machines, we identified the algebra of questions as the free distributive algebra, which now allows us to work with questions in a way analogous to that which Boolean algebra enables us to work with logical statements. In this paper we begin with a history of inferential reasoning, highlighting key concepts that have led to the automation of inference in modern machine learning systems. We then discuss the foundations of inference in more detail using a modern viewpoint that relies on the mathematics of partially ordered sets and the scaffolding of lattice theory. This new viewpoint allows us to develop the logic of inquiry and introduce a measure describing the relevance of a proposed question to an unresolved issue. We will demonstrate the automation of inference, and discuss how this new logic of inquiry will enable intelligent machines to ask questions. Automation of both inference and inquiry promises to allow robots to perform science in the far reaches of our solar system and in other star systems by enabling them not only to make inferences from data, but also to decide which question to ask, experiment to perform, or measurement to take given what they have learned and what they are designed to understand.
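The abstract's claim that Bayesian inference generalizes Boolean algebra can be made concrete with a minimal sketch (my own example, not from the paper): Bayes' rule updates graded beliefs over hypotheses, and in the limit of 0/1 likelihoods it collapses to ordinary deductive elimination.

```python
def posterior(prior, likelihood):
    """Bayes' rule over a discrete hypothesis space.
    prior: dict hypothesis -> P(h); likelihood: dict hypothesis -> P(data | h)."""
    joint = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(joint.values())  # normalizing constant P(data)
    return {h: j / z for h, j in joint.items()}

# Graded case: the evidence favours h1 but does not decide the question.
p = posterior({"h1": 0.5, "h2": 0.5}, {"h1": 0.8, "h2": 0.2})

# Boolean limit: 0/1 likelihoods recover deductive elimination of h2,
# the sense in which probability theory extends Boolean logic.
q = posterior({"h1": 0.5, "h2": 0.5}, {"h1": 1.0, "h2": 0.0})
```

Here `p` assigns h1 a posterior of 0.8, while `q` assigns it probability 1 exactly, mirroring a logical deduction.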
Natural selection maximizes Fisher information
 REVIEW, doi:10.1111/j.1420-9101.2008.01647.x
"... evolution; Fisher’s fundamental theorem; information theory; population genetics. In biology, information flows from the environment to the genome by the process of natural selection. However, it has not been clear precisely what sort of information metric properly describes natural selection. Here, ..."
Abstract
Keywords: evolution; Fisher’s fundamental theorem; information theory; population genetics.
In biology, information flows from the environment to the genome by the process of natural selection. However, it has not been clear precisely what sort of information metric properly describes natural selection. Here, I show that Fisher information arises as the intrinsic metric of natural selection and evolutionary dynamics. Maximizing the amount of Fisher information about the environment captured by the population leads to Fisher’s fundamental theorem of natural selection, the most profound statement about how natural selection influences evolutionary dynamics. I also show a relation between Fisher information and Shannon information (entropy) that may help to unify the correspondence between information and dynamics. Finally, I discuss ...
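For reference, the standard textbook forms behind the quantities named in this abstract (a hedged sketch; the paper's own derivations and notation may differ):

```latex
% Fisher information of a parameter \theta in a model p(x \mid \theta):
I(\theta) = \mathbb{E}\!\left[\left(\frac{\partial \log p(x \mid \theta)}
  {\partial \theta}\right)^{\!2}\right]

% Fisher's fundamental theorem of natural selection (classical form):
% the rate of increase in mean fitness \bar{w} attributable to selection
% equals the additive genetic variance in fitness over mean fitness.
\frac{d\bar{w}}{dt} = \frac{\sigma_A^2(w)}{\bar{w}}
```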
The common patterns of nature
 REVIEW, doi:10.1111/j.1420-9101.2009.01775.x
"... maximum entropy; neutral theories; population genetics. We typically observe largescale outcomes that arise from the interactions of many hidden, smallscale processes. Examples include age of disease onset, rates of amino acid substitutions and composition of ecological communities. The macroscopi ..."
Abstract
Keywords: maximum entropy; neutral theories; population genetics.
In fact, all epistemologic value of the theory of probability is based on this: that large-scale random phenomena in their collective action create strict, non-random regularity. (Gnedenko)
The Common Patterns of Nature
, 906
"... and page numbers.] The published, definitive version of this article is freely available at: ..."
Abstract
 Add to MetaCart
and page numbers.] The published, definitive version of this article is freely available at: