Results 1–10 of 394
Algorithmic information theory
 IBM JOURNAL OF RESEARCH AND DEVELOPMENT
, 1977
Cited by 320 (19 self)

Abstract
This paper reviews algorithmic information theory, which is an attempt to apply information-theoretic and probabilistic ideas to recursive function theory. Typical concerns in this approach are, for example, the number of bits of information required to specify an algorithm, or the probability that a program whose bits are chosen by coin flipping produces a given output. During the past few years the definitions of algorithmic information theory have been reformulated. The basic features of the new formalism are presented here and certain results of R. M. Solovay are reported.
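The "coin-flipping" probability mentioned in the abstract can be made concrete with a toy sketch. The two-bit instruction machine below is purely hypothetical (it is not Chaitin's formalism); it only illustrates the idea of measuring the probability that uniformly random program bits produce a given output.

```python
from itertools import product

def run(program):
    """Interpret a bit sequence as a program for a toy machine:
    read two bits at a time; (0, 0) emits '0', (0, 1) emits '1',
    and any pair starting with 1 halts. (Invented machine, for
    illustration only.)"""
    out = []
    i = 0
    while i + 1 < len(program):
        a, b = program[i], program[i + 1]
        if a == 1:
            break           # halt instruction
        out.append(str(b))  # emit the second bit of the pair
        i += 2
    return "".join(out)

def output_probability(target, n=8):
    """Fraction of all length-n coin-flip programs whose output is `target`."""
    programs = list(product([0, 1], repeat=n))
    hits = sum(1 for p in programs if run(p) == target)
    return hits / len(programs)

print(output_probability("01"))  # 0.03125
```

Exactly 8 of the 256 length-8 programs emit "0", then "1", then halt, so the toy "algorithmic probability" of "01" is 8/256.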
The Evolution of Emergent Computation
, 1995
Cited by 95 (18 self)

Abstract
This paper reports the application of new methods for detecting computation in nonlinear processes to a simple evolutionary model that allows us to directly address these questions. The main result is the evolutionary discovery of methods for emergent global computation in a spatially distributed system consisting of locally interacting processors. We use the general term "emergent computation" to describe the appearance of global information-processing in such systems (cf. (6,7)). Our goal is to understand the mechanisms by which evolution can discover methods of emergent computation. We are studying this question in a theoretical framework that, while simplified, still captures the essence of the phenomena of interest. This framework requires (i) an idealized class of decentralized system in which global information-processing can arise from the actions of simple, locally-connected units; (ii) a computational task that necessitates global information processing; and (iii) an idealized computational model of evolution. One of the simplest systems in which emergent computation can be studied is a one-dimensional binary-state cellular automaton (CA) (8), a one-dimensional spatial lattice of
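The kind of system described above, a one-dimensional binary-state CA of locally interacting units, can be sketched in a few lines. The Wolfram rule number used here is an arbitrary illustrative choice, not one of the rules evolved in the paper.

```python
def ca_step(state, rule=110):
    """One synchronous update of a one-dimensional binary-state CA
    with periodic boundaries. `rule` is a Wolfram rule number for
    radius-1 neighborhoods."""
    n = len(state)
    # Rule table: neighborhood code (left<<2 | center<<1 | right) -> next bit.
    table = [(rule >> code) & 1 for code in range(8)]
    new = []
    for i in range(n):
        left, center, right = state[(i - 1) % n], state[i], state[(i + 1) % n]
        new.append(table[(left << 2) | (center << 1) | right])
    return new

state = [0, 0, 0, 1, 0, 0, 0]
state = ca_step(state)  # -> [0, 0, 1, 1, 0, 0, 0]
```

Each cell updates from its own state and its two neighbors only; any global computation (such as the density classification studied in this line of work) must emerge from repeated local updates.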
Physically-Based Visual Simulation on Graphics Hardware
, 2002
Cited by 86 (5 self)

Abstract
In this paper, we present a method for real-time visual simulation of diverse dynamic phenomena using programmable graphics hardware. The simulations we implement use an extension of cellular automata known as the coupled map lattice (CML). CML represents the state of a dynamic system as continuous values on a discrete lattice. In our implementation we store the lattice values in a texture, and use pixel-level programming to implement simple next-state computations on lattice nodes and their neighbors. We apply these computations successively to produce interactive visual simulations of convection, reaction-diffusion, and boiling. We have built an interactive framework for building and experimenting with CML simulations running on graphics hardware, and have integrated them into interactive 3D graphics applications.
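The CML update described above (continuous lattice values, a simple next-state computation over each node and its neighbors) can be sketched on the CPU in plain Python. The GPU/texture machinery of the paper is omitted, and the local map and coupling constant below are arbitrary illustrative choices.

```python
def logistic(x, a=3.7):
    """Local map applied independently at each lattice site
    (an illustrative choice of map and parameter)."""
    return a * x * (1.0 - x)

def cml_step(lattice, eps=0.3, a=3.7):
    """One diffusively coupled map lattice update on a 1D lattice
    with periodic boundaries: apply the local map everywhere, then
    mix each site with its two neighbors."""
    n = len(lattice)
    f = [logistic(x, a) for x in lattice]
    return [(1 - eps) * f[i] + (eps / 2) * (f[(i - 1) % n] + f[(i + 1) % n])
            for i in range(n)]

field = [0.1, 0.5, 0.9, 0.3]
field = cml_step(field)
```

On the GPU, the lattice would live in a texture and `cml_step` would become a fragment program sampling neighboring texels; the arithmetic per site is the same.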
A Survey Of Stream Processing
, 1995
Cited by 85 (2 self)

Abstract
Stream processing is a term that is used widely in the literature to describe a variety of systems. We present an overview of the historical development of stream processing and a detailed discussion of the different languages and techniques for programming with streams that can be found in the literature. This includes an analysis of dataflow, specialized functional and logic programming with streams, reactive systems, signal processing systems, and the use of streams in the design and verification of hardware. The aim of this survey is to analyze the development of each of these specialized topics and to determine whether a general theory of stream processing has emerged. As such, we discuss and classify the different classes of stream processing systems found in the literature from the perspective of programming primitives, implementation techniques, and computability issues, including a comparison of the semantic models that are used to formalize stream-based computation.
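One of the surveyed styles, functional programming over lazy streams, can be sketched with Python generators. The combinator names `smap` and `sfilter` are ad hoc for this example, not taken from any surveyed language.

```python
from itertools import count, islice

def smap(f, stream):
    """Pointwise map over a (possibly infinite) stream."""
    for x in stream:
        yield f(x)

def sfilter(pred, stream):
    """Keep only the stream elements satisfying `pred`."""
    for x in stream:
        if pred(x):
            yield x

# An infinite stream of naturals, squared, keeping even values.
evens_squared = sfilter(lambda v: v % 2 == 0, smap(lambda n: n * n, count()))
first_five = list(islice(evens_squared, 5))  # [0, 4, 16, 36, 64]
```

Because generators are demand-driven, the pipeline computes only as much of the infinite stream as the consumer requests, which is the essential semantic point shared by lazy functional and dataflow formulations.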
Special Purpose Parallel Computing
 Lectures on Parallel Computation
, 1993
Cited by 77 (5 self)

Abstract
A vast amount of work has been done in recent years on the design, analysis, implementation and verification of special purpose parallel computing systems. This paper presents a survey of various aspects of this work. A long, but by no means complete, bibliography is given.

1. Introduction

Turing [365] demonstrated that, in principle, a single general purpose sequential machine could be designed which would be capable of efficiently performing any computation which could be performed by a special purpose sequential machine. The importance of this universality result for subsequent practical developments in computing cannot be overstated. It showed that, for a given computational problem, the additional efficiency advantages which could be gained by designing a special purpose sequential machine for that problem would not be great. Around 1944, von Neumann produced a proposal [66, 389] for a general purpose stored-program sequential computer which captured the fundamental principles of...
Promises and Challenges of Evolvable Hardware
, 1996
Cited by 65 (4 self)

Abstract
Evolvable hardware (EHW) has attracted increasing attention since the early 1990s with the advent of easily reconfigurable hardware such as field-programmable gate arrays (FPGAs). It promises to provide an entirely new approach to complex electronic circuit design and new adaptive hardware. EHW has been demonstrated to be able to perform a wide range of tasks, from pattern recognition to adaptive control. However, many fundamental issues in EHW remain open. This paper reviews the current status of EHW, discusses the promises and possible advantages of EHW, and indicates the challenges we must meet in order to develop practical and large-scale EHW.

1 Introduction

Evolvable hardware (EHW) refers to hardware that can change its architecture and behaviour dynamically and autonomously by interacting with its environment. At present, almost all EHW uses an evolutionary algorithm (EA) as its main adaptive mechanism. One of the key motivations behind EHW is to learn from N...
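The evolutionary-algorithm mechanism the abstract refers to can be sketched in miniature: the toy below evolves a 4-bit lookup table toward an XOR truth table. This illustrates only the EA loop itself; real EHW evolves configuration bitstreams for reconfigurable devices, and the genome, fitness, and parameters here are invented for the example.

```python
import random

TARGET = [0, 1, 1, 0]  # XOR truth table over two inputs (toy target)

def fitness(genome):
    """Number of truth-table rows the candidate 'circuit' gets right."""
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def evolve(pop_size=20, generations=50, mutation=0.1, seed=0):
    """Minimal truncation-selection EA over 4-bit genomes: keep the
    better half each generation and refill with mutated copies."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [[1 - b if rng.random() < mutation else b for b in p]
                    for p in parents]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In hardware evolution the genome would instead encode gate configurations or FPGA configuration bits, and fitness would be measured on the physical or simulated circuit.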
Optimal Ordered Problem Solver
, 2002
Cited by 62 (20 self)

Abstract
We present a novel, general, optimally fast, incremental way of searching for a universal algorithm that solves each task in a sequence of tasks. The Optimal Ordered Problem Solver (OOPS) continually organizes and exploits previously found solutions to earlier tasks, efficiently searching not only the space of domain-specific algorithms, but also the space of search algorithms. Essentially, we extend the principles of optimal non-incremental universal search to build an incremental universal learner that is able to improve itself through experience.
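The non-incremental universal search that OOPS extends can be sketched as enumeration of programs in order of increasing length over a toy instruction set. The two-operation language below is invented for illustration and has none of OOPS's incremental bias-shifting or Levin-style time allocation (where phase t would give each program p a budget proportional to 2^(t - len(p))).

```python
from itertools import product

# Toy instruction set: each instruction transforms an integer.
OPS = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2}

def run_program(prog, x):
    """Apply a sequence of instruction names to the value x."""
    for op in prog:
        x = OPS[op](x)
    return x

def universal_search(start, goal, max_len=10):
    """Enumerate programs in order of increasing length and return
    the first (hence a shortest) program mapping start to goal."""
    for length in range(max_len + 1):
        for prog in product(OPS, repeat=length):
            if run_program(prog, start) == goal:
                return prog
    return None

universal_search(1, 10)  # ('inc', 'dbl', 'inc', 'dbl'): 1 -> 2 -> 4 -> 5 -> 10
```

OOPS's contribution, by contrast, is to make such search incremental: solutions to earlier tasks bias and speed up the search for later ones, and the search space includes search algorithms themselves.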
COMPLEXITY OF SELF-ASSEMBLED SHAPES
, 2007
Cited by 61 (4 self)

Abstract
The connection between self-assembly and computation suggests that a shape can be considered the output of a self-assembly “program,” a set of tiles that fit together to create a shape. It seems plausible that the size of the smallest self-assembly program that builds a shape and the shape’s descriptional (Kolmogorov) complexity should be related. We show that when using a notion of a shape that is independent of scale, this is indeed so: in the tile assembly model, the minimal number of distinct tile types necessary to self-assemble a shape, at some scale, can be bounded both above and below in terms of the shape’s Kolmogorov complexity. As part of the proof, we develop a universal constructor for this model of self-assembly that can execute an arbitrary Turing machine program specifying how to grow a shape. Our result implies, somewhat counterintuitively, that self-assembly of a scaled-up version of a shape often requires fewer tile types. Furthermore, the independence of scale in self-assembly theory appears to play the same crucial role as the independence of running time in the theory of computability. This leads to an elegant formulation of languages of shapes generated by self-assembly. Considering functions from bit strings to shapes, we show that the running-time complexity, with respect to Turing machines, is polynomially equivalent to the scale complexity of the same function implemented via self-assembly by a finite set of tile types. Our results also hold for shapes defined by Wang tiling (where there is no sense of a self-assembly process), except that here time complexity must be measured with respect to nondeterministic Turing machines.
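A toy version of the tile assembly model mentioned above: tiles carry glues with strengths, and a tile may attach wherever its matching glue strengths meet the temperature. The three tile types below (names and glues invented for illustration) deterministically assemble a 3x1 line from a seed; the paper's constructions are far richer, but the attachment rule is the same in spirit.

```python
# Tile type = dict of side -> (glue_label, strength); sides: N, E, S, W.
TILES = {
    "seed": {"E": ("a", 1)},
    "mid":  {"W": ("a", 1), "E": ("b", 1)},
    "cap":  {"W": ("b", 1)},
}
OFFSET = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}
OPPOSITE = {"N": "S", "E": "W", "S": "N", "W": "E"}

def attach_strength(tile, pos, assembly):
    """Total strength of glue matches between `tile` placed at `pos`
    and the already-placed neighboring tiles."""
    total = 0
    for side, (dx, dy) in OFFSET.items():
        nbr = assembly.get((pos[0] + dx, pos[1] + dy))
        if nbr is None or side not in TILES[tile]:
            continue
        glue, strength = TILES[tile][side]
        if TILES[nbr].get(OPPOSITE[side], (None, 0))[0] == glue:
            total += strength
    return total

def grow(seed="seed", temperature=1, steps=10):
    """Greedy self-assembly from a seed tile at the origin: repeatedly
    place any tile whose glue strength at a frontier site meets the
    temperature, until nothing more can attach."""
    assembly = {(0, 0): seed}
    for _ in range(steps):
        placed = False
        frontier = {(x + dx, y + dy)
                    for (x, y) in assembly for dx, dy in OFFSET.values()}
        for pos in sorted(frontier - set(assembly)):
            for t in TILES:
                if attach_strength(t, pos, assembly) >= temperature:
                    assembly[pos] = t
                    placed = True
                    break
            if placed:
                break
        if not placed:
            break
    return assembly

assembly = grow()  # seed at (0,0), mid at (1,0), cap at (2,0)
```

Counting the distinct entries of `TILES` is the "program size" measure the abstract relates to Kolmogorov complexity: here three tile types suffice for this shape.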
Toward robust integrated circuits: The embryonics approach
 Proceedings of the IEEE
, 2000
Cited by 56 (13 self)

Abstract
The growth and operation of all living beings are directed by the interpretation, in each of their cells, of a chemical program, the DNA string or genome. This process is the source of inspiration for the Embryonics (embryonic electronics) project, whose final objective is the design of highly robust integrated circuits, endowed with properties usually associated with the living world: self-repair (cicatrization) and self-replication. The Embryonics architecture is based on four hierarchical levels of organization. 1) The basic primitive of our system is the molecule, a multiplexer-based element of a novel programmable circuit. 2) A finite set of molecules makes up a cell, essentially a small processor with an associated memory. 3) A finite set of cells makes up an organism, an application-specific multiprocessor system. 4) The organism can itself replicate, giving rise to a population of identical organisms. We begin by describing in detail the implementation of an artificial cell characterized by
Gödel's Theorem and Information
, 1982
Cited by 53 (6 self)

Abstract
Gödel's theorem may be demonstrated using arguments having an information-theoretic flavor. In such an approach it is possible to argue that if a theorem contains more information than a given set of axioms, then it is impossible for the theorem to be derived from the axioms. In contrast with the traditional proof based on the paradox of the liar, this new viewpoint suggests that the incompleteness phenomenon discovered by Gödel is natural and widespread rather than pathological and unusual.