Results 1 – 10 of 62
A DNA and restriction enzyme implementation of Turing Machines.
 DIMACS SERIES IN DISCRETE MATHEMATICS AND THEORETICAL COMPUTER SCIENCE
Abstract

Cited by 80 (1 self)
Bacteria employ restriction enzymes to cut or restrict DNA at or near specific words in a unique way. Many restriction enzymes cut the two strands of double-stranded DNA at different positions, leaving overhangs of single-stranded DNA. Two pieces of DNA may be rejoined or ligated if their terminal overhangs are complementary. Using these operations, fragments of DNA, or oligonucleotides, may be inserted into and deleted from a circular piece of plasmid DNA. We propose an encoding for the transition table of a Turing machine in DNA oligonucleotides and a corresponding series of restrictions and ligations of those oligonucleotides that, when performed on circular DNA encoding an instantaneous description of a Turing machine, simulate the operation of the Turing machine encoded in those oligonucleotides. DNA-based Turing machines have been proposed by Charles Bennett, but they invoke imaginary enzymes to perform the state–symbol transitions. Our approach differs in that every operation can be pe...
Gödel's Theorem and Information
, 1982
Abstract

Cited by 53 (6 self)
Gödel's theorem may be demonstrated using arguments having an information-theoretic flavor. In such an approach it is possible to argue that if a theorem contains more information than a given set of axioms, then it is impossible for the theorem to be derived from the axioms. In contrast with the traditional proof based on the paradox of the liar, this new viewpoint suggests that the incompleteness phenomenon discovered by Gödel is natural and widespread rather than pathological and unusual.
Unknown quantum states: the quantum de Finetti representation
 J. Math. Phys
Abstract

Cited by 44 (7 self)
We present an elementary proof of the quantum de Finetti representation theorem, a quantum analogue of de Finetti’s classical theorem on exchangeable probability assignments. This contrasts with the original proof of Hudson and Moody [Z. Wahrschein. verw. Geb. 33, 343 (1976)], which relies on advanced mathematics and does not share the same potential for generalization. The classical de Finetti theorem provides an operational definition of the concept of an unknown probability in Bayesian probability theory, where probabilities are taken to be degrees of belief instead of objective states of nature. The quantum de Finetti theorem, in a closely analogous fashion, deals with exchangeable density-operator assignments and provides an operational definition of the concept of an “unknown quantum state” in quantum-state tomography. This result is especially important for information-based interpretations of quantum mechanics, where quantum states, like probabilities, are taken to be states of knowledge rather than states of nature. We further demonstrate that the theorem fails for real Hilbert spaces and discuss the significance of this point.
Information Distance
, 1997
Abstract

Cited by 36 (4 self)
While Kolmogorov complexity is the accepted absolute measure of information content in an individual finite object, a similarly absolute notion is needed for the information distance between two individual objects, for example, two pictures. We give several natural definitions of a universal information metric, based on the length of shortest programs for either ordinary computations or reversible (dissipationless) computations. It turns out that these definitions are equivalent up to an additive logarithmic term. We show that the information distance is a universal cognitive similarity distance. We investigate the maximal correlation of the shortest programs involved, the maximal uncorrelation of programs (a generalization of the Slepian-Wolf theorem of classical information theory), and the density properties of the discrete metric spaces induced by the information distances. A related distance measures the amount of non-reversibility of a computation. Using the physical theo...
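The compression-based approximation that grew out of this line of work gives a feel for the metric: replacing the uncomputable Kolmogorov complexity K with the output length of a real compressor yields a practical stand-in for the universal information distance. A minimal sketch (the function names are ours, and zlib is just one choice of compressor):

```python
import zlib

def clen(data: bytes) -> int:
    # Compressed length as a crude, computable stand-in for
    # the Kolmogorov complexity K(data).
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: a practical approximation of
    # the universal information distance sketched in the abstract.
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Identical or closely related objects score near 0, while unrelated incompressible data scores near 1; the quality of the approximation is limited by how well the chosen compressor models the data.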
Reversibility and adiabatic computation: trading time and space for energy
 PROC ROYAL SOCIETY OF LONDON, SERIES A
, 1997
Abstract

Cited by 30 (12 self)
Future miniaturization and mobilization of computing devices requires energy-parsimonious ‘adiabatic’ computation. This is contingent on logical reversibility of computation. An example is the idea of quantum computations, which are reversible except for the irreversible observation steps. We propose to study quantitatively the exchange of computational resources like time and space for irreversibility in computations. Reversible simulations of irreversible computations are memory intensive. Such (polynomial time) simulations are analysed here in terms of ‘reversible’ pebble games. We show that Bennett’s pebbling strategy uses the least additional space for the greatest number of simulated steps. We derive a tradeoff for storage space versus irreversible erasure. Next we consider reversible computation itself. An alternative proof is provided for the precise expression of the ultimate irreversibility cost of an otherwise reversible computation without restrictions on time and space use. A time-irreversibility tradeoff hierarchy in the exponential time region is exhibited. Finally, extreme time-irreversibility tradeoffs for reversible computations in the thoroughly unrealistic range of computable versus noncomputable time bounds are given.
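Bennett's pebbling strategy mentioned in the abstract can be sketched as a toy recursion (this is an illustrative model, not the paper's formal pebble game, and the names are ours): to reach the checkpoint 2**n steps ahead, compute the midpoint checkpoint, recurse on the second half, then uncompute the midpoint so its space can be reclaimed.

```python
def pebble(start: int, length: int, ops: list) -> None:
    # Reversibly advance from checkpoint `start` by `length` steps
    # (a power of two), leaving only the endpoint checkpoint saved.
    if length == 1:
        ops.append(("save", start + 1))   # simulate one segment forward
        return
    half = length // 2
    pebble(start, half, ops)              # compute the midpoint checkpoint
    pebble(start + half, half, ops)       # compute the endpoint from it
    unpebble(start, half, ops)            # uncompute the midpoint

def unpebble(start: int, length: int, ops: list) -> None:
    # Reverse of pebble: removes the checkpoint at start + length.
    if length == 1:
        ops.append(("unsave", start + 1))
        return
    half = length // 2
    pebble(start, half, ops)              # recompute the midpoint first
    unpebble(start + half, half, ops)
    unpebble(start, half, ops)

def stats(n: int):
    # Simulate 2**n irreversible steps; report how many segment
    # simulations ran and the peak number of checkpoints held at once.
    ops = []
    pebble(0, 2 ** n, ops)
    held, peak = set(), 0
    for op, t in ops:
        if op == "save":
            held.add(t)
        else:
            held.discard(t)
        peak = max(peak, len(held))
    return len(ops), peak
```

For 2**n simulated steps the recursion performs 3**n segment simulations while never holding more than n + 1 checkpoints at once, which is the time-for-space exchange the abstract quantifies.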
CRYSTALLINE COMPUTATION
 CHAPTER 18 OF FEYNMAN AND COMPUTATION (A. HEY, ED.)
, 1999
Abstract

Cited by 28 (7 self)
Discrete lattice systems have had a long and productive history in physics. Examples range from exact theoretical models studied in statistical mechanics to approximate numerical treatments of continuum models. There has, however, been relatively little attention paid to exact lattice models which obey an invertible dynamics: from any state of the dynamical system you can infer the previous state. This kind of microscopic reversibility is an important property of all microscopic physical dynamics. Invertible lattice systems become even more physically realistic if we impose locality of interaction and exact conservation laws. In fact, some invertible and momentum conserving lattice dynamics—in which discrete particles hop between neighboring lattice sites at discrete times—accurately reproduce hydrodynamics in the macroscopic limit. These kinds of discrete systems not only provide an intriguing information-dynamics approach to modeling macroscopic physics, but they may also be supremely practical. Exactly the same properties that make these models physically realistic also make them efficiently realizable. Algorithms that incorporate constraints such as locality of interaction and invertibility can be run on microscopic physical hardware that shares these constraints. Such hardware can, in principle, achieve a higher density and rate of computation than any other kind of computer. Thus it is interesting to construct discrete lattice dynamics which are more physics-like both in order to capture more of the richness of physical dynamics in informational models, and in order to improve our ability to harness physics for computation. In this chapter, we discuss techniques for bringing discrete lattice dynamics closer to physics, and some of the interesting consequences of doing so.
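The invertibility property described here is easy to demonstrate in miniature. A hedged sketch in the style of a Margolus-neighborhood rule (an arbitrary block permutation on a toroidal grid, not the momentum-conserving dynamics the chapter studies): because each 2×2 block update is a bijection on the 16 possible block states, the previous state of the whole lattice can always be inferred.

```python
import random

# Any permutation of the 16 possible 2x2 block states gives an
# invertible rule; this one is arbitrary, chosen only for illustration.
PERM = list(range(16))
random.Random(42).shuffle(PERM)
INV = [0] * 16
for i, p in enumerate(PERM):
    INV[p] = i

def substep(grid, rule, offset):
    # Apply `rule` to every 2x2 block of the periodic grid, with the
    # block partition shifted by `offset` (the Margolus neighborhood).
    n = len(grid)
    out = [row[:] for row in grid]
    for r in range(offset, n + offset, 2):
        for c in range(offset, n + offset, 2):
            cells = [((r + dr) % n, (c + dc) % n)
                     for dr in (0, 1) for dc in (0, 1)]
            code = sum(grid[x][y] << k for k, (x, y) in enumerate(cells))
            for k, (x, y) in enumerate(cells):
                out[x][y] = (rule[code] >> k) & 1
    return out

def forward(grid):
    # One full step: update the even partition, then the odd one.
    return substep(substep(grid, PERM, 0), PERM, 1)

def backward(grid):
    # Invert by applying the inverse permutation in reverse order.
    return substep(substep(grid, INV, 1), INV, 0)
```

Running `backward(forward(g))` recovers `g` exactly for any even-sized grid, which is the "infer the previous state" property the abstract emphasizes.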
Physical versus Computational Complementarity I
, 1996
Abstract

Cited by 20 (19 self)
The dichotomy between endophysical/intrinsic and exophysical/extrinsic perception concerns the question of how a model — mathematical, logical, computational — universe is perceived from inside or from outside [71, 65, 66, 59, 60, 68, 67]. This distinction goes back in time at least to Archimedes, who is reported to have asked for a point outside the world from which one could move the earth. An exophysical perception is realized when the system is laid out and the experimenter peeps at the relevant features without changing the system. The information flows on a one-way road: from the system to the experimenter. An endophysical perception can be realized when the experimenter is part of the system under observation. In such a case one has a two-way informational flow; measurements and entities measured are interchangeable, and any attempt to distinguish between them ends up as a convention. The general conception dominating the sciences is that the physical universe is perceivable ...
A Structural Approach to Reversible Computation
 Theoretical Computer Science
, 2001
Abstract

Cited by 18 (3 self)
Reversibility is a key issue in the interface between computation and physics, and of growing importance as miniaturization progresses towards its physical limits. Most foundational work on reversible computing to date has focussed on simulations of low-level machine models. By contrast, we develop a more structural approach. We show how high-level functional programs can be mapped compositionally (i.e. in a syntax-directed fashion) into a simple class of automata which are immediately seen to be reversible. The size of the automaton is linear in the size of the functional term. In mathematical terms, we are building a concrete model of functional computation. This construction stems directly from ideas arising in Geometry of Interaction and Linear Logic—but can be understood without any knowledge of these topics. In fact, it serves as an excellent introduction to them. At the same time, an interesting logical delineation between reversible and irreversible forms of computation emerges from our analysis.
Time and Space Bounds for Reversible Simulation
, 2001
Abstract

Cited by 16 (1 self)
We prove a general upper bound on the tradeoff between time and space that suffices for the reversible simulation of irreversible computation. Previously, only simulations using exponential time or quadratic space were known. The tradeoff shows for the first time that we can simultaneously achieve subexponential time and subquadratic space. The boundary values are the exponential time with hardly any extra space required by the Lange-McKenzie-Tapp method and the (log 3)th power time with square space required by the Bennett method. We also give the first general lower bound on the extra storage space required by general reversible simulation. This lower bound is optimal in that it is achieved by some reversible simulations.

1 Introduction

Computer power has roughly doubled every 18 months for the last half-century (Moore's law). This increase in power is due primarily to the continuing miniaturization of the elements of which computers are made, resulting in more and more ele...
Notes on Landauer's principle, reversible computation, and Maxwell's Demon
 Studies in History and Philosophy of Modern Physics 2003
Abstract

Cited by 16 (0 self)
Landauer’s principle, often regarded as the basic principle of the thermodynamics of information processing, holds that any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase in non-information-bearing degrees of freedom of the information-processing apparatus or its environment. Conversely, it is generally accepted that any logically reversible transformation of information can in principle be accomplished by an appropriate physical mechanism operating in a thermodynamically reversible fashion. These notions have sometimes been criticized either as being false, or as being trivial and obvious, and therefore unhelpful for purposes such as explaining why Maxwell’s Demon cannot violate the second law of thermodynamics. Here I attempt to refute some of the arguments against Landauer’s principle, while arguing that although in a sense it is indeed a straightforward consequence or restatement of the Second Law, it still has considerable pedagogic and explanatory power, especially in the context of other influential ideas in nineteenth and twentieth century physics. Similar arguments have been given by Jeffrey Bub (2002).
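The entropy increase the principle requires translates into a minimum dissipation of kT ln 2 per erased bit; a quick back-of-the-envelope helper (the Boltzmann constant is the exact SI value; the function name is ours):

```python
import math

BOLTZMANN = 1.380649e-23  # J/K (exact, 2019 SI definition)

def landauer_bound(temp_kelvin: float) -> float:
    # Minimum energy (joules) dissipated when one bit is erased at
    # temperature T, per Landauer's principle: E >= k * T * ln 2.
    return BOLTZMANN * temp_kelvin * math.log(2)
```

At room temperature (300 K) this gives about 2.87e-21 J per bit, far below the switching energy of present-day logic, which is why the bound is of conceptual rather than engineering importance today.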