Results 1–10 of 30
Turing Machines, Transition Systems, and Interaction
Information and Computation, 2004
Abstract

Cited by 25 (4 self)
We present Persistent Turing Machines (PTMs), a new way of interpreting Turing-machine computation, one that is both interactive and persistent. A PTM repeatedly receives an input token from the environment, computes for a while, and then outputs the result. Moreover, it can "remember" its previous state (worktape contents) upon commencing a new computation. We show that the class of PTMs is isomorphic to a very general class of effective transition systems, thereby allowing one to view PTMs as transition systems "in disguise." The persistent stream language (PSL) of a PTM is a coinductively defined set of interaction streams: infinite sequences of pairs of the form (w_i, w_o), recording, for each interaction with the environment, the input token received by the PTM and the corresponding output token. We define an infinite hierarchy of successively finer equivalences for PTMs over finite interaction-stream prefixes and show that the limit of this hierarchy does not coincide with PSL-equivalence. The presence of this "gap" can be attributed to the fact that the transition systems corresponding to PTM computations naturally exhibit unbounded nondeterminism. We also consider amnesic PTMs, where each new computation begins with a blank work tape, and a corresponding notion of equivalence based on amnesic stream languages (ASLs). We show that the class of ASLs is strictly contained in the class of PSLs. Amnesic stream languages are representative of the classical view of Turing-machine computation. One may consequently conclude that, in a stream-based setting, the extension of the Turing-machine model with persistence is a non-trivial one, and provides a formal foundation for reasoning about programming concepts such as objects with static fields. We additional...
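As a loose illustration of the interaction described in this abstract (not code from the paper; the class name, the dictionary work tape, and the token-counting "computation" are all assumptions), a persistent machine can be sketched in Python as a stateful transducer whose work tape survives between interactions:

```python
class PersistentMachine:
    """Toy sketch of a PTM-style machine: the work tape persists
    across interactions, unlike an amnesic machine that resets it."""

    def __init__(self):
        self.worktape = {}  # survives between computations

    def step(self, w_in):
        # Toy "computation": count how many times each token was seen.
        self.worktape[w_in] = self.worktape.get(w_in, 0) + 1
        return (w_in, self.worktape[w_in])  # one (w_i, w_o) pair

m = PersistentMachine()
stream = [m.step(t) for t in ["a", "b", "a"]]
# stream == [("a", 1), ("b", 1), ("a", 2)]: the third interaction
# "remembers" the first, which an amnesic machine could not do.
```

An amnesic variant would construct a fresh `PersistentMachine` before every `step`, which is exactly why its stream languages form a strictly smaller class in the paper's setting.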
Turing's Ideas and Models of Computation
In Alan Turing: Life and Legacy of a Great, 2004
Abstract

Cited by 25 (14 self)
Summary. The theory of computation that we have inherited from the 1960s focuses on algorithmic computation as embodied in the Turing Machine, to the exclusion of other types of computation that Turing had considered. In this chapter we present new models of computation, inspired by Turing's ideas, that are more appropriate for today's interactive, networked, and embedded computing systems. These models represent super-Turing computation, going beyond Turing Machines and algorithms. We identify three principles underlying super-Turing computation (interaction with the world, infinity of resources, and evolution of system) and apply these principles in our discussion of the implications of super-Turing computation for the future of computer science.
How can Nature help us compute?
SOFSEM 2006: Theory and Practice of Computer Science – 32nd Conference on Current Trends in Theory and Practice of Computer Science, Merin, Czech Republic, January 21–27, 2006
Abstract

Cited by 14 (5 self)
Abstract. Ever since Alan Turing gave us a machine model of algorithmic computation, there have been questions about how widely it is applicable (some asked by Turing himself). Although the computer on our desk can be viewed in isolation as a Universal Turing Machine, there are many examples in nature of what looks like computation, but for which there is no well-understood model. In many areas, we have to come to terms with emergence not being clearly algorithmic. The positive side of this is the growth of new computational paradigms based on metaphors for natural phenomena, and the devising of very informative computer simulations obtained by copying nature. This talk is concerned with general questions such as:
• Can natural computation, in its various forms, provide us with genuinely new ways of computing?
• To what extent can natural processes be captured computationally?
• Is there a universal model underlying these new paradigms?
The Interactive Nature of Computing: Refuting the Strong Church-Turing Thesis, 2007
Abstract

Cited by 14 (0 self)
The classical view of computing positions computation as a closed-box transformation of inputs (rational numbers or finite strings) to outputs. According to the interactive view of computing, computation is an ongoing interactive process rather than a function-based transformation of an input to an output. Specifically, communication with the outside world happens during the computation, not before or after it. This approach radically changes our understanding of what computation is and how it is modeled. The acceptance of interaction as a new paradigm is hindered by the Strong Church-Turing Thesis (SCT), the widespread belief that Turing Machines (TMs) capture all computation, so models of computation more expressive than TMs are impossible. In this paper, we show that SCT reinterprets the original Church-Turing Thesis (CTT) in a way that Turing never intended; its commonly assumed equivalence to the original is a myth. We identify and analyze the historical reasons for the widespread belief in SCT. Only by accepting that it is false can we begin to adopt interaction as an alternative paradigm of computation. We present Persistent Turing Machines (PTMs), which extend TMs to capture sequential interaction. PTMs allow us to formulate the Sequential Interaction Thesis, going beyond the expressiveness of TMs and of the CTT. The paradigm shift to interaction provides an alternative understanding of the nature of computing that better reflects the services provided by today's computing technology.
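The closed-box versus interactive distinction drawn in this abstract can be made concrete with a toy Python contrast (the function names and the running-sum example are illustrative assumptions, not the paper's formalism):

```python
def closed_box(xs):
    # Classical view: all input is available up front,
    # and a single output is produced at the end.
    return sum(xs)

def interactive(env):
    # Interactive view: communication happens *during* the
    # computation; each intermediate output is visible to the
    # environment before the next input arrives.
    total = 0
    for x in env:
        total += x
        yield total  # output emitted mid-computation

whole = closed_box([1, 2, 3])            # 6
trace = list(interactive([1, 2, 3]))     # [1, 3, 6]
```

The generator's intermediate outputs are what a function-based model cannot express: an environment driving `interactive` could choose its next input based on the outputs seen so far.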
Definability as hypercomputational effect
 Applied Mathematics and Computation
Abstract

Cited by 7 (6 self)
The classical simulation of physical processes using standard models of computation is fraught with problems. On the other hand, attempts at modelling real-world computation with the aim of isolating its hypercomputational content have struggled to convince. We argue that a better basic understanding can be achieved through computability-theoretic deconstruction of those physical phenomena most resistant to classical simulation. From this we may be able to better assess whether the hypercomputational enterprise is proleptic computer science, or of mainly philosophical interest.
The Emergent Computational Potential of Evolving Artificial Living Systems, 2002
Abstract

Cited by 6 (0 self)
The computational potential of artificial living systems can be studied without knowing the algorithms that govern their behavior. Modeling single organisms by means of so-called cognitive transducers, we will estimate the computational power of AL systems by viewing them as conglomerates of such organisms. We describe a scenario in which an artificial living (AL) system is involved in a potentially infinite, unpredictable interaction with an active or passive environment, to which it can react by learning and adjusting its behaviour. By making use of sequences of cognitive transducers one can also model the evolution of AL systems caused by 'architectural' changes. Among the examples are 'communities of agents', i.e. communities of mobile, interactive cognitive transducers.
The Turing O-Machine and the DIME Network Architecture: Injecting the Architectural Resiliency into Distributed Computing
Abstract

Cited by 1 (1 self)
Turing's o-machine, discussed in his PhD thesis, can perform all of the usual operations of a Turing machine and, in addition, when it is in a certain internal state, can also query an oracle for an answer to a specific question that dictates its further evolution. In his thesis, Turing said 'We shall not go any further into the nature of this oracle apart from saying that it cannot be a machine.' There is a host of literature discussing the role of the oracle in AI, modeling the brain, computing, and hypercomputing machines. In this paper, we take a broader view of the oracle machine inspired by the genetic computing model of cellular organisms and the self-organizing fractal theory. We describe a specific software architecture implementation that circumvents the halting and undecidability problems in a process workflow computation to introduce the architectural resiliency found in cellular organisms into distributed computing machines. A DIME (Distributed Intelligent Computing Element), recently introduced as the building block of the DIME computing model, exploits the concepts from Turing's oracle machine and extends them to implement a recursively managed distributed computing network, which can be viewed as an interconnected group of such specialized oracle machines, referred to as a DIME network. The DIME network architecture provides architectural resiliency through auto-failover, auto-scaling, live-migration, and end-to-end transaction security assurance in a distributed system. We demonstrate these characteristics using prototypes without the complexity introduced by hypervisors, virtual machines, and other layers of ad-hoc management software in today's distributed computing environments.
Lineages of Automata, 2004
Abstract

Cited by 1 (0 self)
While in the series of previous papers we designed and studied a number of models of evolving interactive systems, in the present paper we concentrate on an in-depth study of a single model that turned out to be a distinguished model of evolving interactive computing: lineages of automata. A lineage consists of a sequence of interactive finite automata, with a mechanism for passing information from each automaton to its immediate successor. In this paper, we develop the theory of lineages. We give some means to construct new lineages out of given ones and prove several properties of translations that are realized by lineages. Lineages enable a definition of a suitable complexity measure for evolving systems. We show several complexity results, including a hierarchy result. Lineages are equivalent to interactive Turing machines with advice, but they stand out because they demonstrate the aspect of evolution explicitly.
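The hand-off of information from each automaton to its successor can be sketched in a few lines of Python (the helper name `run_lineage` and the toy transition functions are assumptions for illustration, not the paper's construction):

```python
def run_lineage(stages, input_chunks):
    """Toy lineage: run each finite transducer on its chunk of the
    input stream, then pass its final state to its immediate successor."""
    state = 0
    outputs = []
    for stage, chunk in zip(stages, input_chunks):
        for token in chunk:
            state, out = stage(state, token)
            outputs.append(out)
        # `state` is the information handed to the next automaton
    return outputs

accumulate = lambda s, t: (s + t, s + t)  # stage 1: running sum
scale = lambda s, t: (s, s * t)           # stage 2: multiply by inherited sum

result = run_lineage([accumulate, scale], [[1, 2], [3]])
# result == [1, 3, 9]: the second automaton's behaviour depends on
# state it inherited from the first, making the evolution step explicit.
```

Each stage here is a fixed transducer; the "evolution" lives entirely in the replacement of one stage by the next plus the inherited state, which is the aspect the abstract says lineages make explicit.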
The Emergent Computational Potential of Evolving Artificial Living Systems
Abstract

Cited by 1 (0 self)
The computational potential of artificial living systems can be studied without knowing the algorithms that govern their behavior. Modeling single organisms by means of so-called cognitive transducers, we will estimate the computational power of AL systems by viewing them as conglomerates of such organisms. We describe a scenario in which an artificial living (AL) system is involved in a potentially infinite, unpredictable interaction with an active or passive environment, to which it can react by learning and adjusting its behaviour. By making use of sequences of cognitive transducers one can also model the evolution of AL systems caused by 'architectural' changes. Among the examples are 'communities of agents', i.e. communities of mobile, interactive cognitive transducers. Most AL systems show the emergence of a computational power that is not present at the level of the individual organisms. Indeed, in all but trivial cases the resulting systems possess a super-Turing computing power. This means that the systems cannot be simulated by traditional computational models like Turing machines and may in principle solve non-computable tasks. The results are derived using non-uniform complexity theory. "What we can do is understand some of the general principles of how living things work, and why they exist at all." From: R. Dawkins, The Blind Watchmaker, 1986.