### Physical Systems for the Solution of Hard Computational Problems

2003

"... We start from Landauer's realization that "information is physical", i.e. that computation cannot be disentangled from the physical system used to perform it, and ask what the capabilities of physical systems really are. In particular, is it possible to design a physical system which is able to solv ..."

Abstract
- Add to MetaCart

We start from Landauer's realization that "information is physical", i.e. that computation cannot be disentangled from the physical system used to perform it, and ask what the capabilities of physical systems really are. In particular, is it possible to design a physical system which is able to solve hard (i.e. NP-complete) problems more efficiently than conventional computers? Chaotic physical systems (such as the weather) are hard to predict or simulate, but we find that they are also hard to control. The requirement of control turns out to pin down the non-conventional options to either neural networks or quantum computers. Alternatively, we can give up the possibility of control in favour of a system which is basically chaotic, but is able to settle at a solution if it reaches one. However, systems of this type appear inevitably to perform a type of stochastic local search.

### Categories and Subject Descriptors: F.1.1 [Computation by Abstract Devices]: Models of

"... We present a redevelopment of the theory of real-valued recursive functions that was introduced by C. Moore in 1996 by analogy with the standard formulation of the integer-valued recursive functions. While his work opened a new line of research on analog computation, the original paper contained som ..."

Abstract
- Add to MetaCart

We present a redevelopment of the theory of real-valued recursive functions that was introduced by C. Moore in 1996 by analogy with the standard formulation of the integer-valued recursive functions. While his work opened a new line of research on analog computation, the original paper contained some technical inaccuracies. We discuss possible attempts to remove the ambiguity in the behaviour of the operators on partial functions, with a focus on his “primitive recursive” functions generated by the differential recursion operator that solves initial value problems. Under a reasonable reformulation, the functions in this class are shown to be analytic and computable in a strong sense in Computable Analysis. Despite this well-behavedness, the class turns out to be too big to have the originally purported relation to differentially algebraic functions, and hence to C. E. Shannon’s model of analog computation.
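The differential recursion operator mentioned in the abstract takes functions f and g and produces the solution h of the initial value problem h(x, 0) = f(x), ∂h/∂t(x, t) = g(x, t, h(x, t)). As a rough illustration only (the function names and the forward-Euler discretization below are assumptions for the sketch, not part of Moore's formal model), the operator can be approximated numerically like this:

```python
import math

def differential_recursion(f, g, dt=1e-4):
    """Sketch of a differential recursion operator: given f and g,
    return h with h(x, 0) = f(x) and dh/dt = g(x, t, h(x, t)).
    The IVP is solved by forward Euler, purely for illustration."""
    def h(x, t):
        y = f(x)                      # initial condition h(x, 0) = f(x)
        steps = round(t / dt)
        for i in range(steps):
            y += dt * g(x, i * dt, y) # one Euler step of dh/dt = g
        return y
    return h

# Example: f(x) = x with g(x, t, y) = y yields h(x, t) = x * e^t,
# the exponential as a "differentially recursive" function.
h = differential_recursion(lambda x: x, lambda x, t, y: y)
print(abs(h(1.0, 1.0) - math.e) < 1e-3)
```

The example shows why the operator is the real-valued analogue of primitive recursion: iteration over the natural numbers is replaced by integration over continuous time.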

### Turing Machines Can Be Efficiently Simulated by the General Purpose Analog Computer

"... Abstract. The Church-Turing thesis states that any sufficiently powerful computational model which captures the notion of algorithm is computationally equivalent to the Turing machine. This equivalence usually holds both at a computability level and at a computational complexity level modulo polynom ..."

Abstract
- Add to MetaCart

The Church-Turing thesis states that any sufficiently powerful computational model which captures the notion of algorithm is computationally equivalent to the Turing machine. This equivalence usually holds both at a computability level and at a computational complexity level, modulo polynomial reductions. However, the situation is less clear for models of computation using real numbers, and no analogue of the Church-Turing thesis exists for this case. Recently it was shown that some models of computation with real numbers are equivalent from a computability perspective. In particular, it was shown that Shannon’s General Purpose Analog Computer (GPAC) is equivalent to Computable Analysis. However, little is known about what happens at a computational complexity level. In this paper we shed some light on the connections between these two models at the computational complexity level by showing that, modulo polynomial reductions, computations of Turing machines can be simulated by GPACs without using more (space) resources than the original Turing computation, as long as we restrict ourselves to bounded computations. In other words, computations done by the GPAC are as space-efficient as computations done in the context of Computable Analysis.

### Computability of analogue networks

"... We define a general concept of a network of analogue modules connected by channels, processing data from a metric space A, and operating with respect to a global continuous clock. The inputs and outputs of the network are continuous streams u: → A, and the input-output behaviour of the network with ..."

Abstract
- Add to MetaCart

We define a general concept of a network of analogue modules connected by channels, processing data from a metric space A, and operating with respect to a global continuous clock T. The inputs and outputs of the network are continuous streams u: T → A, and the input-output behaviour of the network with system parameters from A is modelled by a function Φ: C[T,A]^p × A^r → C[T,A]^q (p, q > 0, r ≥ 0), where C[T,A] is the set of all continuous streams equipped with the compact-open topology. We give an equational specification of the network, and a semantics which involves solving a fixed point equation over C[T,A] using a contraction principle. We analyse two case studies involving mechanical systems. Finally, we introduce a custom-made concrete computation theory over C[T,A] and show that if the modules are concretely computable then so is the function Φ. Key words and phrases: analogue computing, analogue network, concrete computation, continuous time streams, compact-open topology
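The fixed-point semantics sketched in the abstract can be made concrete with a small numerical example: Picard iteration of a contractive stream transformer on a discretized clock. The module, names, and trapezoidal integration below are illustrative assumptions, not the paper's formalism.

```python
import numpy as np

def picard_fixed_point(F, v0, iters=30):
    """Iterate a contraction F on (discretized) streams to approach
    the fixed point v = F(v), as in a Banach-style contraction argument."""
    v = v0
    for _ in range(iters):
        v = F(v)
    return v

T = np.linspace(0.0, 1.0, 1001)   # the global continuous clock, discretized
dt = T[1] - T[0]

def integrator(u):
    """A toy analogue module with feedback: v(t) = 1 + integral_0^t u(s) ds,
    using trapezoidal quadrature on the sample grid."""
    return 1.0 + np.concatenate(([0.0], np.cumsum(0.5 * (u[1:] + u[:-1]) * dt)))

# Solve the fixed point equation v = integrator(v); the exact solution
# of v(t) = 1 + integral_0^t v(s) ds is v(t) = e^t.
v = picard_fixed_point(integrator, np.zeros_like(T))
print(np.max(np.abs(v - np.exp(T))))
```

On a bounded time interval the integrator is a contraction in the relevant metric, so the iterates converge; the residual error here comes only from the discretization.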

### over Metric Algebras

"... Abstract. We define a general concept of a network of analogue modules connected by channels, processing data from a metric space A, and operating with respect to a global continuous clock T. The inputs and outputs of the network are continuous streams u: T → A, and the input-output behaviour of the ..."

Abstract
- Add to MetaCart

We define a general concept of a network of analogue modules connected by channels, processing data from a metric space A, and operating with respect to a global continuous clock T. The inputs and outputs of the network are continuous streams u: T → A, and the input-output behaviour of the network with system parameters from A is modelled by a function Φ: C[T,A]^p × A^r → C[T,A]^q (p, q > 0, r ≥ 0), where C[T,A] is the set of all continuous streams equipped with the compact-open topology. We give an equational specification of the network, and a semantics which involves solving a fixed point equation over C[T,A] using a contraction principle. We analyse a case study involving a mechanical system. Finally, we introduce a custom-made concrete computation theory over C[T,A] and show that if the modules are concretely computable then so is the function Φ.

### unknown title

"... The history of control is entwined with the history of analog computing. Many of the tools, technologies, and theories of control were enabled by, or are directly descended from, mechanical and electronic analog computers. As a tool, the MIT differential analyzer [1] was more than a general-purpose, ..."

Abstract
- Add to MetaCart

The history of control is entwined with the history of analog computing. Many of the tools, technologies, and theories of control were enabled by, or are directly descended from, mechanical and electronic analog computers. As a tool, the MIT differential analyzer [1] was more than a general-purpose differential-equation solver. It was an educational tool and a research touchstone. Vannevar Bush not only sprouted the seeds of analog simulation and the study of servomechanisms in his laboratory but also nurtured a family of early control researchers, including Harold Hazen, who coined the word “servomechanism” [2], Gordon Brown [3], and Samuel Caldwell [4]. Bush’s computer was a fountainhead of control and computing [5].

### unknown title

2010

"... Traditional information theory and communication theory assume that decoders are noiseless and operate without transient or permanent faults. Decoders are also traditionally assumed to be unconstrained in physical resources like materiel, memory, and energy. This thesis studies how constraining reli ..."

Abstract
- Add to MetaCart

Traditional information theory and communication theory assume that decoders are noiseless and operate without transient or permanent faults. Decoders are also traditionally assumed to be unconstrained in physical resources like materiel, memory, and energy. This thesis studies how constraining reliability and resources in the decoder limits the performance of communication systems. Five communication problems are investigated. Broadly speaking these are communication using decoders that are wiring cost-limited, that are memory-limited, that are noisy, that fail catastrophically,